article | abstract
---|---
kidneys are the core organs in the urinary system .their principal functions are to remove metabolic waste from the blood and to regulate blood salt and water levels . through the regulation of salt and water , kidneys also play an important role in the regulation of arterial blood pressure . to perform these functions , each kidney adjusts the composition of the urine it produces .each kidney has an outer layer , called the _cortex _ , and an inner layer , known as the _ medulla _ .much of the space in these regions is filled by the functional units of the kidney , which are termed _depending on the organism , each kidney contains thousands to millions of nephrons .nephrons are responsible for the production of urine .kidneys contain two types of nephrons , cortical ( short ) and juxtamedullary ( long ) nephrons , each of which is surrounded by a net of capillaries .cortical nephrons remain almost entirely in the cortex , while juxtamedullary nephrons extent deep into the medulla .each nephron consists of a _ glomerulus _ and a _renal tubule_. further , each renal tubule consists of various permeable or impermeable segments .additionally , each nephron has access to a collecting duct for removal of the produced urine .kidneys are connected with the rest of the body by two blood vessels , the renal artery , which carries blood into the kidney , and the renal vein , which carries blood out of the kidney to recirculate the body .in addition , urine is excreted from the body through the ureter .blood coming from the renal artery is delivered to the afferent arterioles .a steady flow of blood coming from the afferent arteriole of a nephron is filtered in the glomerulus and flows into the renal tubule .the blood flow is maintained constant in each glomerulus by the constriction or relaxation of its afferent arteriole .nearly all of the fluid that passes through the renal tubules is reabsorbed and only a minor fraction results in urine .fluid is reabsorbed from the renal tubules in two stages : first by the renal interstitium and then by the surrounding capillaries .the processes underlying reabsorption are driven by the pressures in the interstitial spaces .although the pressures in the renal interstitium are important determinants of kidney function , there is a lack of investigations that look at the factors affecting them . herewe develop a computational model of the rat kidney , for which several experimental data exist , and use it to study the relationship between arterial blood pressure and interstitial fluid pressure .in addition , we study how tissue flexibility affects this relationship and how the model predictions are affected by the uncertainty of key model parameters .we model the uncertain parameters as random variables and quantify their impact using monte carlo sampling and global sensitivity analysis .. , scaledwidth=75.0% ] the model consists of a collection of compartments that follow the characteristic anatomy of the kidneys of mammals .the compartments fall in three categories : ( i ) _ regions _ that model the cortical and medullary interstitial spaces , ( ii ) _ pipes _ that model the blood vessels and renal tubules , and ( iii ) _ spheres _ that model the glomeruli .a schematic diagram depicting the arrangement of the compartments ( 135 ) is shown on figure [ nephronmodel ] and a summary is given in table [ tb:1 ] . 
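As a minimal sketch of how this compartmental bookkeeping could be organized in code (Python here; the study itself reports using MATLAB, and none of the names below come from the paper), each compartment carries its type, its multiplicity in the whole kidney, its connecting nodes (introduced in the next paragraph), and its fractional reabsorption coefficient, mirroring the columns of table [ tb:1 ]:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Compartment:
    """One model compartment: a region, a pipe, or a sphere."""
    index: int                        # 1..35, as in table [tb:1]
    name: str
    kind: str                         # "region", "pipe", or "sphere"
    number: int                       # copies of this compartment in the whole kidney
    nodes: Optional[Tuple[str, ...]]  # connecting nodes, e.g. ("c1", "c2")
    frac_coeff: float = 0.0           # fraction of inflow crossing the wall (0 if impermeable)

# a few illustrative entries; multiplicities and node labels follow table [tb:1]
compartments = [
    Compartment(1, "cortical interstitium", "region", 1, None),
    Compartment(3, "medullary artery", "pipe", 8, ("c1", "c2")),
    Compartment(6, "afferent arteriole (short nephron)", "pipe", 20736, ("c4", "c5")),
    Compartment(21, "glomerulus (short nephron)", "sphere", 20736, ("c19",)),
]
```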
to facilitate the description of the model equations below, we use a set of nodes ( c1c32 ) that mark the connections of the compartments ; these nodes are also included in figure [ nephronmodel ] and table [ tb:1 ] . briefly speaking , blood enters through the renal artery ( node c1 ) and splits into a number of large arteries ( compartments 35 ) that drain to the afferent arterioles ( compartments 6 and 12 ) .each afferent arteriole supplies one glomerulus ( compartments 21 and 27 ) . in the glomeruli, blood is divided between the efferent arterioles ( compartments 8 and 14 ) and the renal tubules ( compartments 2226 and 2832 ) . leaving the efferent arterioles, blood passes through the cortical microcirculation ( compartments 9 and 10 ) or the medullary microcirculation ( compartments 1518 ) , before it rejoins in large veins ( compartments 11 , 19 , 20 ) and leaves through the renal vein ( node c18 ) .the model represents short ( compartments 2126 ) and long nephrons ( compartments 2732 ) that both drain in the same collecting duct ( compartments 3335 ) , which , in turn , drains to the ureter ( node c32 ) .the model accounts for the spacial as well as the anatomical differences between the two nephrons that are developed in the mammalian kidney .for example , the model accounts for differences in the location within the cortex or medulla , in the pre- and post - glomerular vascular supply , dimensions , reabsorptive capacity , etc .blood vessels and renal tubules are modeled as distensible pipes .glomeruli are modeled as distensible spheres .fluid flows through a compartment at a volumetric rate of ( figure [ pic : cylinder ] ) . following the physiology ,some of the pipes are considered permeable while others impermeable . for simplicity , we assume that the only pipes modeling blood vessels that are permeable are those that model capillaries . the flow that passes through the walls of a permeable pipe is denoted by . according to the common convention , denotes fluid leaving the pipe and fluid entering the pipe . due to conservation of mass ,the flow that leaves from an impermeable pipe is the same as the flow that enters , thus : while the flow that leaves a permeable pipe is given by : we assume that the flow crossing through the walls of renal tubules and glomerular capillaries are constant fractions of the corresponding inflow : where is the fraction of fluid that crosses through the pipe s wall . for the fractional coefficients we use the values listed on table [ tb:1 ] , which are chosen such that the model predicts flows similar to the antidiuretic rat model in .flow through the walls of the cortical and medullary capillaries are computed by the starling equation where /mmhg / min and /mmhg / min are the filtration coefficients of the cortical and medullary capillaries and , , , and are the oncotic pressures and , , , and are the hydrostatic pressures in the associated compartments .the oncotic pressures are obtained by an approximation of the landis - pappenheimer relation where mmhg / gr and mmhg as used in . 
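The transcapillary exchange just described can be sketched as follows. Because the filtration coefficients and the two constants of the paper's Landis-Pappenheimer approximation are not reproduced in this text, the sketch substitutes the classical cubic Landis-Pappenheimer fit as a stand-in and uses placeholder numbers throughout; only the structure (a Starling-type flux driven by hydrostatic minus oncotic pressure differences) follows the description above.

```python
import numpy as np

def oncotic_pressure(c_protein_g_dl):
    """Classical Landis-Pappenheimer fit (mmHg, protein concentration in g/dl).
    Stand-in for the paper's two-parameter approximation, whose constants
    are not given in this text."""
    c = np.asarray(c_protein_g_dl, dtype=float)
    return 2.1 * c + 0.16 * c**2 + 0.009 * c**3

def starling_flux(k_f, p_cap, p_int, pi_cap, pi_int):
    """Volumetric flow across a capillary wall (Starling equation);
    positive values mean filtration out of the capillary."""
    return k_f * ((p_cap - p_int) - (pi_cap - pi_int))

# illustrative numbers only, not the model's calibrated values
pi_c = oncotic_pressure(7.0)   # plasma at roughly 7 g/dl (placeholder)
pi_i = oncotic_pressure(2.0)   # interstitium, lower protein content (placeholder)
q_wall = starling_flux(k_f=1e-3, p_cap=12.0, p_int=6.0, pi_cap=pi_c, pi_int=pi_i)
```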
in equation, denotes the concentration of protein in the compartment .we assume a fixed protein concentration of the blood entering through the renal artery of gr / dl and compute concentrations throughout the blood vessels ( compartments 39 and 1216 ) by taking into consideration conservation of mass where and denote the inflow and outflow concentrations of the compartment .the oncotic pressures and at equations and are computed based on the averages in each pipe and glomerulus , the internal pressure is denoted and the external . for pipes , is computed by the average of the pressures at the associated inflow and outflow nodes ( figure [ nephronmodel ] ) . for the glomeruli, internal pressure equals to the pressure of the associated node ( figure [ nephronmodel ] and table [ tb:1 ] ) . for all pipes and glomerulus compartments , the external pressures equal the internal pressure of the surrounding compartment , which , in the case of the cortical and medullary regions , are denoted by and , respectively .exceptions to this are the arcuate arteries and veins ( compartments 4 and 19 , respectively ) , which anatomically are located between the cortex and the medulla , so we compute for these compartments by the average of and .the volumes of the compartments , besides the regions and the afferent arterioles ( compartments 1 , 2 and 6 , 12 ) , depend _ passively _ on the pressure difference that is developed across their walls : where , , and are constants .in particular , denotes a reference volume , and denotes the pressure difference across the walls of the compartment when equals .the parameters are a measure of the distensibility of the compartments .a large value indicates a compartment that is very distensible , while a low value indicates a more rigid compartment . in the model, we use such that an increase in or a decrease in leads to an expansion of the volume , and _vise versa_. for a model pipe , let and denote the pressures at its inflow and outflow nodes , respectively .these pressures are related by a modified form of the poiseuille law : where is the viscosity of the flowing fluid , is the length of the pipe , and is its radius . in the model , we assume and to be constants , while we compute based on the compartment s volume ( i.e. ) .equation reduces to the common poiseuille equation for the impermeable pipes , while for the permeable pipes , it is assumed that is linearly distributed along the length of the pipe with a value of 0 at the end of the pipe .pressure at node c1 equals the arterial blood pressure , which in our model is a free variable .pressures at nodes c18 and c32 are kept constant at 4 mmhg and 2 mmhg , respectively , in agreement with the values of venous and ureter pressures used in previous modeling studies .the afferent arterioles are unique vessels in the sense that they _actively _ adjust radii such that blood flows through them at a fixed rate .in the model , we assume that blood flow in the afferent arterioles that feed the short and long nephrons ( i.e. and , respectively ) are kept fixed at 280 nl / min and 336 nl / min , respectively , as in previous modeling studies of renal hemodynamics , for example . 
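A sketch of these relations is given below. The equations themselves are not reproduced in this text, so the pressure-volume law is written in the simplest linear form consistent with the description (and with the later remark that the model assumes linear pressure-volume relationships); the viscosity, lengths, and units are placeholders. The last function inverts the Poiseuille law for the actively regulated afferent arterioles discussed next, which carry a fixed flow.

```python
import numpy as np

def poiseuille_dp(q, mu, length, radius):
    """Pressure drop along an impermeable cylindrical pipe (Poiseuille law)."""
    return 8.0 * mu * length * q / (np.pi * radius**4)

def radius_from_volume(volume, length):
    """Pipe lengths are fixed in the model, so the current radius follows
    from the distended volume: V = pi * r**2 * L."""
    return np.sqrt(volume / (np.pi * length))

def passive_volume(dp, v_ref, dp_ref, lam):
    """One linear pressure-volume law consistent with the description:
    the compartment expands when the transmural pressure difference dp
    exceeds its reference value dp_ref, with distensibility lam."""
    return v_ref * (1.0 + lam * (dp - dp_ref))

def afferent_radius(q_fixed, mu, length, dp):
    """Radius that carries a fixed flow q_fixed under a pressure drop dp,
    from inverting the Poiseuille law; a larger dp gives a smaller radius,
    i.e. the arteriole constricts as arterial pressure rises."""
    return (8.0 * mu * length * q_fixed / (np.pi * dp)) ** 0.25
```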
we compute the radii of the afferent arterioles by the poiseuille equation , which yields note that equations and imply that whenever the pressure difference along the afferent arterioles and increases , the radii and decrease .this , in turn , implies that whenever the arterial blood pressure increases , the afferent arterioles constrict , and thus the total volumes occupied by them , and are reduced .the cortical and medullary interstitial spaces , i.e. compartments 1 and 2 , lie outside of the compartments 335 and therefore must be calculated separately using a different set of equations .we obtain the first of such relationships by assuming that the net accumulation of interstitial fluid within the cortex and medulla is zero . that is where the flows are weighted based on the total number of the compartments contained in the full model ( table [ tb:1 ] ) .equations and require the oncotic pressures and , which in turn require the cortical and medullary protein concentrations and for equation .protein concentrations in the cortical and medullary regions are computed assuming that the total mass of protein contained in each region , and , respectively , remains constant .thus , we use the values mgr and mgr , which are computed such that the resulting model predicts reference pressures in the renal cortex and medulla of mmhg , similar to those estimated experimentally .cortical and medullary interstitial volumes and are assumed to change proportionally ; thus , where is the proportionality constant .the combined volume of the interstitial regions is calculated based on the total volume of the kidney according to where and are found by summing the total volumes of the pipe and glomerulus compartments contained within each region .finally , the total volume of the kidney is calculated by where in this case refers to the pressure external to the kidney , which is set to 0 mmhg .equation assumes that the total volume of the kidney is determined by the distensibility of the renal capsule , which is stretched by the difference of the pressures developed across it , i.e. . values for the model parameters are given in table [ tb:3 ] .these values are chosen such that at a reference arterial blood pressure = 100 mmhg the model predicts pressures and volumes that are in good agreement with either direct experimental measurements or previous modeling studies .the pressure - volume relationships used in the model , equations and , require values for the parameters .we assume that ( i ) scale proportionally to the reference volumes and ( ii ) the coefficients depend only on the histology of the associated compartment .that is , we group the compartments as follows : * group g1 : renal capsule ( ) and papillary collecting duct ( ) * group g2 : glomeruli ( and ) * group g3 : renal tubules ( ) and proximal collecting ducts ( ) * group g4 : pre - afferent arteriole blood vessels ( ) * group g5 : post - afferent arteriole blood vessels ( and ) then we assign the same flexibility value to all members of each group ( table [ tb:3 ] ) . with this formulation , the model compartments in each histological group experience the same fractional change in volume whenever they are challenged by the same pressure gradient . the available experimental data do not permit an accurate estimate of the values of the flexibility parameters .for this reason , we treat the flexibilities of the five groups as _ independent random variables_. 
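Because every member of a histological group shares the same flexibility value, the linear pressure-volume law sketched above implies that all members of a group undergo the same fractional volume change for a given pressure increment, independent of their reference volumes. A small sketch of that bookkeeping follows (group labels from the text; the numerical values and the example group assignments are placeholders, not the estimates of table [ tb:3 ]):

```python
# one shared flexibility value per histological group (placeholder numbers)
group_lambda = {"g1": 0.002, "g2": 0.010, "g3": 0.010, "g4": 0.005, "g5": 0.020}

# a few example compartment-to-group assignments following the grouping above
compartment_group = {
    "renal capsule": "g1",
    "glomerulus_sn": "g2",
    "proximal tubule_sn": "g3",
    "cortical collecting duct": "g3",
    "arcuate artery": "g4",
    "cortical capillary": "g5",
}

def fractional_volume_change(name, dp_increment):
    """With V = V_ref * (1 + lam * (dp - dp_ref)), the fractional change
    dV / V_ref = lam * dp_increment does not depend on V_ref."""
    return group_lambda[compartment_group[name]] * dp_increment

# two members of group g3 respond identically to the same pressure increment
assert fractional_volume_change("proximal tubule_sn", 5.0) == \
       fractional_volume_change("cortical collecting duct", 5.0)
```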
to facilitate the comparison among the different groups , we set where are constants , and are random variables configured to have mode 1 .we estimate the values of empirically based on _ ex vivo _ measurements reported in ( table [ tb:3 ] ) . of the histological groups g1g5 used in this study . ] for each simulation , are drawn from the log - normal distribution ( figure [ fig : lambda_pdfs ] ) , which is chosen such that ( i ) attain non - negative values , ( ii ) arbitrarily large values of are allowed , and ( iii ) low values are more frequent than large ones .we choose the latter condition assuming that the experimental procedures ( anesthesia , renal decapsulation , tissue isolation , etc . )utilized in likely increase rather than decrease tissue flexibility , thus our computed likely overestimate rather than underestimate .finally , we configure the log - normal distributions such that and have log - standard deviation of 1.1 , and , , and have log - standard deviation of 1.25 ( figure [ fig : lambda_pdfs ] ) . according to our experience , such configuration reflects the degree of the uncertainty in our estimated values of , for which we consider , , and less accurately estimated than and . for the sensitivity analysis of the model described in the previous sections ,we adopt a _ variance - based method _ which is best suited for non - linear models .let denote a generic model , where is an output value and are some random inputs ( in our case those represent the uncertain parameters ) .for a factor , the first- and total - order sensitivity indices are given by respectively , .in the equations above , and denote mean value and variance , respectively . in , first the mean of computed by fixing the factor to some value , and then the variance of the mean values is computed over all possible . in ,first the mean value is computed by fixing all factors except ( which is denoted by ) , and then the variance of the mean values is computed over all possible . according to the above definitions ,the first - order index indicates the fraction by which the variance of will be reduced if only the value of the factor is certainly specified .similar , the total - order index indicates the fraction of the variance of that will be left if all factors besides are certainly specified .we compute both indices , because generally for a non - linear model the factors are expected to interact in a non - additive way , and therefore is expected to be larger than .the difference characterizes the extent of the interactions with the other factors that is involved with . to better characterize the contribution of the individual factors of equation , in the variance of and , we calculate their first- and total - order sensitivity indices given in and .we compute the indices according to the method proposed by saltelli , which is computationally less demanding than a straightforward application of the formulas in and . 
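Before turning to the details of the Saltelli construction (next paragraph and the sketch that follows it), the sampling of the flexibility factors described above can be written compactly. For a log-normal variable with underlying normal parameters (mu, sigma), the mode is exp(mu - sigma^2), so requiring a mode of 1 fixes mu = sigma^2. The multiplicative decomposition of each flexibility into an empirical constant times a mode-1 random factor, and the assignment of the two quoted log-standard deviations to specific groups, are stated here as assumptions since the corresponding expressions are elided in this text.

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_mode_one(sigma_log, size):
    """Log-normal samples with mode 1: if ln(x) ~ N(mu, sigma^2), the mode
    is exp(mu - sigma^2), so mu = sigma^2 forces the mode to 1."""
    return rng.lognormal(mean=sigma_log**2, sigma=sigma_log, size=size)

n = 100_000
xi_narrow = lognormal_mode_one(1.10, n)   # groups quoted with log-std 1.1
xi_wide   = lognormal_mode_one(1.25, n)   # groups quoted with log-std 1.25

# flexibility = (empirical ex vivo estimate) * (mode-1 random factor);
# 1.0 below is a placeholder, the paper's estimates are not reproduced here
Lambda_g4 = 1.0
lambda_g4_samples = Lambda_g4 * xi_wide
```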
briefly , according to the saltelli method we form two input matrices : by generating monte carlo samples and for the factors .subsequently , for each factor , we forme a matrix .each is formed by the columns of , except the column that corresponds to the factor , which is taken from .for instance , is given by : we use each row of the matrices , , and to solve the model equations at mmhg and combine the solutions in the vectors : where corresponds to the pressure in the cortical region , and to the pressure in the medullary region .the first- and total - order sensitivity indices are then computed by respectively . in the above equations ( [ eq:30 ] ) - ( [ eq:31 ] ), denotes the sample variance . for further details on the method ,see . for the numerical solution, we combine the model equations into a system of 69 coupled non - linear equations . given a value for the arterial blood pressure and a choice for the flexibility parameters , the resulting system is solved to yield the values for the pressures at the interstitial regions and , the pressures at the model nodes , and the volumes of the compartments . to obtain solutions , we implement the system in `matlab ` and use the standard root - finding function ( ` fsolve ` ) .this function computes solutions to the model equations iteratively by starting from a given initial approximation .for the initial approximation we use the reference values from literature ( table [ tb:3 ] ) .note that by the construction of the model , the solution at reference can be obtained trivially , and thus no root - finding is necessary for this step .in the first set of simulations , we investigate how the pressures in the interstitial regions and are affected by the arterial blood pressure for selected choices of the flexibility parameters when varies in the range 80180 mmhg .in particular , we make the following choices for the flexibility parameters : * case 1 : * case 2 : * case 3 : * case 4 : , , , , figure [ fig : selected ] shows key solution values .case 1 corresponds to a kidney with rigid compartments . in this case, pressure does not affect the volume of the compartments except of the two afferent arterioles and .for example , at elevated , the pressure differences along the afferent arterioles and increase . as a result ,the arterioles constrict in order to maintain constant blood flow ( equations and ) . given that total kidney volume not change as given by equation , the reduction in afferent arteriole volume increases the volume of the interstitial regions and given by equation . in turn ,increases in interstitial volumes reduce the protein concentrations and by equations and the oncotic pressures and that promote uptake and of interstitial fluid by equations . however , due to tubular reabsorption , the flow of fluid into the interstitial spaces is kept constant ( equations and ) .thus , in order to maintain a constant uptake and avoid accumulation of interstitial fluid , and increase . _ vise versa _ , a decrease in has the opposite effects and results in a decrease of and . 
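The estimator itself is standard, so a compact sketch can be given even though the paper's exact formulas are not reproduced above. The kidney model solve (the 69-equation system evaluated at elevated arterial pressure) is replaced here by a cheap stand-in function purely to keep the example runnable; the matrix bookkeeping and the Saltelli/Jansen estimators of the first- and total-order indices are the part being illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Stand-in for the model solve, which would map the 5 flexibility
    factors to an output such as the cortical interstitial pressure."""
    return x[:, 0] + x[:, 1] ** 2 + x[:, 0] * x[:, 2]

k, n = 5, 50_000                      # 5 factors, n Monte Carlo rows
A = rng.random((n, k))                # two independent sample matrices
B = rng.random((n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1, ST = np.empty(k), np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # column i taken from B, the rest from A
    yABi = model(ABi)
    S1[i] = np.mean(yB * (yABi - yA)) / var_y          # first-order index
    ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var_y    # total-order index
```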
because the total volume of the afferent arterioles is only a minor fraction of the volume of the interstitial regions ( % , see table [ tb:3 ] ) , even large changes of and induce small changes of and .therefore , the total change in and , across the full range of variation , is in the order of 0.1 mmhg ( see blue curves in figure [ fig : selected ] ) .case 2 corresponds to a kidney with distensible compartments .this case is similar to case 1 ; however , the changes of induced by the constriction of the afferent arterioles is followed by an expansion of the renal capsule ( equation ) , which increases whole kidney volume .so , in this case , the cortical and medullary interstitial volumes and increase to a larger extent compared with case 1 in order to accommodate the expansion of . as a result , interstitial protein concentrations , , and oncotic pressures , and drop by larger amounts than in case 1 . consequently , significant drops in and follow ( see orange curves in figure [ fig : selected ] ) .case 3 corresponds to a kidney with very flexible compartments and renal capsule . through the same effects as in cases 1 and 2 , changes in arterial pressure lead to similar changes in and .because in this case the expansion of whole kidney volume is greater than in case 2 , due to the increased flexibility of the renal capsule , the interstitial pressures are affected to a greater extent too ( see yellow curves in figure [ fig : selected ] ) .case 4 shows a different behavior that corresponds to a kidney with flexible capsule but relatively rigid compartments . as in all cases, affects severely the pressures in the pre - afferent arteriole vascular compartments , , and ( equation ) , which are not regulated by the active constriction / dilation of the afferent arterioles . as a result , whenever increases , , , and also increase , leading to an increase of the associated pre - afferent arteriole vascular volumes , , and .note that the increase of , , and opposes the reduction of and caused by constriction of the afferent arterioles . in this particular case ,opposite to what happens in cases 1 - 3 , the increase of the total volume of the pre - afferent arteriole compartments , , and exceeds the reduction of the total volume of the afferent arterioles and . as a result ,the interstitial regions are compressed , which in turn leads to increases of the protein concentrations and and oncotic pressures and . because the uptake of interstitial fluid is maintained constant ,this leads to reductions of and .finally , the reductions of and are further amplified by constriction of the renal capsule that follows the reduction of . from the previous section ,it is apparent that the predictions of the model depend on the choice of the flexibility parameters , which are not well - characterized ( section [ sec : params ] ) . to assess the degree to which different choices affect the pressures in the interstitial regions and , we sample the parameter space . 
for each sample point, we evaluate the model solution at an elevated arterial blood pressure .for all simulations , we keep constant at 180 mmhg .the model utilizes 5 factors that correspond to the flexibility parameters associated with the histological groups of section [ sec : params ] .we use a sample size of and perform sampling with the monte carlo method .the resulting probability densities and cumulative distributions of and are shown in figure [ fig : hist_7_factors ] .( left panel ) and ( right panel ) at elevated arterial blood pressure ( mmhg ) as estimated by model simulations .vertical lines indicate the values at the reference arterial blood pressure ( mmhg ) . ]as can be seen in figure [ fig : hist_7_factors ] , the model predicts mostly increased and at elevated .however , the uncertainty in the flexibility parameters induces a significant degree of variability for both pressures .the mean values of and are 9.1 and 8.6 mmhg , and the standard deviations are 4.1 and 3.7 mmhg , respectively .both pressure distributions are heavily skewed towards large values .interestingly , the model also predicts low or even negative pressures .negative pressure values indicate that the pressures in the interstitial regions fall below the pressure in the space surrounding the kidney , which in this study is set to 0 mmhg . in summary ,84% of and 77% of values at mmhg are above the corresponding values at mmhg , and 16% of and 11% of values lie below 0 mmhg or above 15 mmhg .( upper panels ) and ( lower panels ) with respect to the sampled input factors .dashed lines indicate the linear regression estimates . for clarity , only 1/5 of the computed points are shown . ]scatter plots between the input factors and the computed pressures and are shown in figure [ fig : scatter_7_factors ] .only shows a clear influence on and , with high values of being associated generally with higher interstitial pressures .no apparent trend can be identified for the rest of the factors .linear regressions between the computed pressures and the input factors ( shown by the dashed lines in figure [ fig : scatter_7_factors ] ) yield low .precisely , for equal 0.25 for and 0.16 for .the rest of the factors yield for 0.02 or less .such low indicate strong non - linear dependencies of the interstitial pressures on the input factors , a behavior that most likely stems from the inverse - forth - power in the poiseuille law given by equation . and the computed pressures in the cortical and medullary interstitial spaces and , respectively . ]correlation coefficients computed between the input factors and the computed pressures and are shown on figure [ fig : corr_7_factors ] ( left panel ) . as is suggested by figure [ fig : scatter_7_factors ] , is positively correlated , weakly though , with and . from the rest of the factors , , , and are negatively correlated with and , however to an even weaker than for , and shows no correlation with either or .in contrast to the apparent lack of any trend between the computed pressures and and the input factors , the model predicts a high degree of correlation between and .the associated correlation coefficient reaches as high as 0.95 ( figure [ fig : corr_7_factors ] right panel ) , which indicates that and are predicted to change _ in tandem _ in a seemingly linear way . 
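The post-processing behind figures [ fig : scatter_7_factors ] and [ fig : corr_7_factors ] amounts to simple sample statistics. The sketch below shows that bookkeeping on synthetic placeholder data (it does not reproduce the paper's numbers): Pearson correlation coefficients between each factor and an output, and the R^2 of a one-factor linear regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
xi = rng.random((n, 5))                               # sampled factors (placeholder)
p_c = 9.0 + 4.0 * xi[:, 3] + rng.standard_normal(n)   # placeholder output

# correlation of each factor with the output, as in figure [fig:corr_7_factors]
corr = np.array([np.corrcoef(xi[:, i], p_c)[0, 1] for i in range(5)])

# R^2 of a one-factor linear regression, as plotted in figure [fig:scatter_7_factors]
slope, intercept = np.polyfit(xi[:, 3], p_c, 1)
residuals = p_c - (slope * xi[:, 3] + intercept)
r_squared = 1.0 - residuals.var() / p_c.var()
```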
to better characterize the contribution of the individual factors in the variance of and , we calculate their first- and total - order sensitivity indices shown on equations and .details on the adopted computational methods can be found in section [ sec : sens ] .figure [ fig : indices_7_factors ] shows the computed indices . evidently , the flexibility of the pre - afferent arteriole vascular segments ( group g4 ) accounts for most of the variation in or with respect to either the first- or total - order indices .the post - afferent arteriole vasculature ( group g5 ) has the second most significant contribution .groups g1g3 have only minor contributions according to the first - order sensitivity indices . however , this is not the case with the total - order indices , which indicate that g1 and g3 are involved to a significant degree in interactions . on the contrary ,the glomeruli ( group g2 ) have only a minor involvement in interactions . for all groups ,it is observed and , which indicate that the medullary pressure is more susceptible to interactions than cortical pressure .this behavior is expected , given that the afferent arterioles ( compartments 6 and 12 ) , which initiate the changes in and , are located exclusively in the cortex , while the medulla is susceptible mostly to secondary interactions initiated by the expansion / constriction of the renal capsule . and at elevated arterial blood pressure ( mmhg ) .lower panel shows the difference between the first- and total - order sensitivity indices . ]we develop a multi - compartmental computational model of the rat kidney .the model is constructed using conservation laws ( equations and ) , fluid dynamics ( equation ) , simplified pressure - volume relationships ( equations and ) , and constitutive equations specific to the physiology of the kidney ( equations and ) .we assign values to the model parameters ( tables [ tb:1 ] and [ tb:3 ] ) using experimental measurements when such measurements were available and previous modeling studies when direct measurements were not available . however , the data required for the flexibility parameters are sparse and do not suffice for an accurate estimation of their values . to that end , we choose to model these parameters as random variables with probability distributions that permit values spanning multiple orders of magnitude ( section [ sec : params ] and figure [ fig : lambda_pdfs ] ) . to determine the probability distributions of the random variables , we define five histological groups within the model kidney. _ group g1 _ models thick and relatively inflexible structures , for which we use pressure - mass data obtained from whole kidneys in dogs ._ group g2 _ models the glomeruli , for which we use pressure - volume data measured in rats . _group g3 _ models the various segments of the nephrons and the proximal parts of the collecting duct , for which we use pressure - radius measurements of the rat proximal tubule . 
_ groups g4 and g5 _ model the blood vessels , for which we use pressure - volume measurements of the systemic circulation measured in rats .we combine the post - afferent arteriole vasculature in one group ( group g5 ) , despite that it consists of segments of the arterial and venous vascular trees .we are motivated to do so by the fact that these vascular segments have considerably thiner walls and therefore should be considerably more flexible than the pre - afferent arteriole segments .output from the model leads to a range of predictions depending on the choices of the flexibility values .generally , increased arterial blood pressure is predicted to increase the pressure in both interstitial spaces ( figure [ fig : hist_7_factors ] ) . as arterialblood pressure increases from 100 mmhg to 180 mmhg , interstitial pressures are predicted to increase on average by mmhg .changes of similar magnitude have been observed in the kidneys of rats and dogs . upon a limited number of flexibility choices , however , the model predicts decreased interstitial pressures as a result .further , the model predicts a tight correlation between the cortical and the medullary pressures , figure [ fig : corr_7_factors ] ( right panel ) , which is also in agreement with the experimental observations reported in . concerning the four case studies of section [ sec : select ] ,cases 2 and 3 are in best agreement with the experimental observations in .in contrast , case 4 deviates from the experimental observations . and interstitial pressures and .changes in are transmitted to and primarily by two pathways : one is mediated by afferent arteriole volumes ( , ) which is marked with red arrows , the other is mediated by pre - afferent arteriole volumes ( , , ) and is marked with blue arrows .the two pathways have competing effects .secondary interactions are denoted with dashed lines . for simplicity , some of the secondary interactions are omitted . 
] as arterial blood pressure increases , mainly two distinct pathways that lead to interstitial pressure and changes can be identified ( figure [ fig : block_summary ] ) .the first pathway ( denoted with red ) leads to _ increase _ of interstitial pressure upon constriction of the afferent arterioles .the second pathway ( denoted with blue ) leads to _ decrease _ of interstitial pressure upon dilation of the pre - afferent arteriole blood vessels .primarily , both pathways lead to changes in interstitial volumes and , which are subsequently transmitted to protein concentrations and , oncotic pressures and , and finally to and .the two pathways have competing effects ; the first leads to changes of and towards the same direction as , while the second leads to changes of and towards the opposite direction of .it is important to note that , in general , both pathways are active .however , the model results ( figure [ fig : hist_7_factors ] ) indicate that under most circumstances the first pathway dominates over the second .the model predictions appear particularly sensitive to the flexibility of the pre - afferent arteriole blood vessels ( histological group g4 ) ( figure [ fig : indices_7_factors ] ) .such behavior is attributed mostly to the fact that blood pressure is only regulated by the afferent arterioles , which are located after these vessels .the lack of pressure regulation , in the pre - afferent arteriole compartments , leads to larger internal pressure changes upon increases in arterial pressure than in the rest of the compartments .for example , as increases from 100 mmhg to 180 mmhg , assuming an increase in the interstitial pressures of mmhg , we see that the compartments of group g4 are stretched by a pressure difference of mmhg , while the walls of the rest of the compartments are stretched by a pressure difference of mmhg .thus , in view of the pressure - volume relations given by equation the resulting change in total kidney volume , which mediates the changes in interstitial pressures , is mostly affected by rather than , , , or .the model developed in this study uses several simplifications .for example , the current model assumes perfect autoregulation of blood flow for equations , which limits its applicability to cases with arterial blood pressures between 80 mmhg and 180 mmhg .the model does not account for the differences in tubular reabsorption , e.g. coefficients in , occurring between diuretic and antidiuretic animals or for pressure - diuretic responses .further , the model assumes linear pressure - volume relationships for equations and .lifting those limitations requires a more detailed model , the development of which will be the focus of future studies . despite these limitations ,the present model could be a useful component in comprehensive models of renal physiology .the authors thank dr . vasileios maroulas for assistance with the statistical analysis in this study and for other helpful discussions .this work is conducted as a part of the 2015 summer research experience for undergraduates and teachers at the national institute for mathematical and biological synthesis ( nimbios ) , sponsored by the national science foundation through nsf award -1300426 , with additional support from the university of tennessee , knoxville .c l c c c c c c c & compartment & type & number & & & nodes & frac . 
coeff .+ 1 & cortical interstitium & region & 1 & & - & - & - + 2 & medullary interstitium & region & 1 & & - & - & - + 3 & medullary artery & pipe & 8 & & & c1-c2 & 0 + 4 & arcuate artery & pipe & 24 & & & c2-c3 & 0 + 5 & cortical radial artery & pipe & 864 & & & c3-c4 & 0 + 6 & afferent arteriole^sn^ & pipe & 20736 & & & c4-c5 & 0 + 7 & glomerular capillary^sn^ & pipe & 5598720 & & & c5-c6 & + 8 & efferent arteriole^sn^ & pipe & 20736 & & & c6-c7 & 0 + 9 & cortical capillary & pipe & 1658880 & & & c7-c8 & see eq .+ 10 & venule^sn^ & pipe & 20736 & & & c8-c9 & 0 + 11 & cortical radial vein & pipe & 864 & & & c9-c16 & 0 + 12 & afferent arteriole^ln^ & pipe & 10368 & & & c3-c10 & 0 + 13 & glomerular capillary^ln^ & pipe & 4302720 & & & c10-c11 & + 14 & efferent arteriole^ln^ & pipe & 10368 & & & c11-c12 & 0 + 15 & descending vas rectum & pipe & 207360 & & & c12-c13 & 0 + 16 & medullary capillary & pipe & 10368000 & & & c13-c14 & see eq .+ 17 & ascending vas rectum & pipe & 414720 & & & c14-c15 & 0 + 18 & venule^sn^ & pipe & 10368 & & & c15-c16 & 0 + 19 & arcuate vein & pipe & 24 & & & c16-c17 & 0 + 20 & medullary vein & pipe & 8 & & & c17-c18 & 0 + 21 & glomerulus^sn^ & sphere & 20736 & & & c19 & - + 22 & proximal tubule^sn^ & pipe & 20736 & & & c19-c20 & + 23 & descending limb^sn^ & pipe & 20736 & & & c20-c21 & + 24 & medullary ascending limb^sn^ & pipe & 20736 & & & c21-c22 & 0 + 25 & cortical ascending limb^sn^ & pipe & 20736 & & & c22-c23 & 0 + 26 & distal tubule^sn^ & pipe & 20736 & & & c23-c29 & + 27 & glomerulus^ln^ & sphere & 10368 & & & c24 & - + 28 & proximal tubule^ln^ & pipe & 10368 & & & c24-c25 & + 29 & descending limb^ln^ & pipe & 10368 & & & c25-c26 & + 30 & medullary ascending limb^ln^ & pipe & 10368 & & & c26-c27 & 0 + 31 & cortical ascending limb^ln^ & pipe & 10368 & & & c27-c28 & 0 + 32 & distal tubule^ln^ & pipe & 10368 & & & c28-c29 & 0 + 33 & cortical collecting duct & pipe & 144 & & & c29-c30 & + 34 & medullary collecting duct & pipe & 144 & & & c30-c31 & + 35 & papillary collecting duct & pipe & 8 & & & c31-c32 & 0 + cccccccc||cc & & & & & & & & & + & m & & mmhg & mmhg & m & & & & mmhg + 1 & - & - & 6 & - & - & 7.62 & - & c1 & 100 + 2 & - & - & 6 & - & - & 4.92 & - & c2 & 97.51 + 3 & 7 & & 98.75 & -92.75 & 270 & 1.60 & & c3 & 95.02 + 4 & 2 & & 96.26 & -90.26 & 150 & 1.41 & & c4 & 93.97 + 5 & 3 & & 94.50 & -88.50 & 75 & 5.30 & & c5 & 51.17 + 6 & 300 & & 72.57 & -66.57 & 10 & 9.42 & - & c6 & 48.08 + 7 & 80 & &49.62 & -37.27 & 4.2 & 4.43 & & c7 & 14.38 + 8 & 310 & & 31.23 & 25.23 & 11 & 1.17 & & c8 & 8.92 + 9 & 40 & &11.65 & -5.65 & 4.2 & 2.21 & & c9 & 5.44 + 10 & 50 & & 7.17 & -1.18 & 12 & 2.26 & & c10 & 50.52 + 11 & 3 & & 5.40 & 0.60 & 150 & 2.12 & & c11 & 47.51 + 12 & 260 & & 72.77 & -66.77 & 10 & 8.16 & - & c12 & 12.94 + 13 & 100 & &49.02 & -35.35 & 4.2 & 5.54 & & c13 & 9.88 + 14 & 265 & & 30.22 & -24.22 & 11 & 1.00 & & c14 & 9.12 + 15 & 210 & & 11.41 & -5.41 & 9 & 5.34 & & c15 & 7.78 + 16 & 60 & &9.50 & -3.50 & 4.2 & 3.32 & & c16 & 5.37 + 17 & 210 & & 8.45 & -2.45 & 9 & 5.34 & & c17 & 4.41 + 18 & 30 & & 6.58 & -0.58 & 12 & 1.35 & & c18 & 4 + 19 & 2 & & 4.89 & 1.11 & 190 & 2.26 & & c19 & 12.36 + 20 & 7 & & 4.20 & 1.79 & 425 & 3.97 & & c20 & 11.73 + 21 & - & - & 12.36 & -6.36 & 80 & 2.14 & & c21 & 11.30 + 22 & 14 & &12.04 & -6.04 & 15 & 9.89 & & c22 & 10.93 + 23 & 2 & &11.51 & -5.51 & 8.5 & 4.53 & & c23 & 10.79 + 24 & 2 & &11.12 & 5.11 & 8.5 & 4.53 & & c24 & 13.66 + 25 & 3 & &10.86 & -4.86 & 12 & 1.35 & & c25 & 12.90 + 26 & 5 & &10.73 & -4.73 & 13.5 & 2.86 & & c26 & 
11.76 + 27 & - & - & 13.66 & -7.66 & 100 & 4.18 & & c27 & 10.84 + 28 & 14 & &13.28 & -7.28 & 55 & 9.89 & & c28 & 10.79 + 29 & 5 & &12.33 & -6.33 & 8.5 & 1.13 & & c29 & 10.66 + 30 & 5 & &11.30 & -5.30 & 8.5 & 1.13 & & c30 & 6.64 + 31 & 1 & &10.82 & -4.82 & 12 & 4.52 & & c31 & 2.00 + 32 & 5 & &10.73 & -4.73 & 13.5 & 2.86 & & c32 & 2 + 33 & 1.5 & &8.65 & -2.65 & 16 & 1.20 & & + 34 & 4.5 & &4.32 & 1.68 & 16 & 3.61 & & + 35 & 2.5 & &2.00 & 4.00 & 2.3 & 4.15 & & + m heilmann , s neudecker , i wolf , l gubhaju , c sticht , daniel schock - kusch , wilhelm kriz , john f bertram , lothar r schad , and norbert gretz .quantification of glomerular number and size distribution in normal rat kidneys using magnetic resonance imaging . , 27(1):1007 , 2012 .i sgouralis , rg evans , bs gardiner , ja smith , bc fry , and at layton .renal hemodynamics , function , and oxygenation during cardiac surgery performed on cardiopulmonary bypass : a modeling study . , 3(1 ) , 2015 .
|
the pressure in the renal interstitium is an important factor for normal kidney function. here we develop a computational model of the rat kidney and use it to investigate the relationship between arterial blood pressure and interstitial fluid pressure. in addition, we investigate how tissue flexibility influences this relationship. due to the complexity of the model, the large number of parameters, and the inherent uncertainty of the experimental data, we utilize monte carlo sampling to study the model's behavior under a wide range of parameter values and to compute first- and total-order sensitivity indices. characteristically, at elevated arterial blood pressure, the model predicts cases with increased or reduced interstitial pressure. the transition between the two cases is controlled mostly by the compliance of the blood vessels located before the afferent arterioles. *keywords:* mathematical model, sensitivity analysis, monte carlo, kidney, interstitium
*global sensitivity analysis in a mathematical model of the renal interstitium*
_mariel bedell, carnegie mellon university
claire yilin lin, emory university
emmie román-meléndez, university of puerto rico mayagüez
ioannis sgouralis, national institute for mathematical and biological synthesis_
|
spherical harmonics are the eigenfunctions of the laplace operator on the 2-sphere .they form a basis and are useful and convenient to describe data on a sphere in a consistent way in spectral space .spherical harmonic transforms ( sht ) are the spherical counterpart of the fourier transform , casting spatial data to the spectral domain and vice versa .they are commonly used in various pseudo - spectral direct numerical simulations in spherical geometry , for simulating the sun or the liquid core of the earth among others .all numerical simulations that take advantage of spherical harmonics use the classical gauss - legendre algorithm ( see section [ sec : sht ] ) with complexity for a truncation at spherical harmonic degree . as a consequence of this high computational cost when increases , high resolution spherical codes currently spend most of their time performing sht .a few years ago , state of the art numerical simulations used .however , there exist several asymptotically fast algorithms , but the overhead for these fast algorithms is such that they do not claim to be effectively faster for .in addition , some of them lack stability ( the error becomes too large even for moderate ) and flexibility ( e.g. must be a power of 2 ) . among the asymptotically fast algorithms ,only two have open - source implementations , and the only one which seems to perform reasonably well is ` spharmonickit ` , based on the algorithms described by .its main drawback is the need of a latitudinal grid of size while the gauss - legendre quadrature allows the use of only collocation points .thus , even if it were as fast as the gauss - legendre approach for the same truncation , the overall numerical simulation would be slower because it would operate on twice as many points .these facts explain why the gauss - legendre algorithm is still the most efficient solution for numerical simulations . a recent paper reports that carefully tuned software could finally run 9 times faster on the same cpu than the initial non - optimized version , and insists on the importance of vectorization and careful optimization of the code . as the goal of this work is to speed - up numerical simulations, we have written a highly optimized and explicitly vectorized version of the gauss - legendre sht algorithm .the next section recalls the basics of spherical harmonic transforms .we then describe the optimizations we use and we compare the performance of our transform to other sht implementations .we conclude this paper by a short summary and perspectives for future developments .the orthonormalized spherical harmonics of degree and order are functions defined on the sphere as : where is the colatitude , is the longitude and are the associated legendre polynomials normalized for spherical harmonics which involve derivatives of legendre polynomials defined by the following recurrence : the spherical harmonics form an orthonormal basis for functions defined on the sphere : with the kronecker symbol . by construction, they are eigenfunctions of the laplace operator on the unit sphere : this property is very appealing for solving many physical problems in spherical geometry involving the laplace operator .the spherical harmonic synthesis is the evaluation of the sum up to degree , given the complex coefficients . 
if is a real - valued function , , where stands for the complex conjugate of .the sums can be exchanged , and using the expression of we can write from this last expression , it appears that the summation over is a regular fourier transform .hence the remaining task is to evaluate or its discrete version at given collocation points .the analysis step of the sht consists in computing the coefficients the integral over is obtained using the fourier transform : so the remaining legendre transform reads the discrete problem reduces to the appropriate quadrature rule to evaluate the integral ( [ eq : analysis ] ) knowing only the values . in particular , the use of the gauss - legendre quadrature replaces the integral of expression [ eq : analysis ] by the sum where and are respectively the gauss nodes and weights .note that the sum equals the integral if is a polynomial in of order or less .if is given by expression [ eq : synth_direct ] , then is always a polynomial in , of degree at most .hence the gauss - legendre quadrature is exact for .a discrete spherical harmonic transform using gauss nodes as latitudinal grid points and a gauss - legendre quadrature for the analysis step is referred to as a gauss - legendre algorithm .let us first recall some standard optimizations found in almost every serious implementation of the gauss - legendre algorithm .all the following optimizations are used in the ` shtns ` library . [[ use - the - fast - fourier - transform ] ] use the fast - fourier transform + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the expressions of section [ sec : sht ] show that part of the sht is in fact a fourier transform .the fast fourier transform ( fft ) should be used for this part , as it improves accuracy and speed . `shtns ` uses the ` fftw ` library , a portable , flexible and highly efficient fft implementation .[ [ take - advantage - of - hermitian - symmetry - for - real - data ] ] take advantage of hermitian symmetry for real data + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when dealing with real - valued data , the spectral coefficients fulfill , so we only need to store them for .this also allows the use of faster real - valued ffts . [[ take - advantage - of - mirror - symmetry ] ] take advantage of mirror symmetry + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + due to the defined symmetry of spherical harmonics with respect to a reflection about the equator one can reduce by a factor of 2 the operation count of both forward and inverse transforms . 
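A self-contained numerical check of the Gauss-Legendre scheme can be written in a few lines of Python (an illustration only; it is unrelated to the `shtns` implementation and omits the fast Fourier transform over longitude that a real transform would use, replacing it with an explicit uniform sum that is still exact for the band-limited field). Synthesizing a field from random coefficients on a Gauss-Legendre latitude grid and analyzing it back recovers the coefficients to machine precision, because the quadrature is exact for the polynomials involved:

```python
import numpy as np
from scipy.special import sph_harm   # scipy convention: sph_harm(m, l, azimuth, polar)

lmax = 15
nlat, nphi = lmax + 1, 2 * lmax + 1                 # Gauss nodes in theta, uniform in phi
x, w = np.polynomial.legendre.leggauss(nlat)        # x = cos(theta), Gauss weights w
theta = np.arccos(x)
phi = 2.0 * np.pi * np.arange(nphi) / nphi
TH, PH = np.meshgrid(theta, phi, indexing="ij")

# synthesis: random coefficients -> field on the grid
rng = np.random.default_rng(0)
f_lm = {(l, m): rng.standard_normal() + 1j * rng.standard_normal()
        for l in range(lmax + 1) for m in range(-l, l + 1)}
f = np.zeros_like(TH, dtype=complex)
for (l, m), c in f_lm.items():
    f += c * sph_harm(m, l, PH, TH)

# analysis: uniform sum over phi, then Gauss-Legendre quadrature over theta
def analyze(l, m):
    g = (2.0 * np.pi / nphi) * np.sum(f * np.conj(sph_harm(m, l, PH, TH)), axis=1)
    return np.sum(w * g)

err = max(abs(analyze(l, m) - c) for (l, m), c in f_lm.items())
print(f"max roundtrip error: {err:.1e}")            # close to machine precision
```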
andorder ( blue ) and ( red ) , showing the localization near the equator.,scaledwidth=70.0% ] [ [ precompute - values - of - p_nm ] ] precompute values of + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the coefficients appear in both synthesis and analysis expressions ( [ eq : synth_direct ] and [ eq : analysis ] ) , and can be precomputed and stored for all ( ,, ) .when performing multiple transforms , it avoids computing the legendre polynomial recursion at every transform and saves some computing power , at the expense of memory bandwidth .this may or may not be efficient , as we will discuss later .[ [ polar - optimization ] ] polar optimization + + + + + + + + + + + + + + + + + + high order spherical harmonics have their magnitude decrease exponentially when approaching the poles as shown in figure [ fig : polar ] .hence , the integral of expression [ eq : analysis ] can be reduced to where is a threshold below which is considered to be zero .similarly , the synthesis of ( eq . [ eq : synth_direct ] ) is only needed for . `shtns ` uses a threshold that does not depend on , which leads to around 5% to 20% speed increase , depending on the desired accuracy and the truncation .it can be shown that can be computed recursively by with the coefficients and do not depend on , and can be easily precomputed and stored into an array of values .this has to be compared to the order values of , which are usually precomputed and stored in the spherical harmonic transforms implemented in numerical simulations .the amount of memory required to store all in double - precision is at least bytes , which gives 2 gb for .our on - the - fly algorithm only needs about bytes of storage ( same size as a spectral representation ) , that is 8 mb for .when becomes very large , it is no longer possible to store in memory ( for nowadays ) and on - the - fly algorithms ( which recompute from the recurrence relation when needed ) are then the only possibility .we would like to stress that even far from that storage limit , on - the - fly algorithm can be significantly faster thanks to vector capabilities of modern processors .most desktop and laptop computers , as well as many high performance computing clusters , have support for single - instruction - multiple - data ( simd ) operations in double precision .the sse2 instruction set is available since year 2000 and currently supported by almost every pc , allowing to perform the same double precision arithmetic operations on a vector of 2 double precision numbers , effectively doubling the computing power .the recently introduced avx instruction set increases the vector size to 4 double precision numbers .this means that can be computed from the recursion relation [ eq : rec ] ( which requires 3 multiplications and 1 addition ) for 2 or 4 values of simultaneously , which may be faster than loading pre - computed values from memory . 
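As a concrete illustration of such an on-the-fly evaluation, the sketch below computes the orthonormalized associated Legendre functions for a fixed order m with the standard three-term recurrence, advancing a whole array of cos(theta) values at once; vectorizing over grid points with numpy plays the role that SSE2/AVX vectors play in the compiled code. This is illustrative Python, not the `shtns` source, and it omits the dynamic rescaling discussed later for very large degrees.

```python
import numpy as np
from scipy.special import sph_harm

def legendre_onthefly(lmax, m, x):
    """Orthonormalized associated Legendre functions P_lm(x) for fixed m and
    l = m..lmax, generated on the fly by the standard three-term recurrence.
    x may be an array of cos(theta) values, so the whole latitude grid is
    advanced at once."""
    x = np.asarray(x, dtype=float)
    s = np.sqrt(1.0 - x * x)                                  # sin(theta)
    p_prev = np.full_like(x, 1.0 / np.sqrt(4.0 * np.pi))      # P_00
    for k in range(1, m + 1):                                 # build the sectoral P_mm
        p_prev = -np.sqrt((2.0 * k + 1.0) / (2.0 * k)) * s * p_prev
    out = [p_prev]
    if lmax > m:
        p_curr = np.sqrt(2.0 * m + 3.0) * x * p_prev          # P_{m+1,m}
        out.append(p_curr)
        for l in range(m + 2, lmax + 1):
            a = np.sqrt((4.0 * l * l - 1.0) / (l * l - m * m))
            b = np.sqrt(((l - 1.0) ** 2 - m * m) / (4.0 * (l - 1.0) ** 2 - 1.0))
            p_prev, p_curr = p_curr, a * (x * p_curr - b * p_prev)
            out.append(p_curr)
    return np.stack(out)                                      # shape (lmax - m + 1, x.size)

# cross-check one function against scipy (exp(i*m*0) = 1 isolates P_lm)
xg = np.polynomial.legendre.leggauss(32)[0]
ref = sph_harm(3, 10, 0.0, np.arccos(xg)).real
assert np.allclose(legendre_onthefly(10, 3, xg)[-1], ref)
```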
hence , as already pointed out by , it is therefore very important to use the vector capabilities of modern processors to address their full computing power .furthermore , when running multiple transforms on the different cores of a computer , the performance of on - the - fly transforms ( which use less memory bandwidth ) scales much better than algorithms with precomputed matrices , because the memory bandwidth is shared between cores .superscalar architectures that do not have double - precision simd instructions but have many computation units per core ( like the power7 or sparc64 ) could also benefit from on - the - fly transforms by saturating the many computation units with independent computations ( at different ) . figure [ fig : avx ] shows the benefit of explicit vectorization of on - the - fly algorithms on an intel xeon e5 - 2680 ( _ sandy bridge _ architecture with avx instruction set running at 2.7ghz ) and compares on - the - fly algorithms with algorithms based on precomputed matrices . with the 4-vectors of avx ,the fastest algorithm is always on - the - fly , while for 2-vectors , the fastest algorithm uses precomputed matrices for . in the forthcoming years, wider vector architecture are expected to become widely available , and the benefits of on - the - fly vectorized transforms will become even more important . [ [ runtime - tuning ] ] runtime tuning + + + + + + + + + + + + + + we have now two different available algorithms : one uses precomputed values for and the other one computes them on - the - fly at each transform .the ` shtns ` library compares the time taken by those algorithms ( and variants ) at startup and chooses the fastest , similarly to what the ` fftw ` library does .the time overhead required by runtime tuning can be several order of magnitude larger than that of a single transform .the observed performance gain varies between 10 and 30% .this is significant for numerical simulations , but runtime tuning can be entirely skipped for applications performing only a few transforms , in which case there is no noticeable overhead .modern computers have several computing cores .we use openmp to implement a multi - threaded algorithm for the legendre transform including the above optimizations and the _ on - the - fly _ approach . the lower memory bandwidth requirements for the _ on - the - fly _ approach is an asset for a multi - threaded transform because if each thread would read a different portion of a large matrix , it can saturate the memory bus very quickly .the multi - threaded fourier transform is left to the fftw library .we need to decide how to share the work between different threads . because we compute the on the fly using the recurrence relation [ eq : rec ] , we are left with each thread computing different , or different . as the analysis stepinvolve a sum over , we choose the latter option . from equation [ eq : synth_direct ], we see that the number of terms involved in the sum depends on , so that the computing cost will also depend on . in order to achieve the best workload balance between a team of threads ,the thread number ( ) handles , with integer from to . 
for different thread number , we have measured the time and needed for a scalar spherical harmonic synthesis and analysis respectively ( including the fft ) .figure [ fig : avx_omp ] shows the speedup , where is the largest of and , and is the time of the fastest single threaded tranform .it shows that there is no point in doing a parallel transform with below 128 .the speedup is good for or above , and excellent up to 8 threads for or up to 16 threads for very large transform ( ) .table [ tab : speed ] reports the timing measurements of two sht libraries , compared to the optimized gauss - legendre implementation found in the ` shtns ` library ( this work ) .we compare with the gauss - legendre implementation of ` libpsht ` , a parallel spherical harmonic transform library targeting very large , and with ` spharmonickit ` 2.7 ( dh ) which implements one of the driscoll - healy fast algorithms .all the timings are for a complete sht , which includes the fast fourier transform .note that the gauss - legendre algorithm is by far ( a factor of order 2 ) the fastest algorithm of the ` libpsht ` library .note also that ` spharmonickit ` is limited to being a power of two , requires latitudinal colocation points , and crashed for . the software library implementing the fast legendre transform described by , ` libftsh ` , has also been tested , and found to be of comparable performance to that of ` spharmonickit ` , although the comparison is not straightforward because ` libftsh ` did not include the fourier transform . again , that fast library could not operate at because of memory limitations .note finally that these measurements were performed on a machine that did not support the new avx instruction set . [ cols="^,^,^,^,^,^,^,^",options="header " , ] of the implementations from table [ tab : speed ] , where is the execution time and the frequency of the xeon x5650 cpu ( 2.67ghz ) with 12 cores.,scaledwidth=65.0% ] in order to ease the comparison , we define the efficiency of the sht by , where is the execution time ( reported in table [ tab : speed ] ) and the frequency of the cpu .note that reflects the number of computation elements of a gauss - legendre algorithm ( the number of modes times the number of latitudinal points ) .an efficiency that does not depend on corresponds to an algorithm with an execution time proportional to .the efficiency of the tested algorithms are displayed in figure [ fig : speed ] .not surprisingly , the driscoll - healy implementation has the largest slope , which means that its efficiency grows fastest with , as expected for a fast algorithm .it also performs slightly better than ` libpsht ` for .however , even for ( the largest size that it can compute ) , it is still 2.8 times slower than the gauss - legendre algorithm implemented in ` shtns ` .it is remarkable that ` shtns ` achieves an efficiency very close to 1 , meaning that almost one element per clock cycle is computed for .overall , ` shtns ` is between two and ten times faster than the best alternative .one can not write about an sht implementation without addressing its accuracy .the gauss - legendre quadrature ensures very good accuracy , at least on par with other high quality implementations .the recurrence relation we use ( see [ sec : fly ] ) is numerically stable , but for , the value can become so small that it can not be represented by a double precision number anymore .to avoid this underflow problem , the code dynamically rescales the values of during the recursion , when they reach a 
given threshold .the number of rescalings is stored in an integer , which acts as an enhanced exponent .our implementation of the rescaling does not impact performance negatively , as it is compensated by dynamic polar optimization : these very small values are treated as zero in the transform ( eq . [ eq : synth_direct ] and [ eq : gauss ] ) , but not in the recurrence .this technique ensures good accuracy up to at least , but partial transforms have been performed successfully up to . to quantify the error we start with random spherical harmonic coefficients with each real part and imaginary part between and . after a backward and forward transform ( with orthonormal spherical harmonics ) , we compare the resulting coefficients with the originals .we use two different error measurements : the maximum error is defined as while the root mean square ( rms ) error is defined as the error measurements for our on - the - fly gauss - legendre implementation with the default polar optimization and for various truncation degrees are shown in figure [ fig : accuracy ] .the errors steadily increase with and are comparable to other implementations .for we have , which is negligible compared to other sources of errors in most numerical simulations .despite the many fast spherical harmonic transform algorithms published , the few with a publicly available implementation are far from the performance of a carefully written gauss - legendre algorithm , as implemented in the ` shtns ` library , even for quite large truncation ( ) .explicitly vectorized on - the - fly algorithms seem to be able to unleash the computing power of nowadays and future computers , without suffering too much of memory bandwidth limitations , which is an asset for multi - threaded transforms .the ` shtns ` library has already been used in various demanding computations ( eg . * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) .the versatile truncation , the various normalization conventions supported , as well as the scalar and vector transform routines available for c / c++ , fortran or python , should suit most of the current and future needs in high performance computing involving partial differential equations in spherical geometry .thanks to the significant performance gain , as well as the much lower memory requirement of vectorized on - the - fly implementations , we should be able to run spectral geodynamo simulations at in the next few years .such high resolution simulations will operate in a regime much closer to the dynamics of the earth s core .the author thanks alexandre fournier and daniel lemire for their comments that helped to improve the paper .some computations have been carried out at the service commun de calcul intensif de lobservatoire de grenoble ( scci ) and other were run on the prace research infrastructure _ curie _ at the tgcc ( grant pa1039 ) .augier , p. , lindborg , e. , nov .2013 . a new formulation of the spectral energy budget of the atmosphere , with application to two high - resolution general circulation models .submitted to j. atmos .http://arxiv.org/abs/1211.0607 christensen , u. r. , aubert , j. , cardin , p. , dormy , e. , gibbons , s. , glatzmaier , g. a. , grote , e. , honkura , y. , jones , c. , kono , m. , matsushima , m. , sakuraba , a. , takahashi , f. , tilgner , a. , wicht , j. , zhang , k. , dec .a numerical dynamo benchmark .physics of the earth and planetary interiors 128 ( 1 - 4 ) , 2534 .http://dx.doi.org/10.1016/s0031-9201(01)00275-8 dickson , n. g. , karimi , k. , hamze , f. 
, jun .importance of explicit vectorization for cpu and gpu software performance .journal of computational physics 230 ( 13 ) , 53835398 .figueroa , a. , schaeffer , n. , nataf , h. c. , schmitt , d. , jan . 2013 .modes and instabilities in magnetized spherical couette flow .journal of fluid mechanics 716 , 445469 .http://dx.doi.org/10.1017/jfm.2012.551 glatzmaier , g. a. , sep . 1984 .numerical simulations of stellar convective dynamos . i. the model and method .journal of computational physics 55 ( 3 ) , 461484 .http://dx.doi.org/10.1016/0021-9991(84)90033-0 healy , d. m. , rockmore , d. n. , kostelec , p. j. , moore , s. , july 2003 .ffts for the 2-sphere - improvements and variations. journal of fourier analysis and applications 9 ( 4 ) , 341385 . http://dx.doi.org/10.1007/s00041-003-0018-9 sakuraba , a. , feb .effect of the inner core on the numerical solution of the magnetohydrodynamic dynamo .physics of the earth and planetary interiors 111 ( 1 - 2 ) , 105121 .http://dx.doi.org/10.1016/s0031-9201(98)00150-2 schaeffer , n. , jault , d. , cardin , p. , drouard , m. , 2012 . on the reflection of alfvn waves and its implication for earth s core modelling .geophysical journal international 191 ( 2 ) , 508516 .
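as a concrete illustration of the two error measures used above , the following sketch ( all names are ours , and the backward / forward transform is assumed to have been carried out beforehand with whatever sht routine is under test ) computes the maximum and rms deviation between the original and the round - tripped coefficients :

    import numpy as np

    def sht_roundtrip_errors(qlm_orig, qlm_back):
        # qlm_orig: original spectral coefficients (complex array), e.g. drawn
        # with real and imaginary parts uniform in [-1, 1]
        # qlm_back: coefficients recovered after one synthesis + analysis
        diff = np.abs(qlm_back - qlm_orig)
        eps_max = diff.max()                     # maximum error
        eps_rms = np.sqrt(np.mean(diff ** 2))    # root mean square error
        return eps_max, eps_rms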
|
in this paper , we report on very efficient algorithms for the spherical harmonic transform ( sht ) . explicitly vectorized variations of the algorithm based on the gauss - legendre quadrature are discussed and implemented in the ` shtns ` library which includes scalar and vector transforms . the main breakthrough is to achieve very efficient on - the - fly computations of the legendre associated functions , even for very high resolutions , by taking advantage of the specific properties of the sht and the advanced capabilities of current and future computers . this allows us to simultaneously and significantly reduce memory usage and computation time of the sht . we measure the performance and accuracy of our algorithms . even though the complexity of the algorithms implemented in ` shtns ` are in ( where is the maximum harmonic degree of the transform ) , they perform much better than any third party implementation , including lower complexity algorithms , even for truncations as high as . ` shtns ` is available at https://bitbucket.org/nschaeff/shtns as open source software .
|
the gravitational wave spectrum covers many decades in frequency space , just like the electromagnetic spectrum .the particular waves that are radiated in any given band of the spectrum reflect the astrophysical phenomena that generated the waves , and in particular the astrophysical timescales that dominate the movement of mass in the system . in the very - low frequency band of the spectrum , ,the primary detection technique is known as pulsar timing .pulsar timing uses the stable rotation of distant pulsars as clocks .it was first described by detweiler , and proceeds as follows .the arrival time of pulses from a pulsar are monitored on earth , and compared against a model for the expected arrival times ( the `` ephemeris '' ) .using simple models for pulsar spindown over time , models for pulse arrival times can be built with precisions at the level of fractions of a microsecond when pulsar monitoring spans several years .the fundamental signature of a gravitational wave passing between the earth and the distant pulsar is a change in the time of flight for individual pulses , advancing or retarding their time of arrival compared to the model ephemeris .the difference between the arrival time of the pulses and the model are known as `` timing residuals , '' and they constitute the basic data stream in pulsar timing searches for gravitational waves . while the measurement can be made with long - term observations of a single pulsar , the implementation of this method as a viable detection technique has been realized through the development of _ pulsar timing arrays _, where many pulsars in many parts of the sky are monitored over long periods of time , combining timing residuals from multiple pulsars to search for gravitational waves .many efforts have been launched on this front , including the north american nanohertz observatory for gravitational waves ( nanograv) , the parkes pulsar timing array , the european pulsar timing array , and the international pulsar timing array .the expected astrophysical sources of very - low frequency gravitational waves include supermassive black hole binaries ( smbhb ) , stochastic backgrounds of smbhbs , as well as a variety of other stochastic sources such as phase transitions in the early universe or relic radiation from the big bang .see for a recent review of pta sources .like most gravitational wave detection techniques , pulsar timing arrays are omnidirectional , in the sense that they are sensitive to gravitational waves from any location on the sky . 
as a general rule of thumb ,the sensitivity of any particular pulsar to incident gravitational waves is a function of the angle between the line of sight to the pulsar and the line of sight to the gravitational wave source .this relationship is distinctly evident in the hellings and downs curve , which relates the correlation of residual signals from two pulsars to their angular separation in the sky .this work develops a data combination technique known as `` pulsar null streams '' to provide a good estimate of the location of a gravitational wave source on the sky .this is an explicit analysis technique for determining the sky location of a putative gravitational wave source without engaging in a full parameter search .source location knowledge absent other parameter information is useful for counterpart searches , as well as restricting the search space of computationally intensive signal searches .null stream mapping of gravitational wave sources has been described for interferometric detectors , and relies on the fact that there are correlated gravitational wave signals between detectors in a network . for the case of a pulsar timing array , one has the same situation a gravitational wavefront will produce a correlated response in the timing of every pulsar in the array .this correlation may be exploited to create a null stream by taking advantage of the geometrical properties of a pulsar s response to incident gravitational waves .the paper is organized as follows . in section [ sec.pulsartiming ]we review a basic signal model for pulsar timing residuals , and express it in a form conducive to build a pulsar null stream . in section [ sec.pns ] the pulsar null stream is described and written out . section [ sec.demonstration ] shows how the pulsar null stream works for an array of three pulsars .section [ sec.pulsarsep ] discusses the errors inherent in one sub - array of pulsars .section [ sec.multsubarray ] demonstrates how multiple sub - arrays of pulsars strengthen the null signal technique .section [ sec.noise ] examines the efficacy and overall pointing ability of the method in the presence of noise .section [ sec.discussion ] summarizes the key results and discusses future directions for this work .pulsar timing arrays use the long term stability in the spacing of signal pulses from radio pulsars to detect and characterize gravitational waves . 
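the angular dependence mentioned above is captured by the hellings - downs curve ; a minimal sketch of the standard expression for a pair of distinct pulsars separated by an angle theta on the sky ( the function name is ours ) :

    import numpy as np

    def hellings_downs(theta):
        # expected residual correlation for a pair of distinct pulsars
        # separated by an angle theta (radians), theta > 0
        x = (1.0 - np.cos(theta)) / 2.0
        return 1.5 * x * np.log(x) - 0.25 * x + 0.5

    angles = np.linspace(0.01, np.pi, 200)   # sample the curve over the sky
    curve = hellings_downs(angles)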
a passing gravitational wave advances or delays the arrival time of regular pulses at the earth from the pulsar .the advance or delay of the pulse is found by subtracting the pulse record from a model of the pulse arrival times in the absence of a gravitational wave ; the result is referred to as the residual , .the residual of a pulsar signal is dependent on the sky location and physical characteristics of the gravitational wave source as well as the sky location and distance to the pulsar being timed .consider a binary source of gravitational waves located on the sky at an ecliptic longitude and an ecliptic latitude .for a pulsar located at the timing residual is given by \ , \end{aligned}\ ] ] where is the gravitational wave frequency , is the ascending node of the source binary orbital plane , and the remaining terms are defined as \\ & -\sin\left(2\beta\right)\sin\left(2\beta_{p}\right)\cos \left(\lambda-\lambda_{p}\right)\nonumber\\ & + \left(2 - 3\cos^{2}\beta_{p}\right)\cos^{2}\beta\nonumber \\ b_{2}= & 2\cos\beta\sin\left(2\beta_{p}\right)\sin\left(\lambda- \lambda_{p}\right)\\ & -2\sin\beta\cos^{2}\beta_{p}\sin\left[2\left(\lambda- \lambda_{p}\right)\right]\nonumber\\ \delta\phi&=\frac{1}{2}\omega_{g}d_{p}\left(1-\cos\theta\right)\\ \cos\theta&=\cos\beta\cos\beta_{p}\cos\left(\lambda- \lambda_{p}\right)+\sin\beta\sin\beta_{p}\end{aligned}\ ] ] this form of the residual from is written so that the pulsar term is taken into account as a phase shift and amplitude modulation and can be tracked by the presence of , where the distance to the pulsar resides .the fourier transform of the residual can be written as , where the are `` beam pattern functions '' for the pulsar and are parameterized by the sky position of the source , .they can be read off as the coefficients of the individual polarizations of the waveform in eq ( [ residual ] ) .[ beampattern1 ] it should be noted that the coefficients are symmetric under the transformation and for the gravitational wave source sky location , corresponding to the antipodal point on the sphere .unfortunately , this degeneracy is manifest from the starting equations and will always result in a strong null signal at the antipodal point .a null stream is constructed from the timing residuals of three pulsars by noting that the same source polarization amplitudes , , appear in the data stream from each pulsar .this fact is exploited by taking linear combinations of pulsar data streams and factoring them in terms of the .the _ null stream _ , is the the linear combination of signals from the requisite set of pulsars for which the putative gravitational wave signal vanishes .if the fourier transform of the pulsar residuals is written as , then the null stream may be written as : where the are linear coefficients .this factors into \nonumber \\ & + & { \tilde h}_{\times}(f)\left[\alpha_{1}{\cal f}_{1}^{\times } + \alpha_{2}{\cal f}_{2}^{\times } + \alpha_{3}{\cal f}_{3}^{\times}\right]\ .\label{factoredeta}\end{aligned}\ ] ] the only way for generically is if the coefficients multiplying in eq.[factoredeta ] are zero : setting the null stream to zero implies a relationship between the arbitrary coefficients and the response functions .the system of equations is underdetermined for the , however the choice of three residual signals is crucial .if only two were chosen then only would solve the equations .here we have a freedom to choose , but there is an obvious choice , examination of the pulsar beam patterns in eqs .( [ beampattern1 ] ) shows that 
these combinations have two free parameters : and , the position of the gravitational wave source on the sky . with the pulsar pattern functions in hand, the derivation of the null stream is reduced to determining the values of that satisfy eq .[ nulleta ] . the sare combinations of the pattern functions , which are themselves only functions of the sky angles .operationally , the null stream search for the sky location is then reduced to a two parameter minimization problem what values of the sky angles minimize ? in noise free data , it will be a true nulling , by definition ; in the presence of noise the null stream will simply change the character of the spectrum by suppressing features that are related to the gravitational wave signal .these linear combinations can be constructed from any set of three pulsar data streams and minimized to determine the location of the source on the sky .given that modern pulsar timing arrays have greater than pulsars as part of the array , there are many different sub - arrays of pulsars that can be chosen to implement the pulsar null stream .this ability to chose sub - arrays can be exploited to increase the pointing ability of the technique .note that we have chosen above to do the analysis on a continuous source in the frequency domain , however , this need not be the case .as in previous work on null signals , , the analysis should work just as well for burst sources and in the time domain .the in depth analysis of combining various sub - arrays of pulsars has been favored over a full treatment of burst sources which will be treated in future work .using the null stream , eq .( [ nulleta ] ) , we can localize the sky position of a gravitational wave source by minimizing . in the following example , a continuous ,sinusoidal smbh was modeled , using ( [ residual ] ) , in maple with a sky position at ( , ) .we use the fft of the residual signal and the sky position of the pulsars to calculate the magnitude of eq .( [ factoredeta ] ) as a function of the sky position of the source .( [ beta1 ] ) shows cross sections of along different longitudes . with the value of set to the ecliptic longitude of the source , .notice that the strong dip at the correct latitude , . the dashed line is a cross section of the same null signal , but at a longitude that is not the correct longitude for the gravitational wave source , and hence does not have the dip .the dotted signal is another realization of the null signal , computed from an independent subarray made up of three other pulsars in the pta .the cross section is taken at the correct longitude and one can see that even though the rest of the signal does not resemble the signal from the first three pulsars , it still possesses the same strong dip at .,width=326 ] notice the large dip at the correct value of the sky position . there is a secondary minimum in the cross section , but it is an order of magnitude larger than the primary minimum . figure [ density1 ] shows the full sky density plot of the null signal . 
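a minimal numerical sketch of this two - parameter minimization is given below ; it assumes that the beam pattern functions of eqs . ( [ beampattern1 ] ) are available as a user - supplied callable and that the fourier transformed residuals of the three pulsars are stored in an array ( all names are ours ) :

    import numpy as np

    def null_coefficients(fplus, fcross):
        # fplus, fcross: length-3 arrays of beam pattern values for the three
        # pulsars at a trial sky position (lambda, beta); the alphas can be
        # taken as the cross product of the two rows, which annihilates both
        # polarization terms in eq. (factoredeta)
        return np.cross(fplus, fcross)

    def null_power(alphas, residual_ffts):
        # |eta(f)|^2 summed over frequency bins; residual_ffts has shape (3, nfreq)
        eta = np.tensordot(alphas, residual_ffts, axes=1)
        return np.sum(np.abs(eta) ** 2)

    def sky_map(residual_ffts, beam, nlon=180, nlat=90):
        # beam(lam, bet) -> (fplus[3], fcross[3]); grid search over the sky
        lons = np.linspace(0.0, 2.0 * np.pi, nlon)
        lats = np.linspace(-np.pi / 2, np.pi / 2, nlat)
        power = np.empty((nlat, nlon))
        for i, bet in enumerate(lats):
            for j, lam in enumerate(lons):
                fp, fx = beam(lam, bet)
                power[i, j] = null_power(null_coefficients(fp, fx),
                                         residual_ffts)
        return lons, lats, power   # the minimum of `power` estimates (lambda, beta)

the cross product used here is one natural solution of the underdetermined system ; any overall rescaling of the coefficients would serve equally well .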
, for 1 set of three pulsars is shown as a density plot .darker areas represent low points in the signal , while lighter shades have a higher signal .notice that there are deep nulls at the correct sky position ( and ) , the antipodal point , and other parts of the sky.,width=326 ]in a given null signal there are local minima which do not coincide with the correct sky position .this is evident in fig .[ beta1 ] , where the null signal given by the solid line has a strong local minimum at around .in fact , it is common , for a single sub - array of 3 pulsars , to have the absolute minimum for a null signal to be at an incorrect sky position . the true null can be identified , and the overall size of the localization uncertainty can be minimized , by combining the null - signal from multiple sub - arrays of pulsars .this is one distinct advantage this method has over interferometric networks the large number of pulsars in the time array yields a large number of sub - arrays that can be combined to create a good null - stream pointing . in this sectionwe will focus on how to characterize the errors in one sub - array .the null stream localization error for a given sub - array of pulsars can be characterized by calculating the distance from the correct sky position to the absolute minimum for a given null signal . as one might suspect from localization schemes for other gravitational wave detectors as the angular distance between the pulsars in the null signal increases , the error of the sky location decreases . in fig .[ errorskymap ] are two examples of what the errors look like for a given null signal .these plots are constructed by putting three pulsars down at the locations indicated .then for every point in the sky , we inject a signal , and ask where the minimum in the null signal is . the shading in fig . [ errorskymap ] indicates the angular distance ( error ) from that point in the sky to the absolute minimum of the null signal . qualitatively it can be seen that there is less error when the pulsars in the null signal are more separated on the sky .this can be investigated more quantitatively by averaging over many sets of pulsars with the same separation . in fig .[ errortriangle1 ] is a statistical analysis of sets of pulsars separated as equilateral triangles in the sky .each data point represents the average of 40 different triangles sprinkled randomly across the sky . .it should also be noted that these errors are still substantial even at larger separations.,width=240 ] the same gravitational wave source sky location was used for all of the different pulsar placements .while the error to the minimum decreases substantially for a single sub - array , the absolute minimum still has a significant error , even at large angular separations .while there is a strong null in the density map of shown in fig .[ density1 ] , there are strong secondary minima as well across the entire sky .the location and strength of the secondary minima are dependent on the combined geometric orientations of the source and the pulsars used in constructing the null stream . because the strength of secondary minima varies dramatically with source position , searches for sky position would benefit from reduction in size of the secondary minima with respect to the null at the true sky location of the source .these secondary minima can be reduced , amplifying the true null , by combining multiple null streams together . 
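before turning to the combination of sub - arrays , note that the localization error used in the statistics above is simply the great - circle distance between the injected sky position and the absolute minimum of the null signal ; a small helper ( the name is ours ) :

    import numpy as np

    def angular_distance(lam1, bet1, lam2, bet2):
        # great-circle separation between two sky positions given in
        # ecliptic longitude / latitude (radians)
        c = (np.sin(bet1) * np.sin(bet2)
             + np.cos(bet1) * np.cos(bet2) * np.cos(lam1 - lam2))
        return np.arccos(np.clip(c, -1.0, 1.0))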
in the case of gravitational wave interferometerswe are limited by the number of observatories at our disposal . however , the pta catalog has more than pulsars .any three can be used to construct a null stream .we call any choice of three pulsars a `` sub - array . ''the sub - array null signals are combined into a single signal by taking their product .this acts to strengthen the signal while suppressing the random fluctuations in the data . for purposes of comparisonthe individual null streams are all normalized by dividing a null signal by its maximum before taking their product ..,width=336 ] combining multiple null streams from independent sub - arrays immediately suppresses the secondary minima , quickly revealing the location of the null at the true sky location of a source .the top sky map in fig . [ densitysets ] shows the product of 3 independent sets of pulsars .there are still swaths of low values where the secondary minima are seen in fig .[ density1 ] , but their relative strength is greatly reduced . the lower sky map in fig .[ densitysets ] shows the product of 9 independent null signals , revealing the location of the true null on the sky .[ betanulls ] shows the one dimensional cross - sections through the parameter space at a constant value of , showing the strong null at the true source location , and only minor variations across the rest of the sky . , but the root has been taken , where is the number of sub - arrays in the null stream .the root is taken in localization comparisons so that the width of the minima can be compared over roughly the same range of values of the null signal.,width=336 ] while there are immediate gains in localization ability by combining multiple sub - arrays , relative gains are eventually reduced each time a new sub - array is added to the product .figure [ localizationnumbersets ] shows one example of how localization ability grows with the number of sub - arrays . to quantify the amount of localization from multiplying multiple null signals we first normalize the signals further by taking the root of the signal , where is the number of null signals in the product ( see figure [ betanullsroots ] to see what this looks like )this gives the null signals the same range .then define a cutoff value of the null signal given as a percentage of the maximum of the null signal .looking at the number of square degrees that falls under this cutoff gives a measure of how much one can localize the position of a given source .these localizations vary depending on the relationship between the gravitational wave source and the pulsars .another way to characterize the localization is to repeat the more statistical analysis done using sets of equilateral triangles to characterize the error of the minimal signal for a given null signal .[ errorarea ] shows how the product of null signals significantly decreases the errors in the minimum .once we get past 3 sub - arrays the error is significantly reduced .the formal presentation of the null stream solution given by eq .[ nulleta ] is an idealized , mathematical observation about the nature of the pulsar timing data and the putative signals it contains .real data , however , is subject to the presence of noise from random processes that can reduce the promising capabilities of this localization method . the effects of noise can be considered in this demonstration by injecting noise into the residual data for the pulsars at various levels , then examining how it affects the solutions for sky positions . 
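the combination just described ( normalize each sub - array map by its maximum , multiply the maps , and take the n - th root so that different numbers of sub - arrays can be compared ) can be sketched as :

    import numpy as np

    def combine_null_maps(maps):
        # maps: array of shape (n_subarrays, nlat, nlon), one null-signal
        # sky map per independent sub-array of three pulsars
        maps = np.asarray(maps, dtype=float)
        n = maps.shape[0]
        normed = maps / maps.max(axis=(1, 2), keepdims=True)
        product = np.prod(normed, axis=0)
        return product ** (1.0 / n)   # n-th root keeps a comparable range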
to produce a noisy data set ,white gaussian noise is generated using maple and added to the residuals for each pulsar . in the examples herethe mean of the noise is set to zero while the standard deviation is set to the maximum amplitude of the gravitational wave source . while an snr is an accepted standard for detection , here it is difficult to see the effect that noise has on the null signal until the noise is closer to snr . in figures [ betanullsnoise ] and [ densitysetsnoise ]we see that the added noise affects the null signal , but fairly subtly . in fig .[ betanullsnoise ] we see that by the time we have the product of three null signals we have regained much of the localization we had without noise . , added using 1 , 3 and 9 sub - arrays of pulsars .as before these are cross - sections through , the correct ecliptic longitude for the source .notice that with only one noisy signal it is difficult to discern the correct latitude for the source , however with a product of three null signals we see a marked dip , and with a product of nine null signals an even more localized minimum.,width=336 ] perhaps counter - intuitively , the null stream approach still provides good localization even when the noise is comparable to the size of the signal , a situation that would be unfathomable in traditional parameter estimation .this can be understood by considering that the product of null signals amplifies the null ( by making the null smaller ) , while white noise is just as likely to increase the signal as decrease it . .the top figure is a single null signal and resembles fig .[ density1 ] strongly , however there is no dip at the correct sky location , as can be seen in the crossection in fig .[ betanullsnoise ] .the lower figure is a product of three null signals , where we see that we have regained the strong localization.,title="fig:",width=326 ] .the top figure is a single null signal and resembles fig .[ density1 ] strongly , however there is no dip at the correct sky location , as can be seen in the crossection in fig .[ betanullsnoise ] .the lower figure is a product of three null signals , where we see that we have regained the strong localization.,title="fig:",width=326 ]null stream mapping of gravitational wave sources relies on the fact that there are correlated gravitational wave signals between detectors .the underlying premise of the null stream construction is that for a collection of pulsars observing the same source , the gravitational wave signal is common to all pulsars in the array , but modified by geometric factors related to the relative position of the source on the sky .we have shown how a linear combination of three pulsar timing streams gives a signal that is minimized at the correct sky location of the gravitational wave source , though not as localized as one might need for electromagnetic counterpart searches .further we have characterized the error and localization ability of one null signal .though there are significant errors with one null signal when multiple signals are combined as products the errors decrease and the localization increases dramatically . the techniques here have focused on analysis in the frequency domain specialized for looking at stable sinusoidal signals expected from super massive black hole mergers , however , future work will consider how these sky localization techniques will work for burst type sources in the time domain .
|
locating sources on the sky is one of the largest challenges in gravitational wave astronomy , owing to the omni - directional nature of gravitational wave detection techniques , and the often intrinsically weak signals being observed . ground - based detectors can address the pointing problem by observing with a network of detectors , effectively triangulating signal locations by observing the arrival times across the network . space - based detectors will observe long - lived sources that persist while the detector moves relative to their location on the sky , using doppler shifts of the signal to locate the sky position . while these methods improve the pointing capability of a detector or network , the angular resolution is still coarse compared to the standards one expects from electromagnetic astronomy . another technique that can be used for sky localization is null - stream pointing . in the case where multiple independent data streams exist , a single astrophysical source of gravitational waves will appear in each of the data streams . taking the signals from multiple detectors in linear combination with each other , one finds there is a two parameter family of coefficients that effectively null the gravitational wave signal ; those two parameters are the angles that define the sky location of the source . this technique has been demonstrated for a network of ground - based interferometric observatories , and for 6-link space interferometers . this paper derives and extends the null - stream pointing method to the unique case of pulsar timing residuals . the basic method is derived and demonstrated , and the necessity of using the method with multiple sub - arrays of pulsars in the pulsar timing array network is considered .
|
we consider two - player games on graphs with winning objectives formalized as a _ weak - parity _ objective . in a two - player game , the set of vertices or states are partitioned into player 1 states and player 2 states . at player 1states player 1 decides the successor and likewise for player 2 .we consider weak - parity objectives , where we have a priority function that maps every state to an integer priority .play _ is an infinite sequence of states , and in a weak - parity objective the winner of a play is decided by considering the minimum priority state that appear in the play : if the minimum priority is even , then player 1 wins , and otherwise player 2 is the winner . the classical algorithm to solve weak - parity games with a naive running time analysis works in time , where is the number of priorities and is the number of edges of the game graph .since can be , in the worst case the naive analysis requires time , where is the number of states .we present an improved analysis of the algorithm and show that the algorithm works in time .we consider turn - based deterministic games played by two - players with _weak - parity _ objectives ; we call them weak - parity games .we define game graphs , plays , strategies , objectives and notion of winning below . * game graphs . * a _ game graph _ consists of a directed graph with a finite state space and a set of edges , and a partition of the state space into two sets .the states in are player 1 states , and the states in are player 2 states . for a state , we write for the set of successor states of .we assume that every state has at least one out - going edge , i.e. , is non - empty for all states .* plays . * a game is played by two players : player 1 and player 2 , who form an infinite path in the game graph by moving a token along edges . they start by placing the token on an initial state , and then they take moves indefinitely in the following way . if the token is on a state in , then player 1 moves the token along one of the edges going out of the state .if the token is on a state in , then player 2 does likewise .the result is an infinite path in the game graph ; we refer to such infinite paths as plays .formally , a _play _ is an infinite sequence of states such that for all .we write for the set of all plays. * strategies . *a strategy for a player is a recipe that specifies how to extend plays .formally , a _ strategy _ for player 1 is a function : that , given a finite sequence of states ( representing the history of the play so far ) which ends in a player 1 state , chooses the next state .the strategy must choose only available successors , i.e. , for all and we have .the strategies for player 2 are defined analogously .we write and for the sets of all strategies for player 1 and player 2 , respectively. 
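a minimal sketch of these objects ( all names are ours ) , with the weak - parity winner decided by the parity of the minimum priority among the states occurring in a play :

    def weak_parity_winner(occurring_states, priority):
        # player 1 wins iff the minimum priority among the states that occur
        # in the play is even, otherwise player 2 wins
        return 1 if min(priority[s] for s in occurring_states) % 2 == 0 else 2

    # a toy game graph: non-empty successor lists, the set of player-1 states,
    # and the priority function
    game = {
        "succ": {0: [1, 2], 1: [0], 2: [2, 0]},
        "player1": {0, 2},
        "priority": {0: 1, 1: 0, 2: 2},
    }
    # e.g. a play visiting exactly {0, 2} has minimum priority 1, so player 2
    # wins: weak_parity_winner({0, 2}, game["priority"]) == 2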
an important special class of strategies is that of _ memoryless _ strategies . memoryless strategies do not depend on the history of a play , but only on the current state . each memoryless strategy for player 1 can be specified as a function that assigns to every player 1 state one of its successors , and analogously for memoryless player 2 strategies . given a starting state , a strategy for player 1 , and a strategy for player 2 , there is a unique play : it starts at the given state , and at every step the next state is the one chosen by the strategy of player 1 if the current state belongs to player 1 , and the one chosen by the strategy of player 2 otherwise . * weak - parity objectives . * we consider game graphs with weak - parity objectives for player 1 and the complementary weak - parity objectives for player 2 . for a play , we define its occurrence set to be the set of states that occur in the play . we also define reachability and safety objectives as they will be useful in the analysis of the algorithms . 1 . _ reachability and safety objectives . _ given a set of target states , the reachability objective requires that some state of the target set be visited , and dually , the safety objective requires that only states of the given safe set be visited . formally , the set of winning plays for reachability consists of the plays whose occurrence set intersects the target set , and the set of winning plays for safety consists of the plays whose occurrence set is contained in the safe set . the reachability and safety objectives are dual in the sense that the complement of a reachability objective is the safety objective for the complement of the target set . 2 . _ weak - parity objectives . _ for d priorities we consider priority functions that map every state to a priority in { 0 , 1 , ... , d-1 } . the weak - parity objective for a priority function is the set of plays in which the minimum priority that occurs is even ; the objective for player 1 is the weak - parity objective and the objective for player 2 is the complementary objective . the algorithm proceeds by computing attractors , removing each attractor from the game graph , and then proceeding on the resulting subgame graph . at iteration i , we denote the current game graph , its state space and its set of edges accordingly , and the attractor to the set of states of priority i in the current subgame is computed . if i is even , this attractor is included in the winning set for player 1 , and otherwise it is included in the winning set for player 2 ; in both cases the attractor is removed from the game graph for the subsequent iterations . ( figure : a 2-player game graph and its priority function . ) the priority function is given as an array indexed by the states . the procedure for obtaining the target sets involves several steps ; we present the steps below . 1 . _ renaming phase . _ first , we construct a renaming of the states such that states of smaller priority are numbered lower than states of larger priority . here is a linear - time procedure for renaming . consider an array of counters ct with one entry per priority , each initialized to 0 ; a single pass over the states increments ct[k] for every state of priority k , so that afterwards ct[k] stores the number of states with priority k . a prefix sum over ct then gives , for every priority , the first new index of its block : the states of priority 0 occupy the first ct[0] new indices , the states of priority 1 the next ct[1] indices ( in the same relative order ) , and so on . a second pass over the states assigns to every state the next free index in the block of its priority and records , at the same time , the inverse renaming : if a state receives new index j , then entry j of the inverse array is set to that state . this procedure also works in linear time . 2 . in the renaming phase we have thus obtained , in linear time , a renaming stored in one array and the inverse renaming stored in a second array , so that , for a given state , the renaming and its inverse can be obtained in constant time and can be accessed in constant time .
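a python sketch of the renaming phase described above ( a counting sort on the priorities that keeps the relative order of states within each priority and returns the renaming together with its inverse ) :

    def rename_by_priority(p, d):
        # p: list where p[s] is the priority of state s (0 <= p[s] < d)
        # returns: rename (old state -> new index), inverse (new index -> old
        # state) and the first new index of each priority block; O(n + d)
        n = len(p)
        ct = [0] * d
        for s in range(n):                 # count states of each priority
            ct[p[s]] += 1
        start = [0] * d                    # prefix sums: block boundaries
        for i in range(1, d):
            start[i] = start[i - 1] + ct[i - 1]
        rename = [0] * n
        inverse = [0] * n
        nxt = list(start)
        for s in range(n):                 # stable: preserves relative order
            rename[s] = nxt[p[s]]
            inverse[rename[s]] = s
            nxt[p[s]] += 1
        return rename, inverse, start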
we can move back and forth the renaming without increasing the time complexity other than in constants .we now obtain the set of states as targets required for the attractor computation of step 2.1 of algorithm [ algorithm : classical ] in total time across the whole computation .first , we create a copy of as an array , and keep a global counter called initialized to 0 .we modify the attractor computation in step 2.1 such that in the attractor computation when a state is removed from the game graph , then ] , ( the entry of the array that represent state is set to ) .this is simply done as follows =-1 ] . also , for all we have =-1 ] .+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ obtaintargets_( ) + ; + ( - 1 ; j:=j+1 ] ) + \ { = d[j+g] ] ; + .+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ + the work for a given is )$ ] and since , the total work to get the target sets over all iterations is .
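a sketch of this lazy target - set extraction , under the reading that the copied array holds the states in renamed ( priority - sorted ) order , that the attractor computation overwrites removed entries with -1 , and that the global counter points at the start of the block for the priority currently processed ( all names are ours ) :

    def make_target_extractor(inverse, ct):
        # inverse: states listed in renamed (priority-sorted) order
        # ct[i]: number of states with priority i
        d = list(inverse)          # working copy; attractor removal writes -1
        state = {"g": 0}           # global cursor into d

        def remove(new_index):
            d[new_index] = -1      # called when a state leaves the game graph

        def obtain_targets(i):
            # states of priority i still present; O(ct[i]) work per call,
            # hence O(n) in total over all priorities
            g = state["g"]
            targets = [d[g + j] for j in range(ct[i]) if d[g + j] != -1]
            state["g"] = g + ct[i]
            return targets

        return remove, obtain_targets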
|
we consider games played on graphs with the winning conditions for the players specified as _ weak - parity _ conditions . in weak - parity conditions the winner of a play is decided by looking into the set of states appearing in the play , rather than the set of states appearing infinitely often in the play . a naive analysis of the classical algorithm for weak - parity games yields a quadratic time algorithm . we present a linear time algorithm for solving weak - parity games .
|
as is well known , financial time series present a strongly inhomogeneous time behavior .this is specially true when one considers either the volatility or the activity , _i.e. _ , the number of transactions per unit of time . indeed if we look at the variance of the return in a time window of , say , one day , we will observe periods of relative constant and regular behavior followed by other periods of strong variation of the price . in the same way there are days with few transactions and others where the number of trades is considerably larger .this great variability in the volatility or in the activity is generally referred to as volatility clustering or intermittency of volatility and activity . in this workwe refer to both quantities. we will thus perform measures on the activity and use two volatility models : ( i ) arch models and ( ii ) stochastic volatility ( sv ) models , where the relationship between volatility and activity is set by the usual assumption of proportionality between them .as is well known , the time interval or distance between two consecutive transactions is a random variable described by a probability density function ( pdf ) which in many cases presents an asymptotic power law of the form however does not tell anything about the independence of consecutive s .we note that if consecutive s are independent a power law tail in can explain an inhomogeneous behavior in the number of events per unit of time , where a possible measure of this inhomogeneity is the distribution of the number of trades in a fixed period of time . as feller proved many years ago ,if the time interval between some particular events , which we will call markers , in a time series is distributed according to a given density and the independence condition for holds , then the probability distribution to observe a fixed number of these markers in a given time interval , , follows a scaling law of the form where is some positive exponent and is a positive and integrable function . with the help of a recently developed technique for the analysis of time series called diffusion entropy ( de ) , we will see that the scaling observed in the distribution of the number of transactions in a time interval does not correspond to feller s analytical prescription obtained with the density estimated from data .we are thus forced to make the additional hypothesis that consecutive s are not independent .we will also assume that this correlation is due to the presence of peaks ( or clusters ) in the mean activity followed by periods of relative calm .therefore the s are positively correlated because during a peak of activity they are shorter than the mean value while away from a peak they are greater than the mean .indeed , in such a case which implies a positive correlation : .let be the random time distance between two consecutive peaks and denote by its probability density function .similarly to the distribution of the distance between two consecutive transactions , we will also assume that obeys an asymptotic power law : in this scheme the results of the de technique can be described directly in terms of the time distance and the magnitude of the peaks of activity , the latter described by a pdf .we will see , like in ref . 
, that the distribution of the size of the cluster given by does not play an important role , because the time distance distribution is characterized by a more anomalous exponent than .consequently we will interpret the results of de as a consequence of a non - poissonian distribution of the distance between peaks of activity .there are in the literature several approaches that try to explain the autocorrelation of activity and volatility .one recent model is based on the hypothesis that the intermittency of activity is caused by a subordination to a random walk , like in the case of the so called on - off intermittency .as clearly described in , this procedure should give for the distribution of distances between clusters a scaling law of the form where is a cutoff function ensuring the existence of the first moment of and is the time scale at which this cutoff takes place .one simple choice for is given by the exponential which allows that presents an asymptotic poissonian behavior .as we have already mentioned , other possible approaches to the problem of activity correlation are provided by arch models or sv models .we will show that both , arch and sv , models lead to a correlation in the volatility which more likely resembles to a power law tail exponent observed in a variety of financial markets .we will also show that a particular arch model , the tarch model presented in , and the sv model presented in both result in the same scaling law than that observed with the de technique . finally , and due to the absence of intra - day disturbances in the time series obtained by arch and sv models , we are also able to evaluate numerically the waiting time distribution of the distance between peaks and this distribution is compatible with the empirical evidence given by a power - law behavior for a ( long ) transient period followed by a poissonian ( exponential ) behavior .we incidentally note that this asymptotic exponential behavior indicates that very far clusters do not influence each other .the paper is organized as follows .we start with a brief review on the de technique and a simple analytical proof that the de results are determined by the most anomalous power law tail between and .after that we show the results obtained by means of the de technique on tick by tick data of a foreign exchange ( fx ) market .we also perform a filtering procedure on data in order to prove that the observed scaling is due to the anomaly in the waiting time pdf and not in the cluster size pdf .finally , we briefly describe the arch and the sv models and the results of the de and the waiting time distribution on the time series constructed using these models .the diffusion entropy technique is basically an algorithm designed to detect memory in time series .de is specially suitable for intermittent signals , _i.e. , _ for time series where bursts of activity are separated by periods of quiescent and regular behavior .the technique has been designed to study the time distribution of some markers ( or events ) along the time series and thus discover whether these events satisfy the independence condition ( ) where is the time interval between the marker labeled and the next one . as markerwe use here a very simple definition : _ each trade in the time series is a marker_. 
in order to apply the de technique we need to construct a new series which is a function of a coarse grained time ( in our case s ) and where is precisely the number of transactions that occurred in the previous second .we next define a new random process through the following moving counting on note that is precisely the number of markers ( _ i.e. , _ trades ) in an interval of length starting at position .if we vary the value of along the interval ] , where is the mean waiting time . hence note that , as we have ( see eq .( [ powerlaw ] ) ) moreover , as and consistently with the pareto law according to which decays as , we have where is the average peak intensity . taking into account eqs .( [ phi(s)])-([h(k ) ] ) we see that as and the joint distribution can be written as where and are such that and .we recall that the de technique measures the scaling in a moving reference frame " where the average activity is zero , for all . in order to obtain the pdf for in such reference frame we perform in eq .( [ rhoapprox ] ) the following substitution and after applying the diffusive limit we get , to the lowest order , substituting eq .( [ 2ndapprox ] ) into eq .( [ 1stapprox ] ) finally yields this equation shows that the smallest exponent between and determines the asymptotic scaling of according to the exponent therefore , the scaling perceived by de is determined by the most anomalous exponent of the scaling between the size of the clusters of activity and the distribution of their time distances .note that the case agrees with that of eq .( [ giacomoeq ] ) .we also observe that we have proven this fundamental result for the most general case in which there is no assumption on the possible correlation , or independence , among intensities and waiting times .having this in mind , we return to the problem of understanding the scaling exponent appearing in the us dollar - deutsche mark futures market . to what effect is due this scaling ?in other words , is the exponent determined by the time distance between clusters or by their size ? in order to solve this question we impose a cutoff in the size of the peaks of activity by eliminating those transactions whose time distance from the previous one is below certain threshold ( note that this actually reduces cluster sizes because the number of transactions counted is now smaller ) .if after this cutoff procedure the scaling remains invariant then would be determined by the time distances and not by the size of the clusters . in fig .[ fig1b ] the de results are shown for different values of the time - threshold ranging from to s. we see there that the slope is practically unchanged which confirms the assumption that the exponent is solely determined by the anomaly in the time distances between the clusters and not by any anomaly of their size .at the end of the last section , we have indirectly shown that the anomalous scaling observed in data is not caused by fat tails in the peak intensity distribution but by the anomalous scaling in the waiting time distribution between peaks . 
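a simplified sketch of the de measurement used here ( it omits the moving - reference - frame detrending and uses an ad hoc histogram binning ; all names are ours ) : build the per - second activity from the tick times , sum it over moving windows of length t , and follow the entropy of the distribution of the windowed counts as a function of t ; the scaling exponent is then the slope of the entropy versus ln t :

    import numpy as np

    def activity_per_second(tick_times):
        # number of transactions in each one-second bin
        t = np.floor(np.asarray(tick_times)).astype(int)
        return np.bincount(t - t.min())

    def diffusion_entropy(xi, windows, nbins=50):
        # xi: activity series; for each window length t, slide a window over
        # the series, histogram the windowed sums and compute the entropy of
        # the resulting distribution
        s = []
        for t in windows:
            counts = np.convolve(xi, np.ones(t, dtype=int), mode="valid")
            hist, edges = np.histogram(counts, bins=nbins, density=True)
            width = edges[1] - edges[0]
            p = hist[hist > 0]
            s.append(-np.sum(p * np.log(p)) * width)
        return np.array(s)   # fit s versus ln(t) to estimate the scaling exponent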
another more direct way to prove this would have been to single out the peaks on real data and look for their time distribution .unfortunately it is very difficult , on real data , to define a peak of activity and compute the waiting time distribution between them .this is because there are peaks of activity that appear at fixed times ( we will call these deterministic peaks " ) at the daily opening and closing sessions , at the opening during the day of other markets and even weekly at the opening of each monday .these deterministic peaks do not contribute to the increase of entropy .however , they do affect any estimation of the waiting time distribution making it very difficult to get a reliable estimate of it . a possible way out from this situation would be to generate an artificial time series simulating the real market evolution . in this artificial series , the activity would be replaced by volatility following the accepted correspondence between them and we would check there all the scaling phenomena reported up till now .results of the de analysis for us dollar - deutsche mark futures with ( solid circles ) , the tarch model ( empty circles ) , the sv model ( diamonds ) , and finally for the on - off intermittency model as given by eq .( [ eqbouchaud ] ) ( crosses).,scaledwidth=90.0% ] we will follow this procedure and choose two well accepted models for reconstructing market activity without deterministic peaks : ( i ) the tarch model , and ( ii ) the stochastic volatility model presented in .we will see that both models give the same results than those of the de technique .we finally discuss the prescription given in eq.([eqbouchaud ] ) based on multi - agent models to see whether it agrees with our results or not .this particular arch model , called tarch by its authors , is given by where is the volatility , is the one day return calculated at time , is the heavyside step function and is gaussian noise with zero mean and unit variance .the other parameters , estimated from daily data of the dow jones industrial index from 1988 to 2000 and obtained in , are : , , , . using eq .( [ tarch ] ) we generate a time series for .we then perform the de analysis on this series ( with a time step of 1 day ) by supposing that the number of trades in the i - th day is proportional to .the results are shown in fig .[ fig1 ] ( empty circles ) compared with the results on real data for ( solid circles ) .we clearly see that the tarch model predicts for a scaling exponent which agrees with actual data . for than a poissonian time days the model yields .it is worth noticing that the change in the slope of real data is likely due to the lack of statistics .moreover , we do not have enough data points to determine whether the change of slope takes place at the same time scale than in the tarch model .nevertheless , arch - type models ( similar results were obtained with in eq.([tarch ] ) ) seem to take into account the correct structure of the intermittency of financial series . 
waiting time distribution for the distance between clusters ( in logarithmic scale ) for the tarch model ( solid circles ) and the sv model ( empty circles ) .a distinct asymptotic behavior is clearly present : a power law tail with exponent and an exponential decay.,scaledwidth=90.0% ] from the series generated using eq .( [ tarch ] ) we can also evaluate the waiting time distribution between peaks because now we do not have deterministic peaks and other periodic effects that were present in actual data .the result is shown in fig .[ fig2 ] and as we see there that for a good fit is provided by the following power law : where and . for a clear exponential ( poisson ) behavioris present .this result has a simple physical explanation : if there is a first cluster at time the probability to observe another one just after the first is high while very distant clusters are practically independent which explains the asymptotic poisson observed behavior in fig.[fig2 ] .there exists another way of modelling volatility clustering .the so - called stochastic volatility models are an alternative choice to arch models and they are considered to be the most natural extension to the classic geometric brownian motion for the price dynamics in continuous - time finance .let us start with the zero - mean return ( i.e. , the log - price without drift ) and whose dynamics is given by the following stochastic differential equation where this equation has to be understood in the it sense and is the volatility and is the wiener process , _ i.e. _ , is gaussian white noise with zero mean and correlation function given by all sv models assume that the volatility is itself a random process .there are several ways to describe the dynamics of the volatility .one of the simplest models , which still contains almost all the basic ingredients prescribed by real markets , is given by the ornstein - uhlenbeck ( ou ) process one key property of this model is that it exhibits a stationary solution thanks to the existing reverting force quantified by to a certain average , the so - called normal level of volatility " .the stationary solution is a gaussian distribution and the resulting distribution for the return has fat tails .in addition , stylized facts such as the negative skewness and the leverage correlation require that the changes of the volatility be negatively correlated with the random source of return changes .in other words , the driving noises appearing in eqs .( [ xeq ] ) and ( [ sigma ] ) are anti correlated , that is : where . for the ou sv as given in eq .( [ sigma ] ) the characteristic exponential time decay of leverage correlation is given by which is typically of the order of few trading days ( see below ) .although the ou model has some disagreements with observations , it is complex enough to catch all the statistical properties that we are studying here .we therefore simulate the sv model with the parameters estimated from daily data of the dow jones index from 1900 to 1999 .thus , the reverting force is equal to , the noise amplitude is , the normal level of volatility reads and the correlation coefficient is .the results of the de analysis are reported in fig .[ fig1 ] ( diamonds ) while the waiting time distribution between clusters is shown in fig .[ fig2 ] ( empty circles ) . 
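an euler - maruyama sketch of the ou stochastic volatility model of eqs . ( [ xeq ] ) and ( [ sigma ] ) with correlated driving noises ; the parameter values are again placeholders standing in for the dow jones estimates :

    import numpy as np

    def simulate_ou_sv(n, dt=1.0, alpha=0.02, k=0.1, m=0.01, rho=-0.5, seed=0):
        # d sigma = -alpha * (sigma - m) dt + k dW2
        # d X     = sigma dW1,  with corr(dW1, dW2) = rho  (placeholder values)
        rng = np.random.default_rng(seed)
        w1 = rng.standard_normal(n)
        w2 = rho * w1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
        sigma = np.empty(n)
        x = np.zeros(n)
        sigma[0] = m
        for t in range(1, n):
            sigma[t] = (sigma[t - 1]
                        - alpha * (sigma[t - 1] - m) * dt
                        + k * np.sqrt(dt) * w2[t])
            x[t] = x[t - 1] + sigma[t - 1] * np.sqrt(dt) * w1[t]
        # np.abs(sigma) can be used as the activity / volatility proxy
        return x, sigma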
in this case , and analogously to the tarch model , we also observe a power law behavior followed by an exponential decay .the only difference is the value of the poissonian time which for this model is near to 40 days while for the tarch model is approximately 100 days .we have checked numerically that this difference is due to the fact that the parameters defining each model are estimated from the dow jones index _ over a different period of time _, much larger for the sv model than for tarch model . in any case, we can not discard any of these approaches on the basis of the empirical results . the intermittent model of the activityis also predicted and studied by several multi - agent or minority game models .these models can be connected to on - off intermittency and they generally imply that the persistency of activity is subordinated to a random walk which indicates that the waiting time distribution has the form given in eq.([eqbouchaud ] ) .as suggested in we also obtain the so - called variogram of the data , although the results denote that , if a tail is present like in eq.([eqbouchaud ] ) , causing a behavior in the variogram , the tail lasts less than 5 days probably because the fx market is more liquid than the ones considered in . furthermore the de analysis performed on a series generated according to eq .( [ eqbouchaud ] ) also leads to a transient followed by the exponent of the asymptotic behavior whereas , as is clearly seen in fig .[ fig1 ] , the transient never exhibits the exponent beyond 1 day but presents a constantly increasing exponent which is most of the time greater than 1 .we have performed the de analysis on the activity of the tick by tick time series us dollar - deutsche mark futures from 1993 to 1997 .the results clearly show the presence of an anomalous scaling , for the probability distribution of the activity , near the exponent .we have also implemented the same analysis on the volatility obtained with the tarch model and with the ou stochastic volatility model .we find in both cases an excellent agreement between the scaling measured either on the actual data and on the constructed series .we compare the results with the scheme of the subordination of the volatility to a random walk leading to eq .( [ eqbouchaud ] ) observing that a power law exponent for the tail of the distribution of the distances between peaks of volatility is more plausible .we believe that the main reason why the tarch and the sv models give better results is that in on - off intermittency models the occurrence of a peak can be considered as subordinated to a random walk but the weak restoring force ( which has to be included in the model in order to describe mean reversion ) not only causes the final stationarity and the poisson tail of fig .[ fig2 ] but also affects the process of regression to equilibrium modifying in a fundamental way ( from to ) the transient behavior of .this work has been supported in part by direccin general de proyectos de investigacin under contract no .bfm2003 - 04574 , and by generalitat de catalunya under contract no .2001sgr-00061 .engle , econometrica * 61 * , 987 ( 1982 ) j.p .fouque , g. papanicolaou , k. ronnie sircar , _ derivatives in financial markets with stochastic volatility _ , 1st .( cambridge university press , cambridge , uk , 2000 ) j. perell , j. masoliver , phys .e * 67 * , 037102 ( 2003 ) j.p .bouchaud , i. giardina , m. mzard .quant . fin . * 1 * , 212 ( 2001 ) j.p .bouchaud , i. giardina , physica a , * 324 * , 6 ( 2003 ) ; i. 
giardina , j.p .bouchaud , m. mzard , physica a * 299 * , 28 ( 2001 ) j. masoliver , m. montero , g. h. weiss , phys .e * 67 * , 021112 ( 2003 ) j. masoliver , m. montero , j. perell , g. h. weiss , cond - mat/0308017 w. feller , trans .soc . * 67 * , 98 ( 1949 ) p. grigolini , l. palatella , g. raffaelli , fractals * 9 * , 439 ( 2001 ) m.s .mega , p. allegrini , p. grigolini , v. latora , l. palatella , a. rapisarda , s. vinciguerra , phys .lett * 90 * , 188501 ( 2003 ) t. lux , m. marchesi , nature * 397 * , 498 ( 1999 ) ; int .appl . fin . * 3 * , 675 ( 2000 ) n. platt , e.a .spiegel , c. tresser , phys .70 * , 279 ( 1993 ) ; n. platt , s.m .hammel , j. f. heagy , phys .* 72 * , 3498 ( 1994 ) r.f .engle , a.j .patton , quant .fin . * 1 * , 237 ( 2001 ) j. masoliver , j. perell , int .appl . fin . * 5 * , 541 ( 2002 ) p. allegrini , p. grigolini , p. hamilton , l. palatella , g. raffaelli , phys .e * 65 * , 041926 ( 2002 ) p. allegrini , r. balocchi , s. chillemi , p. grigolini , l. palatella , g. raffaelli , in _ismda 2002 , lecture notes computer sciences 2526 , medical data analysis " _ , edited by a. colosimo et al .( springer - verlag , berlin , heidelberg , 2002 ) p. 115 p. allegrini , r. balocchi , s. chillemi , p. grigolini , p. hamilton , r. maestri , l. palatella , g. raffaelli , phys .e * 67 * , 062901 ( 2003 ) data record is made up of 6 hours and 40 minutes of trading activity each day .the opening of a given day has been attached to the closure of the previous day thus neglecting any possible over - night dynamics .we have checked numerically that this hypothesis should not be considered as a real constraint . even in the casethat the typical time duration of a peak is comparable with the distance between peaks , the condition required in de technique for obtaining the correct asymptotic scaling is that . this periodic behavior can be easily singled out by a fast fourier transform of the fx data . in the original modelthere is a mean drift in the return corresponding to the mean growth of the market .our data correspond to a fx market where this effect is not present , therefore , there is no mean drift in our analysis .j. perell , j. masoliver , and j .-bouchaud , cond - mat/0302095 a. krawiecki , j.a .hoyst , d. helbing , phys .rev . lett . * 89 * , 158701 ( 2002 )
|
we study the activity , _ i.e. _ , the number of transactions per unit time , of financial markets . using the diffusion entropy technique we show that the autocorrelation of the activity is caused by the presence of peaks whose time distances are distributed following an asymptotic power law which ultimately recovers the poissonian behavior . we discuss these results in comparison with arch models , stochastic volatility models and multi - agent models showing that arch and stochastic volatility models better describe the observed experimental evidences .
|
cell division is a complex biological process whose success crucially depends on the correct segregation of the genetic material enclosed in chromosomes into the two daughter cells .successful division requires that chromosomes should align on a central plate between the two poles of an extensive microtubule ( mt ) structure , called the mitotic spindle , in a process known as congression .furthermore , the central region of each chromosome , the kinetochore , should attach to mts emanating from each of the two poles , a condition known as bi - orientation .only when this arrangement is reached , do chromosomes split into two chromatid sisters that are then synchronously transported towards the poles .failure for chromosomes to congress or bi - orient can induce mitotic errors which lead to chromosomal instability ( cin ) , a state of altered chromosome number , also known as aneuploidy .cin is a characteristic feature of human solid tumors and of many hematological malignancies , a principal contributor to genetic heterogeneity in cancer and an important determinant of clinical prognosis and therapeutic resistance .chromosome congression occurs in a rapidly fluctuating environment since the mitotic spindle is constantly changing due to random mt polymerization and depolymerization events .this process , known as dynamic instability , is thought to provide a simple mechanism for mts to search - and - capture all the chromosomes scattered throughout the cell after nuclear envelope breakdown ( neb) . once chromosomes are captured , they are transported to the central plate by molecular motors that use mts as tracks .the main motor proteins implicated in this process are kinetochore dynein , which moves towards the spindle pole ( i.e. the mt minus end ) and centromere protein e ( cenp - e or kinesin-7 ) and polar ejection forces ( pefs ) , both moving away from the pole ( i.e. they are directed towards the mt plus end ) .pefs mainly originate from kinesin-10 ( kid ) and are antagonized by kinesin-4 ( kif4a ) motors , sitting on chromosome arms . while pefs are not necessary for chromosome congression , they are vital for cell division since they orient chromosome arms , indirectly stabilize end - on attached mts and are even able to align chromosomes in the absence of kinetochores .recent experimental results show that chromosome transport is first driven towards the poles by dynein and later towards the center of the cell by cenp - e and pef ( see fig [ fig : newcartoon ] ) .a quantitative understanding of chromosome congression has been the goal of intense theoretical research focusing on the mechanisms for chromosome search - and - capture , motor driven dynamics and attachments with mts . a mathematical study of search - and - capturewas performed by holy and leibler who computed the rate for a single mt to find a chromosome by randomly exploring a spherical region around the pole .later , however , wollman et al . showed numerically that a few hundred mts would take about an hour to search and capture a chromosome , instead of few minutes as observed experimentally .it was therefore argued that mts should be chemically biased towards the chromosomes . 
an alternative mechanism proposed to resolve this discrepancy is the nucleation of mts directly from kinetochores, which was incorporated in a computational model treating chromosomal movement as random fluctuations in three dimensions. describing motor driven chromosome dynamics and mt attachment has also been the object of several computational studies, mainly focusing on chromosome oscillations. these one-dimensional models do not account for congression, because they do not consider peripheral chromosomes, not lying between the spindle poles at neb, which are, however, experimentally observed in mammalian cells. three dimensional numerical models have been extensively introduced to study cell division in yeast, but in that case motor proteins are not essential for congression and there is no neb. it is therefore not clear to what extent these models can be applied to mammalian cells. despite the number of insightful experimental and theoretical results, it is still unclear how a collection of deterministic active motor forces interacts with a multitude of randomly changing mts to drive a reliable and coherent congression process in a relatively short time. a key factor that has been completely overlooked in previous studies is the role of the number of mts composing the spindle. this is because, on the one hand, it is very difficult to measure this number experimentally in a dividing cell: the only measurement to our knowledge is reported in an early paper estimating the number of mts in the mitotic spindle of kangaroo-rat kidney (ptk) cells as larger than . on the other hand, computational limitations have restricted the number of simulated mts to just a few hundred. yet the misregulation of several biochemical factors controlling mt nucleation (e.g. the centrosomal protein 4.1-associated protein cpap) or mt depolymerization (e.g. the mitotic centromere-associated kinase or kinesin family member 2c mcak/kif2c) is known to affect congression, suggesting that the number of mts should indeed play an important, but as yet unexplored, role in the process. here we tackle this issue by introducing a three dimensional model of motor driven chromosome congression and bi-orientation during mitosis involving a large number of randomly evolving mts. our model accurately describes the processes of stochastic search-and-capture by mts and deterministic motor-driven transport, and reproduces experimental observations obtained when individual motor proteins were knocked down. furthermore, the model allows us to explore ground that is extremely difficult to cover experimentally and vividly demonstrates the crucial role played by the number of mts in achieving successful chromosome congression and bi-orientation.
increasing the number of mts enhances the probability of bi-orientation but slows down congression of peripheral chromosomes, due to the increase of pefs with the number of mts. conversely, when the number of mts is too low, the congression probability is increased but bi-orientation is impaired. most importantly, the numerical value of the optimal number of mts is around , which agrees with experimental estimates but is two orders of magnitude larger than the numbers employed in previous computational studies. we consider a three-dimensional model for chromosome congression and bi-orientation in mammalian cells based on the coordinated action of three motor proteins and a large number of mts emanating from two spindle poles. chromosomes and mts follow a combination of deterministic and stochastic rules. attached chromosomes obey a deterministic overdamped equation driven by motor forces and use mts as rails, but attachments and detachments occur stochastically. similarly, mts grow at constant velocity but can randomly switch between growing and shrinking phases. the dynamics is confined within the cell cortex, modelled as a hard envelope that repels mts and chromosomes. we set the cortex major principal axis parallel to the axis, and the minor axes as and parallel to the and axes, respectively. this results in a slightly flattened but almost circular cell. chromosomes are initially uniformly distributed in a sphere of radius representing the nuclear envelope. we assume that the spindle poles are already separated and kept at a constant distance throughout the congression/bi-orientation process, in positions . mts emanate from each pole radially as straight lines in random spatial directions. a fraction of interpolar mts forms a stable scaffold, and the remainder grow or shrink with velocities and , following the dynamical instability paradigm. in this paradigm, the transition from growing to shrinking, known as catastrophe, occurs with rate and the reverse process, known as rescue, occurs with rate . following ref. , the rates of mt catastrophe and rescue both depend on the force acting on the tip of the mt as and , where and are the sensitivities of the processes. in our simulations, the only forces on the mts are due to end-on attachments with kinetochores, which we describe in detail below. in most simulations, we consider a constant number of mts, but we also study the case in which mts nucleate at rate from each pole. chromosomes consist of two large cylindrical objects, the chromatid sisters, joined at approximately their centers. chromosome arms are floppy, with an elastic modulus around 500 pa, but they tend to be aligned on a plane by pefs. we therefore treat the chromosome arms as a two dimensional disk of radius , representing the cross-section for their interaction with mts (see fig [ fig : cartoon]a). at the centre of each chromosome sit two kinetochores, highly intricate protein complexes fulfilling a wide variety of tasks, chief of which is interacting with mts. in the model, the two kinetochores are treated as a sphere of radius defining the interaction range with mts (see fig [ fig : cartoon]a).
fig [ fig : cartoon ] : a) in the model the arms are represented by a disk of radius , corresponding to the chromosome cross-section, and the kinetochore by a sphere of radius . microtubules (red) interact with the chromosome and exert forces on it. b) a mt passing through a chromosome arm adds a force in the direction of the plus-end of the mt. c) lateral attachments add constant forces originating from groups of motor proteins at the kinetochore. which group, dynein or cenp-e, is active is determined by the simulation and described in detail in the main body of the text. d) mt tips can form end-on attachments with the kinetochore, represented by a harmonic spring with stiffness and zero rest length. ] chromosomes can interact with mts in three distinct ways: pefs (fig [ fig : cartoon]b), lateral attachments (fig [ fig : cartoon]c) and end-on attachments (fig [ fig : cartoon]d). each of these interactions is associated with a specific motor force, as illustrated in the schematic in fig [ fig : cartoon ] and described below. time is discretized and at each time step we first implement stochastic events in parallel, then perform mt growth/shrinking and update chromosome positions according to the discretized overdamped equations of motion $\gamma \, \dot{\mathbf{x}}_i = \mathbf{f}_i$, where $\gamma$ is the drag coefficient and $\mathbf{f}_i$ is the total motor force acting on chromosome $i$. the total force is the sum of pefs, lateral attachment forces due to dynein and cenp-e, and end-on-attachment spring forces. the precise form of these forces is described in detail below. for every mt crossing the chromosome within a distance of its geometrical center (fig [ fig : cartoon]b), the chromosome acquires a pef due to motors sitting at the chromosome arms, in the direction of the plus end of the mt. in our model, lateral kinetochore-mt attachments form when a mt crosses the kinetochore interaction sphere of radius . the mt then serves as a track along which the chromosome is slid by one of two groups of motor proteins, cenp-e or dynein. cenp-e applies a force towards the plus end of the mt, away from the spindle pole, while dynein applies a force towards the minus end of the mt, thus pointing in the direction of the spindle pole, as illustrated in fig [ fig : cartoon]c. since we use overdamped dynamics, a constant force corresponds to a constant velocity with which the group of motor proteins moves the chromosome. to determine which type of motor is active, we take a deterministic approach motivated by experimental results: we initially set cenp-e as the active motor for chromosomes that are inside a shell of radius and dynein for the remaining, peripheral chromosomes. experiments show that dynein brings peripheral chromosomes to the poles and is then inactivated by the action of the kinase aurora a, while cenp-e is activated. we simulate this by switching off dynein at the pole and replacing it by cenp-e. the cenp-e motor prefers to walk on long-lived mts, giving the chromosome the bias necessary to congress at the cell center. the biochemical factor underlying this process has recently been identified with the detyrosination of spindle microtubules pointing towards the center of the cell. in the model, we form lateral attachments when cenp-e is active only if the mt has a lifetime larger than . the two kinetochores in our model are represented as half-spheres and each has slots for end-on attachments with mts.
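to make the force bookkeeping described above concrete, the following minimal python sketch assembles the total motor force on a chromosome from the three contributions (pefs directed along mt plus ends, constant lateral forces from dynein or cenp-e, and zero-rest-length harmonic springs for end-on attachments) and advances the overdamped equation of motion by one explicit euler step. this is not the authors' c++ implementation; the function names, force amplitudes, drag coefficient and time step are illustrative assumptions.

```python
import numpy as np

def total_force(x, pef_dirs, lateral_dirs, endon_tips,
                f_pef=1.0, f_motor=5.0, k_spring=10.0):
    """Sum the three force contributions acting on a chromosome at position x.

    pef_dirs     : unit vectors along the plus ends of MTs crossing the arms
    lateral_dirs : unit vectors along laterally attached MTs, already signed
                   (towards the plus end for CENP-E, towards the minus end for dynein)
    endon_tips   : positions of end-on attached MT tips (zero-rest-length springs)
    """
    f = np.zeros(3)
    for u in pef_dirs:
        f += f_pef * u                # polar ejection forces
    for u in lateral_dirs:
        f += f_motor * u              # constant motor force along the track
    for tip in endon_tips:
        f += k_spring * (tip - x)     # harmonic end-on coupling
    return f

def euler_step(x, force, gamma=1.0, dt=0.01):
    """One step of the overdamped dynamics gamma * dx/dt = F."""
    return x + dt * force / gamma

# toy usage: one end-on attachment at the origin plus one CENP-E lateral track
x = np.array([1.0, 0.5, 0.0])
f = total_force(x,
                pef_dirs=[],
                lateral_dirs=[np.array([-1.0, 0.0, 0.0])],
                endon_tips=[np.zeros(3)])
print(euler_step(x, f))
```

since the dynamics is overdamped, a constant motor force indeed translates into a constant sliding velocity, as stated above.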
in general, when the tip of an itinerant mt is within distance of a kinetochore with available slots, the mt and the kinetochore form an end-on attachment. however, after neb the kinetochores of peripheral chromosomes are covered by dynein, inhibiting end-on attachments. hence, we allow for end-on attachments only when cenp-e is active. the force on the chromosome from an end-on attached mt is transmitted via a harmonic coupling with zero rest length and spring constant . mts can detach stochastically from kinetochores with a rate that depends on the applied force and on the stability of the attachment. biochemical factors, such as aurora b kinase, ensure that faulty attachments are de-stabilized and correct attachments stabilized. in particular, intra-kinetochore tension in bi-oriented chromosomes inhibits the de-stabilizing effect of aurora b kinase on end-on attachments. furthermore, stabilization of chromosomes at the central plate is also due to the action of kinesin-8 motors. in the present model, we simply stabilize attachments if both kinetochores have end-on attached mts stemming from both poles, while we treat as unstable the cases in which only a single kinetochore has end-on attachments or in which the two kinetochores have end-on attached mts all stemming from a single pole. unstable attachments detach with a probability that decreases exponentially with the applied force, where is the force on the mt tip due to the coupling with the kinetochore and is the sensitivity. when the attachment is stable, we assume that the growth/shrinkage velocity of the attached mts is slowed exponentially (see table [ tab : constants ] and ref. ), and that the attachment is, contrary to intuition, stabilized by an applied load. this peculiar behavior, known as a _ catch-bond _, has been revealed experimentally and explained theoretically. the numerical solution is implemented in a custom made c++ code. images and videos are rendered in 3d using povray. simulation and rendering codes are available at https://github.com/complexitybiosystems/chromosome-congression . all parameters used in the model are summarized in table [ tab : constants ]. where experimentally measured parameters are not available, we have used estimated values. we have tested these to ensure that the simulation results are robust against changes in parameter values. table [ tab : constants ] : list of parameter values employed in the simulations. in most of our simulations, the number of mts is fixed. to justify this, we have performed simulations in which mts nucleate from the two spindle poles with a rate . at the beginning of the simulation, we assume that the mitotic spindle is already formed, the nuclear envelope is broken, and 46 chromosomes are randomly distributed in a spherical region enclosing the poles. we then integrate the equations of motion for each chromosome and monitor the number of mts as a function of the nucleation rate. we find that after a transition time (approximately 50 s), which is much shorter than the congression time (fig [ fig : figure1]a), the number of mts fluctuates around a constant value that is linearly dependent on the nucleation rate (fig [ fig : figure1]b).
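the behaviour of the mt number just described can be checked against a minimal birth-death simulation: mts nucleate at a constant rate and each existing mt disappears with a rate given by the inverse of its mean lifetime, which is the kinetic picture formalized below. the python sketch compares one stochastic trajectory with the analytic solution of that kinetic equation; the rate values are placeholders, not the parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
nu, tau = 100.0, 10.0   # nucleation rate (1/s) and mean MT lifetime (s), assumed
t_end = 60.0

def birth_death(nu, tau, t_end):
    """Exact (Gillespie) simulation of the kinetics dN/dt = nu - N / tau."""
    t, n, ts, ns = 0.0, 0, [0.0], [0]
    while t < t_end:
        total_rate = nu + n / tau
        t += rng.exponential(1.0 / total_rate)
        n += 1 if rng.random() < nu / total_rate else -1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)

ts, ns = birth_death(nu, tau, t_end)
analytic = nu * tau * (1.0 - np.exp(-ts / tau))
print("steady state nu * tau      :", nu * tau)
print("simulated N at t_end       :", ns[-1])
print("largest relative deviation :", np.max(np.abs(ns - analytic)) / (nu * tau))
```

after a transient of a few lifetimes the number of mts fluctuates around nu * tau, which is why keeping the number of mts constant during each congression run is a reasonable approximation.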
.( b ) the number of mts is proportional to the rate of nucleation .the numerical results here refer to a single pole .lines are fits with the theory discussed in the text .the curves have been obtained by averaging over independent runs of the simulations .error bars are smaller than the plotted symbols . ]the result shown in fig [ fig : figure1 ] can be understood from a simple kinetic equation for the number of mts where the second term on the right - hand side is the total rate of mt collapse .the rate of collapse per mt , , is the inverse of the mt lifetime , proportional to the mt half - life .the solution of eq .[ eq : mtkinetics ] provides an excellent fit to the data with ( fig [ fig : figure1]a ) .the theory also shows that for long times , , the number of mts approaches .hence the number of mts is essentially constant during the congression process , depending only on the rate of nucleation and collapse , which are controlled by several biochemical factors .based on this result , we ignore the transient and keep constant during each simulation .after nuclear envelope breakdown , there are two possible scenarios for congression . in the first case, all chromosomes already lie between the poles , and have access to stable mts .hence , cenp - e overcomes dynein , moving the chromosome directly towards the center of the cell .the second scenario involves chromosomes not having access to stable mts , because their initial position does not lie between the poles .those chromosomes are first driven by dynein to the nearest pole and remain there until they find a stable mt to which they attach laterally .at this point , they slide towards the central plate using cenp - e motor on the stable mt .we show the evolution of these two scenarios in s1 and s2 videos . in all simulations we ran with the present parameters ( instances per scenario ) all chromosomes congress and bi - orient .next , we switch off motor proteins individually ( dynein , cenp - e or pef ) to show that the model successfully reproduces what happens in cells , where all these motors are essential .the results are summarized in fig [ fig : figure2 ] ( see also s3,s4 and s5 videos ) and show that the suppression of each of the motors leads to incorrect congression or bi - orientation . suppressing kinetochore dynein does not allow peripheral chromosomes to congress , as shown in row 2 of fig [ fig : figure2 ] .deletion of cenp - e traps chromosomes at the poles , as shown in row 3 , and pef knockdown severely reduces the cohesion of the central plate where chromosomes can not bi - orient , as shown in row 4 .these knock - downs have also been studied experimentally , yielding results in line with ours . in refs . principal contributors to pef are knocked down , and it is shown that in cases where there are no peripheral chromosomes , the chromosomes can congress but are not stable at the central plate . furthermore ,other experiments show that chromosomes are also stabilized at the central plate due to the effect of the kinesin-8 kif18a on mt plus ends .these observations fold neatly into our model and yield a possible explanation of the above mentioned slowing down of mt plus ends at kinetochores .it should also be noted that when the effect of kif18a is removed and mt plus ends follow fast dynamics again , the effective pefs in the vicinity of the central plate are reduced , further destabilizing chromosome alignment . in refs . 
, on the other hand, cenp-e is knocked down or suppressed, resulting in chromosomes being trapped at spindle poles. finally, in ref. all three motors are suppressed individually, with exactly the same results as presented here from our simulations. we find that the ratio of the total number of mts in the system to twice the total number of chromosomes (that is, the total number of kinetochores) affects the congression process in a non-trivial manner, as illustrated in fig [ fig : figure3 ] and s6 and s7 videos. in particular, chromosome congression and bi-orientation are influenced by the number of mts in opposite ways: while a large number of mts enhances the chances of bi-orientation, it slows down congression. this is due to the fact that pefs increase with the number of mts, thus acting against kinetochore dynein and possibly hindering the motion of peripheral chromosomes towards the poles. in the wild-type case, kinetochore dynein is usually strong enough to overcome these pefs. overexpression of motors giving rise to pefs can have adverse effects, such as the over-stabilization of kinetochore-mt attachments. on the other hand, stabilizing mts by disrupting various mt-depolymerase chains results in much slower congression and bi-orientation. we show the effect of too strong pefs in our model in fig [ fig : figure4]a, where the distribution of congressed chromosomes is plotted versus time for different mt densities. on the other hand, pefs stabilize congressed chromosomes at the central plate, and in a simple search and capture scenario, like the one implemented in our model, the more mts there are the faster chromosomes become bi-oriented, as indicated in fig [ fig : figure4]b. in fig [ fig : figure4]c, we plot the median of the congression/bi-orientation time distribution, defined as the time for which the probability of congression (black) and bi-orientation (red) is one half. at very low mt densities (blue shaded area), reported on the left-hand side of fig [ fig : figure4]c, not all samples congress within the limit of seconds. at slightly higher mt densities (red shaded area), not all samples bi-orient within the limit of seconds. finally, at very high mt densities, pefs become so strong that they reduce the congression probability. these observations indicate the existence of a _ sweet spot _ for the mt density, suggesting that successful congression and bi-orientation can only happen if the total number of mts in the spindle lies in the range of 7 . fig [ fig : figure4 ] : median congression/bi-orientation time, defined as the time for which the congression/bi-orientation probability is one half. the maximum waiting time for congression is and for bi-orientation . if the mt density is too low, not all samples bi-orient, as indicated by the red shaded area. decreasing the mt density even further severely reduces the congression probability, as indicated by the blue shaded area. on the other hand, increasing the mt density too much also impairs congression since kinetochore dynein will not be strong enough to overcome pefs. these results show that there is a sweet spot for congression/bi-orientation as a function of the number of mts, lying between and mts. all curves have been obtained by averaging over independent runs of the simulations. error bars are smaller than the plotted curves. ]
an experimentally testable prediction of our model is the effect on congression of the overexpression of factors affecting mt depolymerization. the catastrophe/rescue rate ratio determines the mt length distribution during cell division. shorter mts would significantly hamper the search and capture process: chromosomes lying at the extreme periphery would be harder to reach, decreasing the chances for congression. to quantify this effect, we performed simulations for each mt density, increasing the value of the catastrophe rate, as illustrated in fig [ fig : figure5 ] and in s8 video. the corresponding congression probability is reported in fig [ fig : figure6 ]. for low mt densities the effect is very drastic and even partial congression is suppressed. for the _ sweet-spot _ densities, mt depolymerase overexpression has only a small effect, until the catastrophe rate becomes too large and congression disappears. fig [ fig : figure5 ] : large values of , that is, overexpression of mt depolymerases, lead to unsuccessful congression. the nuclear envelope is shown for reference in each of the first panels as a white sphere. the cortex is represented in dark grey. ] fig [ fig : figure6 ] : congression probability, i.e. independent runs of the simulations that have reached congression during a waiting time of . congression is stable over a wide range of catastrophe rates, but breaks down completely at approximately . ] understanding cell division and its possible failures is a key problem that is relevant for many pathological conditions, including cancer. while many biochemical factors controlling several aspects of the division process have been identified, how these factors work together in a coherent fashion is still an open issue. we have introduced a comprehensive three dimensional computational model for chromosome congression in mammalian cells, using stochastic mt dynamics as well as motor-protein interplay. the model incorporates the movement of the peripheral chromosomes to the poles and their escape from there towards the central plate. contrary to previous models that only used a limited number of mts (e.g. a few hundred in ref. ), we are able to simulate up to mts. mcintosh et al. reported already in 1975 that the number of mts in the mitotic spindle of kangaroo-rat kidney (ptk) cells during metaphase is larger than , in good agreement with our predictions. also, to put this number in perspective, we notice that each human chromosome has up to 50 end-on attachment slots per kinetochore, and on average 25 mts attached. since there are 46 chromosomes in human cells, this corresponds to 2300 attached mts on average. the total number of mts in the spindle should be much larger than the number of attached mts and therefore mts appears to be a reasonable number. it is interesting to remark that with this number of mts, congression and bi-orientation of chromosomes is quick enough that the assumption of biased search is not needed.
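the counting argument quoted above can be spelled out explicitly; the attachment fractions in the last loop are purely illustrative assumptions, used only to show why the spindle total must exceed the number of attached mts by a large factor.

```python
# Bookkeeping for the numbers quoted in the discussion.
n_chromosomes = 46              # human cell
kinetochores_per_chrom = 2
attached_per_kinetochore = 25   # average number quoted in the text
slots_per_kinetochore = 50      # maximum number quoted in the text

n_attached = n_chromosomes * kinetochores_per_chrom * attached_per_kinetochore
print("attached MTs on average:", n_attached)   # -> 2300

# If only a small fraction of all spindle MTs is end-on attached at any time
# (illustrative assumption), the total number of MTs is far larger.
for attached_fraction in (0.2, 0.1, 0.05):
    print(f"total MTs if {attached_fraction:.0%} are attached:",
          round(n_attached / attached_fraction))
```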
with our model we show that the total number of mts in the spindle is _ per se _ a crucial controlling factor for successful cell division. when this number is too low or too high, congression and/or bi-orientation fail. this explains apparent paradoxes where the same factors can lead to different pathological conditions when up- or down-regulated. for instance, the centrosomal protein 4.1-associated protein (cpap), belonging to the microcephalin (mcph) family, is known to inhibit mt nucleation. cpap overexpression leads to abnormal cell division, whereas mutations in cpap can cause autosomal recessive primary microcephaly, characterized by a marked reduction in brain size. in the model, we can account for cpap overexpression by inhibiting mt nucleation, while its mutation can be simulated by increasing . the two processes push the number of mts out of its _ sweet spot _ along different directions and therefore explain the different pathological conditions with a single mechanism. a similar reasoning explains the role of the mitotic centromere-associated kinase or kinesin family member 2c (mcak/kif2c), which is localized at mt plus ends and functions as a key regulator of mitotic spindle assembly and dynamics by controlling mt length. higher levels of mcak expression have been found in gastric cancer tissue, in colorectal and other epithelial cancers and in breast cancer. in fact, both depletion and overexpression of mcak lead to cell division errors. from the point of view of our model, we can understand that mcak overexpression increases the rate of mt depolymerization, reducing the length and number of mts to a level at which bi-orientation is not possible. finally, our model explains the recent results linking cin to the overexpression of aurka or the loss of chk2, both enhancing the mt assembly rate. increasing the mt velocity effectively reduces the amount of tubulin units available for mt nucleation, thus decreasing the number of mts and impairing bi-orientation. in conclusion, our model represents a general computational framework to predict the effect of biological factors on cell division, making it a valid tool for _ in silico _ investigation of related pathological conditions. the main strength of our computational approach is that it can help answer questions that are extremely difficult to address experimentally, such as the role of the number of microtubules in driving successful cell division. we thank m. barisic and h. maiato for useful suggestions and for sharing the results of ref. before publication. we thank j. r. mcintosh for pointing out ref. and m. zaiser for critical reading of the manuscript. magidson v, oconnell cb, lonarek j, paul r, mogilner a, khodjakov a. the spatial arrangement of chromosomes during prometaphase facilitates spindle assembly. cell. 2011 aug;146(4):555-67. walczak ce, cai s, khodjakov a. mechanisms of chromosome behaviour during mitosis. nat rev mol cell biol. 2010 feb;11(2):91-102. matos i, pereira aj, lince-faria m, cameron la, salmon ed, maiato h. synchronizing chromosome segregation by flux-dependent force equalization at kinetochores. j cell biol. 2009;186:11-26. boveri t. uber mehrpolige mitosen als mittel zur analyse des zellkerns. stuber; 1903. burrell ra, mcgranahan n, bartek j, swanton c. the causes and consequences of genetic heterogeneity in cancer evolution.
nature .2013 sep;501(7467):33845 .lee ajx , endesfelder d , rowan aj , walther a , birkbak nj , futreal pa , et al .chromosomal instability confers intrinsic multidrug resistance .cancer res .2011 mar;71(5):185870 .bakhoum sf , compton da .chromosomal instability and cancer : a complex relationship with therapeutic potential .j clin invest .2012 apr;122(4):113843 .kirschner m , mitchison tj .beyond self - assembly : from microtubules to morphogenesis . cell .1986;45:329342 .rieder cl , alexander sp .kinetochores are transported poleward along a single astral microtubule during chromosome attachment to the spindle in newt lung cells .j cell biol .1990;110:8195 .li y , yu w , liang y , zhu x. kinetochore dynein generates a poleward pulling force to facilitate congression and full chromosome alignment .. 2007;17:701712 .yang z , tulu us , wadsworth p , rieder cl .kinetochore dynein is required for chromosome motion and congression independent of the spindle checkpoint .curr biol .2007 jun;17(11):97380 .vorozhko vv , emanuele mj , kallio mj , gorbsky ptsgj .multiple mechanisms of chromosome movement in vertebrate cells mediated through the ndc80 complex and dynein / dynactin . chromosoma .2008;117:169179 .kapoor tm , lampson ma , hergert p , cameron l , cimini d , salmon ed , et al .chromosomes can congress to the metaphase plate before biorientation . science. 2006;311:38891 .cai s , oconnell cb , khodjakov a , walczak ce .chromosome congression in the absence of kinetochore fibres. nat cell biol. 2009;11:832838 .barisic m , aguiar p , geley s , maiato h. kinetochore motors drive congression of peripheral polar chromosomes by overcoming random arm - ejection forces .nat cell biol .2014 dec;16(12):124956 .rieder cl , davison ea , jensen lc , cassimeris l , salmon ed .oscillatory movements of monooriented chromosomes and their position relative to the spindle pole result from the ejection properties of the aster and half - spindle .j cell biol .1986 aug;103(2):58191 .stumpff j , wagenbach m , franck a , asbury cl , wordeman l. kif18a and chromokinesins confine centromere movements via microtubule growth suppression and spatial control of kinetochore tension . dev cell .2012 may;22(5):101729 .wandke c , barisic m , sigl r , rauch v , wolf f , amaro ac , et al .human chromokinesins promote chromosome congression and spindle microtubule dynamics during mitosis .j cell biol .2012;198:847863 .cane s , ye aa , luks - morgan sj , maresca tj .elevated polar ejection forces stabilize kinetochore - microtubule attachments .j cell biol .2013;200:203 .holy te , leiber s. dynamic instability of microtubules as an efficient way to search in space . pnas .1995;91:56825685 .wollman r , cytrynbaum en , jones jt , meyer t , scholey jm , mogilne a. efficient chromosome capture requires a bias in the ` search - and - capture ' process during mitotic - spindle assembly .curr biol .2005;15:828832 .paul r , wollman r , silkworth wt , nardi ik , cimini d , mogilner a. computer simulations predict that chromosome movements and rotations accelerate mitotic spindle assembly without compromising accuracy .2009;106:15708 15713 .joglekar ap , hunt aj . a simple , mechanistic model for directional instability during mitotic chromosome movement .biophys j. 2002;83:4258 .civelekoglu - scholey g , sharp dj , mogilner a , scholey jm .model of chromosome motility in drosophila embryos : adaptation of a general mechanism for rapid mitosis .biophys j. 
2006;90:3966 .gardner mk , pearson cg , sprague bl , zarzar tr , bloom k , salmon ed , et al .tension - dependent regulation of microtubule dynamics at kinetochores can explain metaphase congression in yeast .mol biol cell .2005;16:37643775 .chacn jm , gardner mk .analysis and modeling of chromosome congression during mitosis in the chemotherapy drug cisplatin .cell mol bioeng .2013 dec;6(4):406417 .gluni m , maghelli n , krull a , krsti v , ramunno - johnson d , pavin n , et al .kinesin-8 motors improve nuclear centering by promoting microtubule catastrophe .phys rev lett .2015 feb;114(7):078103 .theoretical problems related to the attachment of microtubules to kinetochores .proc natl acad sci u s a. 1985 jul;82(13):44048 .bertalan z , porta caml , maiato h , zapperi s. conformational mechanism for the stability of microtubule - kinetochore attachments .biophys j. 2014;107:289300 .oconnell cb , khodjakov al .cooperative mechanisms of mitotic spindle formation . j cell sci .2007 may;120(pt 10):171722 .mcintosh jr , cande wz , snyder ja .structure and physiology of the mammalian mitotic spindle .soc gen physiol ser .. hung ly , chen hl , chang cw , li br , tang tk .identification of a novel microtubule - destabilizing motif in cpap that binds to tubulin heterodimers and inhibits microtubule assembly .mol biol cell .2004 jun;15(6):2697706 .bakhoum sf , genovese g , compton da .deviant kinetochore microtubule dynamics underlie chromosomal instability .curr biol .2009 dec;19(22):193742 .stout jr , rizk rs , kline sl , walczak ce . deciphering protein function during mitosis in ptk cells using rnai .bmc cell biol .2006;7:26 .domnitz sb , wagenbach m , decarreau j , wordeman l. mcak activity at microtubule tips regulates spindle microtubule length to promote robust kinetochore attachment .j cell biol .2012 apr;197(2):2317 .antonio c , ferby i , wilhelm h , jones m , karsenti e. xkid , a chromokinesin required for chromosome alignment on the metaphase plate . cell .2000;102:425435 .putkey fr , cramer t , morphew mk , silk ad , johnson rs , mcintosh jr , et al .unstable kinetochore - microtubule capture and chromosomal instability following deletion of cenp - e . dev cell .2002;3:351365 .silk ad , zasadil lm , holland aj , vitre b , cleveland dw , weaver ba .chromosome missegregation rate predicts whether aneuploidy will promote or suppress tumors .proc natl acad sci u s a. 2013 oct;110(44):e413441 .sharp dj , rogers gc , scholey jm .microtubule motors in mitosis .2000;407:4147 .mitchison tj , kirschner m. dynamic instability of microtubule growth .1984;312:237242 .akiyoshi b , sarangapani kk , powers af , nelson cr , reichow sl , arellano - santoyo h , et al .tension directly stabilizes reconstituted kinetochore microtubule attachments . nature .2010;468:576579 .nicklas rb .measurements of the force produced by the mitotic spindle in anaphase .j cell biol .1983;97:542548 .marshall wf , marko jf , agard da , sedat jw .chromosome elasticity and mitotic polar ejection force measured in living drosophila embryos by four - dimensional microscopy - based motion analysis .curr biol .2001;11:569578 .marko jf , poirier mg .micromechanics of chromatin and chromosomes .biochem cell biol .2003;81:209220 .kim y , holland aj , lan w , cleveland dw .aurora kinases and protein phosphatase 1 mediate chromosome congression through regulation of cenp - e . 
cell .2010;142:444455 .barisic m , silva e sousa r , tripathy sk , magiera mm , zaytsev av , pereira al , et al .microtubule detyrosination guides chromosomes during mitosis . science .available from : http://www.sciencemag.org/content/348/6236/799.abstract .maresca tj , salmon ed .welcome to a new kind of tension : translating kinetochore mechanics into a wait - anaphase signal .j cell sci .2010;123:825 834 .cimini d , wan x , hirel cb , salmon ed .aurora kinase promotes turnover of kinetochore microtubules to reduce chromosome segregation errors .curr biol .2006;16:17111718 .lampson ma , cheeseman i m .sensing centromere tension : aurora b and the regulation of kinetochore function . trends cell biol .2011;21:133128 .stumpff j , von dassow g , wagenbach m , asbury c , wordeman l. the kinesin-8 motor kif18a suppresses kinetochore movements to control mitotic chromosome alignment . dev cell .2008 feb;14(2):25262 .stumpff j , du y , english ca , maliga z , wagenbach m , asbury cl , et al .a tethering mechanism controls the processivity and kinetochore - microtubule plus - end enrichment of the kinesin-8 kif18a . mol cell .2011 sep;43(5):76475 .mcdonald kl , otoole et , mastronarde dn , mcintosh jr .kinetochore microtubules in ptk cells .j cell biol .1992;118:369383 .salmon ed , saxton wm , leslie rj , karow ml , mcintosh jr .spindle microtubule dynamics in sea urchin embryos : analysis using a fluorescein - labeled tubulin and measurements of fluorescence redistribution after laser photobleaching .j cell biol .1984;99:2157 .brouhard gj , hunt aj .microtubule movements on the arms of mitotic chromosomes : polar ejection forces quantified in vitro .2005;102:1390313908 .kim y , heuser je , waterman cm , cleveland dw .combines a slow , processive motor and a flexible coiled coil to produce an essential motile kinetochore tether .. 2008;181:411419 .mallik r , carter bc , lex sa , king sj , gross sp .cytoplasmic dynein functions as a gear in response to load .2004;427:649 .rusan nm , fagerstrom cj , yvon ac , wadsworth p. cell cycle - dependent changes in microtubule dynamics in living cells expressing green fluorescent protein - alpha tubulin .mol biol cell .2001;12:971 .tanenbaum me , macurek l , van der vaart b , galli m , akhmanova a , medema rh .a complex of kif18b and mcak promotes microtubule depolymerization and is negatively regulated by aurora kinases .curr biol .2011 aug;21(16):135665 .ganguly a , yang h , cabral f. overexpression of mitotic centromere - associated kinesin stimulates microtubule detachment and confers resistance to paclitaxel .mol cancer ther .2011 jun;10(6):92937 .bond j , roberts e , springell k , lizarraga sb , lizarraga s , scott s , et al . a centrosomal mechanism involving cdk5rap2 and cenpj controls brain size .nat genet .2005 apr;37(4):3535 .kohlmaier g , loncarek j , meng x , mcewen bf , mogensen mm , spektor a , et al . overly long centrioles and defective cell division upon excess of the sas-4-related protein cpap .curr biol .2009 jun;19(12):10128 .schmidt ti , kleylein - sohn j , westendorf j , le clech m , lavoie sb , stierhof yd , et al .control of centriole length by cpap and cp110 .curr biol .2009 jun;19(12):100511 .thornton gk , woods cg .primary microcephaly : do all roads lead to rome ?trends genet .2009 nov;25(11):50110 .hunter aw , caplow m , coy dl , hancock wo , diez s , wordeman l , et al .the kinesin - related protein mcak is a microtubule depolymerase that forms an atp - hydrolyzing complex at microtubule ends . 
mol cell .2003 feb;11(2):44557 .zhu c , zhao j , bibikova m , leverson jd , bossy - wetzel e , fan jb , et al .functional analysis of human microtubule - based motor proteins , the kinesins and dyneins , in mitosis / cytokinesis using rna interference .mol biol cell .2005 jul;16(7):318799 .nakamura y , tanaka f , haraguchi n , mimori k , matsumoto t , inoue h , et al .clinicopathological and biological significance of mitotic centromere - associated kinesin overexpression in human gastric cancer .br j cancer .2007 aug;97(4):5439 .gnjatic s , cao y , reichelt u , yekebas ef , nlker c , marx ah , et al .ny - co-58/kif2c is overexpressed in a variety of solid tumors and induces frequent t cell responses in patients with colorectal cancer .int j cancer .2010 jul;127(2):38193 .shimo a , tanikawa c , nishidate t , lin ml , matsuda k , park jh , et al . involvement of kinesin family member 2c / mitotic centromere - associated kinesin overexpression in mammary carcinogenesis .cancer sci .2008 jan;99(1):6270 .ertych n , stolz a , stenzinger a , weichert w , kaulfu s , burfeind p , et al . increased microtubule assembly rates influence chromosomal instability in colorectal cancer cells .nat cell biol .2014 08;16(8):779791 . available from : http://dx.doi.org/10.1038/ncb2994 .* s1 video . congression of scattered chromosomes . * representative example of the congression process in the case in which some of the chromosomes are initially scattered beyond the poles .
|
faithful segregation of genetic material during cell division requires alignment of chromosomes between two spindle poles and attachment of their kinetochores to each of the poles . failure of these complex dynamical processes leads to chromosomal instability ( cin ) , a characteristic feature of several diseases including cancer . while a multitude of biological factors regulating chromosome congression and bi - orientation have been identified , it is still unclear how they are integrated so that coherent chromosome motion emerges from a large collection of random and deterministic processes . here we address this issue by a three dimensional computational model of motor - driven chromosome congression and bi - orientation during mitosis . our model reveals that successful cell division requires control of the total number of microtubules : if this number is too small bi - orientation fails , while if it is too large not all the chromosomes are able to congress . the optimal number of microtubules predicted by our model compares well with early observations in mammalian cell spindles . our results shed new light on the origin of several pathological conditions related to chromosomal instability .
|
one of the most challenging issues of modern cosmology is to describe the positive late time acceleration through a single self-consistent theoretical scheme. indeed, the physical origin of the measured cosmic speed up is not well accounted for on theoretical grounds without invoking the existence of an additional fluid which drives the universe dynamics, eventually dominating over the other species. any viable fluid differs from standard matter by manifesting negative equation of state parameters, capable of counterbalancing the gravitational attraction at late times. thus, since no common matter is expected to behave anti-gravitationally, one refers to such a fluid as dark energy. the simplest candidate for dark energy consists in introducing within the einstein equations a vacuum energy cosmological constant term, namely . the corresponding paradigm, dubbed the model, has been shown to be consistent with almost all experimental constraints, becoming the standard paradigm in cosmology. one of the main advantages of is the remarkably small number of cosmological parameters that it introduces, which suggests that any modifications of einstein's gravity reduce to at small redshift. however, recent measurements of the hubble expansion rate at redshift and an analysis of linear redshift space distortions reside outside the expectations at and confidence level, respectively. due to these facts and to the need of accounting for the ultraviolet modifications of einstein's gravity, extensions of general relativity have been proposed so far. moreover, the standard cosmological model is plagued by two profound shortcomings. first, according to observations, it is not clear why the matter and magnitudes appear to be extremely close to each other, indicating an unexpected coincidence problem. second, cosmological bounds on indicate a value which differs from quantum field calculations by some 123 orders of magnitude, leading to a severe fine-tuning problem. standard cosmology deems that the universe dynamics can be framed assuming that dark energy evolves as a perfect fluid, with a varying equation of state, i.e. , with total pressure and density . so, in a friedmann-robertson-walker (frw) picture, the universe dynamics is depicted through a pressureless matter term, a barotropic evolving dark energy density and a vanishing scalar curvature, i.e. . in lieu of developing a theory which predicts the dark energy fluid, cosmologists often try to reconstruct the universe expansion history by parameterizing the equation of state of dark energy. for example, polynomial fits, data-dependent reconstructions and cosmographic representations are well-established ways to reconstruct . all cosmological reconstructions are based on inferring the properties of dark energy without imposing _ a priori _ a form for the equation of state. in fact, any imposition would cause misleading results, as a consequence of the strong degeneracy between cosmological models. therefore, it turns out that a reconstruction of should be carried out as much as possible in a model-independent manner.
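a concrete example of the kind of parametrization mentioned above is the widely used chevallier-polarski-linder (cpl) form w(z) = w0 + wa z/(1+z); it is not the approach adopted in this work, but it illustrates how an assumed equation of state propagates into the expansion rate. the python sketch below assumes a flat universe, and the parameter values are placeholders.

```python
import numpy as np

def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder parametrization of the dark energy EoS."""
    return w0 + wa * z / (1.0 + z)

def hubble_rate(z, Om=0.3, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for a flat FRW universe.

    The dark energy density follows from d ln(rho_de)/d ln(1+z) = 3 [1 + w(z)],
    which for the CPL form integrates to the closed expression used below.
    """
    rho_de = (1.0 + z)**(3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om) * rho_de)

z = np.linspace(0.0, 2.0, 5)
print(w_cpl(z, w0=-0.9, wa=0.3))
print(hubble_rate(z, Om=0.3, w0=-0.9, wa=0.3))
```

fitting w0 and wa to data is exactly the kind of imposition that the model-independent reconstructions discussed above try to avoid, since the assumed functional form biases the result.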
to this regard ,a well established method is to develop a model independent parametrization by expanding into a truncated taylor series and fixing the corresponding free parameters through current data .however , even though taylor series are widely used to approximate known functions with polynomials around some point , they provide bad convergence results over a large interval , since is expanded around , while data usually span over intervals larger than the convergence radius . a more sophisticated technique of approximation , the pad approximation , aims to approximate functions by means of a ratio between two polynomials .pad approximation is usually best suited to approximate diverging functions and functions over a whole interval , giving a better approximation than the corresponding truncated taylor series . in , pad approximations have been introduced in the context of cosmography , whereas applications have been discussed and extended in , but the authors have focused principally on writing the dark energy equation of state as a pad function . in this work we want to propose a new approach to cosmography , based on approximating the luminosity distance by means of pad functions , instead of taylor polynomials . in this waywe expect to have a better match of the model with cosmic data and to overcome possible divergences of the taylor approach at .indeed , using the pad approximation of the luminosity distances , we also show that one can improve the quality of the fits with respect to the standard re - parametrizations of the luminosity distances by means of auxiliary variables .we also propose how to deal numerically with such approximations and how to get the most viable pad expansions . as a result, we will obtain a refined statistical analysis of the cosmographic parameters .a large part of the work will be devoted to outline the drawbacks and the advantages of this technique and compare it to more standard approaches as taylor series and the use of auxiliary variables .moreover , we also include a discussion about the most adequate pad types among the wide range of possibilities .finally , we obtain a reconstruction of the dark energy equation of state which is only based on the observational values of the luminosity distance , over the full range for the redshift in which data are given . in this way , we demonstrate that pad approximations are actually preferred to fit high redshift cosmic data , thus representing a valid alternative technique to reconstruct the universe expansion history at late times .the paper is structured as follows : in sec .[ model ] we highlight the role of cosmography in the description of the present time dynamics of the universe .in particular , we discuss connections with the cosmographic series and the frw metric . in sec .[ sec : padeandcosmography ] we introduce the pad formalism and we focus on the differences between standard taylor expansions and rational series in the context of cosmography , giving a qualitative indication that a pad approximation could be preferred .we also enumerate some issues related to cosmography in the context of the observable universe . 
for every problem ,we point out possible solutions and we underline how we treat such troubles in our paper , with particular attention to the pad formalism .all experimental results have been portrayed in secs .[ sect : dataset ] and [ parameterestimations ] , in which we present the numerical outcomes derived both from using the pad technique and standard cosmographic approach . in sec .[ applicationspade ] we give an application of the pad recipe , that is , we use the pad technique to estimate the free parameters of some known models . in sec .[ universeeos ] , we discuss the consequences on the equation of state for the universe which can be inferred from our numerical outcomes derived by the use of pad approximants . moreover , in sec .[ consequencespade ] we discuss our numerical outcomes and we interpret the bounds obtained .finally , the last section , sec . [ conclusions ] , is devoted to conclusions and perspectives of our approach .in this section we briefly introduce the role of cosmography and its standard - usage techniques to fix cosmographic constraints on the observable universe .the great advantage of the cosmographic method is that it permits one to bound present time cosmology without having to assume any particular model for the evolution of dark energy with time .the _ cosmographic method_ stands for a coarse grained model independent technique to gather viable limits on the universe expansion history at late times , provided the cosmological principle is valid .the corresponding requirements demanded by cosmography are homogeneity and isotropy with spatial curvature somehow fixed .common assumptions on the cosmological puzzle provide a whole energy budget dominated by , ( or by some sort of dark energy density ) , with cold dark matter in second place and baryons as a small fraction only .spatial curvature in case of time - independent dark energy density is actually constrained to be negligible . however , for evolving dark energy contributions , observations are not so restrictive .more details will be given later , as we treat the degeneracy between scalar curvature and variation of the acceleration .from now on , having fixed the spatial curvature to be zero , all cosmological observables can be expanded around present time . moreover , comparing such expansions to cosmological data allows one to fix bounds on the evolution of each variable under exam .this strategy matches cosmological observations with theoretical expectations . by doing so, one gets numerical outcomes which do not depend on the particular choice of the cosmological model , since only taylor expansions are compared with data. indeed , cosmography relates observations and theoretical predictions , and it is able to alleviate the degeneracy among cosmological models .cosmography is therefore able to distinguish between models that are compatible with cosmographic predictions and models that have to be discarded , since they do not fit the cosmographic limits .hence , according to the cosmological principle , we assume the universe to be described by a friedmann - robertson - walker ( frw ) metric , i.e. where we use the notation . as a first example of cosmographic expansions , we determine the scale factor as a taylor series around present time .we have which recovers signal causality if one assumes . from the above expansion of , one defines [ csdef ] such functions are , by construction , model independent quantities , i.e. 
they do not depend on the form of the dark energy fluid , since they can be directly bounded by observations .they are known in the literature as the hubble rate ( ) , the acceleration parameter ( ) , the jerk parameter ( ) , the snap parameter ( ) and the lerk parameter ( ) .once such functions are fixed at present time , they are referred to as the _ cosmographic series _ ( cs ) .this is the set of coefficients usually derived in cosmography from observations .rewriting in terms of the cs gives where we have normalized the scale factor to . by rewriting eq .( ) as in eq .( ) , one can read out the meaning of each parameter .in fact , each term of the cs displays a remarkable dynamical meaning .in particular , the snap and lerk parameters determine the shape of hubble s flow at higher redshift regimes. the hubble parameter must be positive , in order to allow the universe to expand and finally and fix kinematic properties at lower redshift domains .indeed , the value of at a given time specifies whether the universe is accelerating or decelerating and also provides some hints on the cosmological fluid responsible for the dynamics .let us focus on first .we can distinguish three cases , splitting the physical interval of viability for : 1 . , shows an expanding universe which undergoes a deceleration phase .this is the case of either a matter dominated universe or any pressureless barotropic fluid .observations do not favor at present times , which however appears relevant for early time cosmology , where dark energy did not dominate over matter .2 . , represents an expanding universe which is currently speeding up .this actually represents the case of our universe .the universe is thought to be dominated by some sort of anti - gravitational fluid , as stressed in sec .[ intro ] . in turn, cosmography confirms such characteristics , without postulating any particular form of dark energy evolution .3 . , indicates that all the whole cosmological energy budget is dominated by a de sitter fluid , i.e. a cosmic component with constant energy density which does not evolve as the universe expands .this is the case of inflation at the very early universe .however at present - time this value is ruled - out by observations .besides , the variation of the acceleration provides a way to understand whether the universe passes or not through a deceleration phase . precisely , the variation of acceleration , i.e. , is related to as at present time , we therefore have and since we expect , we get that .thus if , then is linked to the sign of the variation of .we will confirm from observations that it actually lies in the interval .accordingly we can determine three cases : 1 . the universe does not show any departure from the present time accelerated phase .this would indicate that dark energy influences early times dynamics , without any changes throughout the universe evolution . even though this may be a possible scenario ,observations seem to indicate that this does not occur and it is difficult to admit that the acceleration parameter does not change its sign as the universe expands . indicates that the acceleration parameter smoothly tends to a precise value , without changing its behavior as .no theoretical considerations may discard or support this hypothesis , although observations definitively show that a model compatible with zero jerk parameter badly fits current cosmological data . 
3 . implies that the universe acceleration started at a precise time during the evolution. usually, one refers to the corresponding redshift as the _ transition redshift _ , at which dark energy effects actually become significant. as a consequence, indicates the presence of a further cosmological _ resource _ . by a direct measurement of the transition redshift, one would get relevant constraints on the dark energy equation of state. it turns out that the sign of corresponds to a change of slope of the universe dynamics. rephrasing it differently, a positive definitively forecasts that the acceleration parameter should change sign at . a useful trick of cosmography is to re-scale the cs by means of the hubble rate. in other words, it is possible to demonstrate that if one takes into account cosmographic coefficients, only are really independent. from definitions ( [ csdef ] ), one can write [ hpunto ] \begin{aligned} \dot{h} & = -h^2 \left( 1 + q \right) \,,\\ \ddot{h} & = h^3 \left[ j + 3q + 2 \right] \,,\\ \dddot{h} & = h^4 \left[ s - 4j - 3q ( q + 4 ) - 6 \right] \,,\\ \ddddot{h} & = h^5 \left[ l - 5s + 10 ( q + 2 ) j + 30 ( q + 2 ) q + 24 \right] \,, \end{aligned} and we immediately see the correspondence between the derivatives of the hubble parameter and the cs (note in particular the degeneracy, due to the fact that all these expressions are multiplied by ). as a consequence of the above discussion, one can choose a particular set of observable quantities and, expanding it as well as the scale factor, it is possible to infer viable limits on the parameters. to better illustrate this statement, by means of eqs. ( [ hpunto ] ), one can infer the numerical values of the cs using the well-known luminosity distance. in fact, keeping in mind the definition of the cosmological redshift in terms of the cosmic time, that is $1 + z = a ( t_0 ) / a ( t )$, the luminosity distance in flat space can be expressed as $d_l ( z ) = ( 1 + z ) \, d ( z )$, where $d ( z ) = \int_0^z dz ' / h ( z ' )$ is the comoving distance traveled by a photon from redshift to us, at . expanding the luminosity distance in powers of the redshift gives the series used throughout this work; the expansions up to order five are reported in the appendix [ appa ]. here, for brevity, we report in eq. ( [ jhgkfh ] ) the expansion up to the second order, $d_l ( z ) \simeq \frac{z}{h_0 } \left[ 1 + \frac{z}{2 } \left( 1 - q_0 \right) \right]$. it is worth noticing that eq. ( [ dltaylor ] ) is general and applies to any cosmological model, provided it is based on a flat frw metric. thus, by directly fitting the cosmological data for , one gets physical bounds on , , and _ for any cosmological model _ (see ). the above description, based on common taylor expansions, represents only one of the possible approximations that one may use for the luminosity distance. it may be argued that such an approximation does not provide adequate convergence for high redshift data. thus, our aim is to propose possible extensions of the standard taylor treatment, i.e. pad approximants, that could accurately resolve the issues of standard cosmography. in the next section we will present a different approximation for the luminosity distance, given by rational pad functions instead of taylor polynomials; we will analyze the relationship with the usual taylor expansion and argue that the rational approximation may be preferred. later, in secs. [ sect : dataset ] and [ parameterestimations ], we will also perform the numerical comparison with observational data, in order to show that one can get improved results from this novel approach.
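the chain from the cosmographic series to the fitted luminosity distance can be reproduced symbolically. the python sketch below uses the standard kinematic expansion of h(z) in terms of q0 and j0 (consistent with eqs. ( [ hpunto ] )), derives the low-order taylor coefficients of the luminosity distance for a flat frw metric, and then rearranges the same information into the lowest-order rational (pad-type) form; the pad orders actually used later in the paper are higher, so this is only a minimal illustration of the idea developed in the next section.

```python
import sympy as sp

z, q0, j0 = sp.symbols('z q0 j0')
H0 = sp.symbols('H0', positive=True)

# Kinematic expansion of the Hubble rate (standard result, c = 1 units):
# H(z) = H0 [ 1 + (1 + q0) z + (j0 - q0**2) z**2 / 2 + O(z**3) ]
Hz = H0 * (1 + (1 + q0) * z + sp.Rational(1, 2) * (j0 - q0**2) * z**2)

# Flat FRW: d_L(z) = (1 + z) * integral_0^z dz' / H(z').
# The antiderivative of the truncated series vanishes at z = 0,
# so the indefinite integral below equals the definite one.
integrand = sp.series(1 / Hz, z, 0, 3).removeO()
dL = sp.expand((1 + z) * sp.integrate(integrand, z))
c1, c2 = dL.coeff(z, 1), dL.coeff(z, 2)
print("d_L =", sp.simplify(c1), "* z +", sp.simplify(c2), "* z**2 + ...")

# Lowest-order rational rearrangement of the same Taylor information:
# c1*z / (1 - (c2/c1)*z) reproduces the series up to O(z**2) but grows more
# gently than the truncated polynomial when z is of order unity.
dL_pade = c1 * z / (1 - sp.simplify(c2 / c1) * z)
print("Pade-type form:", sp.simplify(dL_pade))
print("difference up to z**2:",
      sp.simplify(sp.series(dL_pade - dL, z, 0, 3).removeO()))
```

the first print reproduces the second-order coefficient of eq. ( [ jhgkfh ] ), and the last print returns zero, confirming that the rational form encodes the same low-redshift information; the general construction at arbitrary orders is defined next.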
to do so ,let us define the pad approximant of a generic function , which is given by the rational function with degree ( numerator ) and ( denominator ) that agrees with and its derivatives at to the highest possible order , i.e. such that .pad approximants for given and are unique up to an overall multiplicative constant . as a consequence , the first constant in the denominatoris usually set to one , in order to face this scaling freedom .hereafter , we follow this standard notation and indicate as the pad approximant of degree at the numerator and in the denominator . as we see , in cosmology one may use direct data through the distance modulus of different astronomical objects , such as e.g. supernovae . in the usual applications of cosmography , the luminosity distance , which enters , is assumed to be a ( truncated ) taylor series with respect to around present time .a problem with such a procedure occurs when one uses data out of the interval .in fact , due to the divergence at high redshifts of the taylor polynomials , this can possibly give non - accurate numerical results .consequently , data taken over are quite unlikely to accurately fit taylor series .pad approximants can resolve this issue .in fact let us consider the general situation when one has to reconstruct a function , supposing to know the values of such a function , taken in the two limits where the independent variable is very small and very large respectively . hence , let us consider two different approximate expansions of .the first for small values of ( around ) , the second for large values of ( around ) .the two approximations can be written as [ sus ] in this way , provided we construct a function that behaves as when and as as , we are sure that in both limits such a function remains finite ( when and respectively ). given such a property , the most natural function able to interpolate our data between those two limits is naturally given by a _ rational function _ of .pad approximants are therefore adequate candidates to carry on this technique . in the next subsectionwe describe some problems associated to cosmography and to the pad formalism .later , we also propose feasible solutions that we will adopt throughout this work .we introduce this subsection to give a general discussion about several drawbacks plaguing the standard cosmographic approach . for every single problem, we describe the techniques of solutions in the framework of pad approximations , showing how we treat pad approximants in order to improve the cosmographic analysis .degeneracy between coefficients : : each cosmographic coefficient may be related to , as previously shown .this somehow provides that the whole list of independent parameters is really limited to .however , one can think of measuring through cosmography in any case , assuming to be a cosmographic coefficient , without loss of generality .the problem of degeneracy unfortunately leads to the impossibility of estimating alone by using measurements of the distance modulus , in the case of supernova observations , as we will see later . from eq .( [ dltaylor ] ) it follows that can be factorized into two pieces : and .since , therefore it depends only on , thus becoming an additive constant in which can not be estimated , its only effect is to act as a lever to the logarithm of ) .+ in other words , degenerates with the rest of the parameters . to alleviate such problem , we here make use of two different data sets , together with supernovae , i.e. 
the hubble measurements and the hubble telescope measure . in this way we employ direct measures of , thus reducing the errors associated to the degeneracy between cosmographic parameters .degeneracy with scalar curvature : : spatial curvature of the frw model enters the luminosity distance , since the metric directly depends on it .thus , geodesics of photons correspondingly change due to its value .therefore , any expansion of depends on as well , degenerating the values of the cs with respect to .the jerk parameter is deeply influenced by the value of the scalar curvature and degenerates with it . in our work, we overcome this problem through geometrical bounds on , determined by early time observations .according to recent measurements , the universe is considered to be spatially flat and any possible small deviations will not influence the simple case .this is the case we hereafter adopt , except for the last part in which we extensively investigate the role of in the framework of the model .dependence on the cosmological priors : : the choice of the cosmological priors may influence the numerical outcomes derived from our analyses .this turns out to be dangerous in order to determine the signs of the cosmographic coefficients . however , to alleviate this problem we may easily enlarge all the cosmological priors , showing that within convergence ranges the cs are fairly well constrained .the corresponding problem would indicate possible departures from convergence limits , if ranges are outside the theoretical expectations .hence , we found a compromise for each cosmological interval , and we report the whole list of numerical priors in table [ tab : priors ] .systematics due to truncated series : : slower convergence in the best fit algorithm may be induced by choosing truncated series at a precise order , while systematics in measurements occur , on the contrary , if series are expanded up to a certain order . in other words , introducing additional terms would decrease convergence accuracy , although lower orders may badly influence the analysis itself . to alleviate this problem, we will constrain the parameters through different orders of broadening samples . in this way, different orders will be analyzed and we will show no significative departures from our truncated series order .dependence on the friedmann equations : : dark energy is thought to be responsible of the present time acceleration . however , cosmography is able to describe the current universe speeding up without the need of postulating a precise dark energy fluid _ a priori_.this statement is clearly true only if a really barotropic fluid is responsible for the dark energy effects .+ in case there is no significative deviations from a constant equation of state for pressureless matter and dark energy is provided by some modification of gravitation , cosmography should be adjusted consequently .+ this leads to the implicit choice of assuming general relativity as the specific paradigm to get constraints on the cosmographic observables .one may therefore inquire to what extent cosmography is really independent of the friedmann equations . rephrasing it differently , to reveal the correct cosmological model we do not fix further assumptions , e.g. 
geometrical constraints , lorentz invariance violations , and so forth , since we circumscribe our analysis to general relativity only .any possible deviations from the standard approach would need additional theoretical bounds and the corresponding cs should be adjusted accordingly . however , this problem does not occur in this work and we can impose limits without the need of particular assumptions at the beginning of every analysis .convergence : : the convergence problem probably represents the most spinous issue of cosmography .as we have previously stated , the problem of truncated series is intimately intertwined to the order chosen for determining the particular taylor expansion under exam .unfortunately , almost all cosmological data sets exceed the bound , which represents the value around which one expands into a series . in principle, all taylor series are expected to diverge when , as a consequence of the fact that they are polynomials .thus , finite truncations get problems to adapt to data taken at , leading to possible misleading outcomes . for example , this often provides additional systematic errors because it is probable that the increase of bad convergence may affect numerical results . here , we improve accuracy by adopting the union 2.1 supernovae data set and the two additional surveys based on measurements of , i.e. direct hubble measures and hubble space telescope measurements . combining these data togethernaturally eases the issue of systematics , whereas to overcome finite truncation problems we manage to develop the so called pad approximation for different orders . by construction , since pad approximants represent a powerful technique to approximate functions by means of ratios of polynomials , one easily alleviates convergence problems for .as such , we expect pad approximants can better approximate the luminosity distance with respect to standard taylor treatments , especially when high redshift data sets are employed in the analysis . + on the other hand , in order to overcome the problem of divergence , precision cosmology employs the use of several re - parametrizations of the redshift , in terms of _ auxiliary variables _ ( ) , which enlarge the convergence radius of the taylor expansion to a sphere of radius .rephrasing it differently , supposing that data lie within , any auxiliary variable restricts the interval in a more stringent ( non - divergent ) range .a prototype of such an approach is for example given by ( see e.g. ) , whose limits in the past universe ( i.e. ) read ] ) read ] .besides , we can also compute different taylor and pad approximations for this function , and graph all the results , to show that the approximation is generally improved with the use of rational functions . in fig .[ lcdmdl ] , we present the plots of the exact luminosity distance , compared with its approximations obtained using a taylor polynomial and pad functions , for different orders of approximation . in particular , the taylor polynomial of degree three is plotted together with the pad approximants of degree , and , the polynomial of fourth degree together with the pad approximants , and and finally the fifth order taylor polynomial is compared with the pad functions , , and .we remind that e.g. 
the taylor polynomials of third degree and the pad approximants and have the same number of free parameters and they agree by definition up to the third order of derivatives at present time .the same holds to higher orders of both taylor and pad approximations .therefore , the situation described by the taylor and pad approximants can also be seen as having two different models which give approximately the same values for the cs parameters , albeit providing different evolutions over the whole interval considered .as one can immediately notice from all plots in fig .[ lcdmdl ] , taylor approximations ( in blue in the figures ) are really accurate until stays small , whereas they rapidly diverge from the exact curve ( in red ) as . on the contrary , we can see from the first plot that the rational approximant keeps very close to the exact function over the complete interval analyzed .moreover , as we see in the second and third plots , the situation is the same as we increase the order of the approximants .in fact , the pad functions and fairly approximate the exact luminosity distance over all the interval considered .in particular , we remark also that the correctness of the approximation not necessarily increases by increasing the order of the approximants ( as expected , since all the possible pad functions have completely different behaviors , depending on the degrees of numerator and denominator ) and that , and seem to be the best approximations , within the ones considered , giving excellent results .[ ht ] as a good check for our conclusions , we repeat such considerations by using a different model , i.e. the model , probably representing the first step beyond the model . * the model : * the hubble parameter resulting from the model reads where and is a free parameter of the model that lies in the interval .the exact integral of involves here a hypergeometric function .we plot it over the interval of interest , which is again $ ] .besides , we can also compute different taylor and pad approximations for this function , plotting all the results , to show that the approximation is generally improved with the use of rational functions as well as in the case .moreover , all the comments presented for the model also apply for the case , showing that the pad approximants give a better description of the exact luminosity distance over the full interval considered , as one can see in fig .[ wcdmdl ] .[ ht ] exactly as in the case , the best approximations are given by , and . to conclude , figs .[ lcdmdl ] and [ wcdmdl ] clearly show that , provided we are given data over a large interval of values for the cosmological redshift , it would be better to fit the observed luminosity distances with a rational function , in order to get a more realistic function that fits such data over the whole interval . by the same reasoning , the use of pad approximants seems to be also more convenient in order to infer the evolution of from knowledge of the cs . in particular, it seems that the pad approximants , and give the best approximations , which strongly suggests that the order of the numerator and that of the denominator for these models should be very close to each other , with the former possibly being greater than the latter .given this fact , in the next section we will give a quantitative analysis of different pad approximations for the luminosity distance , by comparing them with the astronomical data . 
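Before moving on, the qualitative comparison shown in figs. [lcdmdl] and [wcdmdl] can be reproduced with a few lines of code. The sketch below is a minimal illustration and not the code used for the figures: it takes flat ΛCDM with a placeholder Ωm = 0.3, expresses d_L in units of c/H0, builds its Taylor coefficients symbolically, converts them into a (3,2) Padé approximant (one of the orders considered in the text) with scipy, and compares exact, Taylor and Padé values up to z = 1.4.

```python
import numpy as np
import sympy as sp
from scipy.interpolate import pade
from scipy.integrate import quad

Om = 0.3                      # placeholder matter density for the fiducial flat LCDM model
z = sp.symbols('z')
E = sp.sqrt(Om * (1 + z)**3 + 1 - Om)

# Taylor coefficients of d_L(z) = (1+z) * int_0^z dz'/E(z')  (units of c/H0), up to z^5
dC_series = sp.integrate(sp.series(1 / E, z, 0, 5).removeO(), z)
dL_series = sp.expand((1 + z) * dC_series)
coeffs = [float(dL_series.coeff(z, k)) for k in range(6)]

# (3/2) Pade approximant built from the same Taylor coefficients
p, q = pade(coeffs, 2)        # numerator of degree 3, denominator of degree 2

E_num = sp.lambdify(z, E, 'numpy')
dL_exact = lambda zz: (1 + zz) * quad(lambda x: 1.0 / E_num(x), 0.0, zz)[0]
dL_taylor = np.poly1d(coeffs[::-1])

for zz in (0.5, 1.0, 1.4):
    print(zz, dL_exact(zz), dL_taylor(zz), p(zz) / q(zz))
```

In this example the rational approximant tracks the exact curve noticeably better than the truncated polynomial as z grows, which is precisely the behavior discussed above.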
in this waywe get the best values for the cs parameters by a direct fit using different forms .as we will see , this novel approach can give better bounds on the parameters , and takes better account of more distant objects , as the pad approximation over a large interval is more reliable than taylor s technique .the above considerations suggest some theoretical conclusions to build up viable pad rational functions . here, we formalize a possible recipe to determine which pad rational functions are favored with respect to others .first , the pad function should smoothly evolve in the redshift range chosen for the particular cosmographic analysis .naively , this suggests that any possible pad approximant should not have singularities in the observable redshift intervals .moreover , any pad approximant for must be positive definite and can not show negative regions , otherwise the definition of magnitude would not hold at all . finally , we expect that the degree of the numerator and that of the denominator should be close , with the former a little greater than the latter . keeping this in mind , we are ready to perform our experimental analyses .to do so , we consider some pad expansions as reported in the following sections .in this section we present the main aspects of our experimental analysis .we illustrate how we directly fit general expressions of , in terms of different types of approximations , i.e. taylor ( standard cosmographic approach ) , auxiliary variables and pad expansions ( our novel cosmographic technique ) .in general all pad approximations , due to their rational forms , may show spurious singularities for certain values of the redshift lying in the interval of data .in other words , the need of constructing precise pad approximations which are not plagued by divergences due to poles , is actually one of the tasks of our analysis .in particular , a simple manner to completely avoid such a problem consists in the choice of suitable priors for the free parameters , built up _ad hoc _ , shifting any possible poles to future time cosmological evolution .we show that data are confined inside intervals of the form , whereas possible divergences of pad functions are limited to future times , i.e. , and hence do not influence our experimental analysis. moreover , the cosmological priors adopted here are perfectly compatible with the ones proposed in several previous papers and do not influence the numerical outcomes .this shows that the pad method does not reduce the accuracy in fitting procedures and it is a good candidate to improve standard methods of cosmographic analyses .thus , let us investigate the improvements of the pad treatments with respect to standard techniques . to do so, we denote the cosmographic parameters by a suitable vector , whose dimension changes depending on how many coefficients we are going to analyze in a single experimental test .estimations of the cosmographic parameters have been performed through bayesian techniques and best fits have been obtained by maximizing the likelihood function , defined as where is the common _ ( pseudo ) -squared function _ , whose form is explicitly determined for each data set employed . maximizing the likelihood function leads to minimizing the pseudo--squared function andit can be done by means of a direct comparison with each cosmological data set . 
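A schematic version of this fitting strategy is sketched below. It uses synthetic, uncorrelated H(z)-like points as stand-ins for the real surveys, fits a second-order cosmographic expansion of H(z) rather than the Padé expressions employed later, and replaces the full Bayesian machinery with a toy random-walk Metropolis sampler; its only purpose is to show how maximizing the likelihood amounts to minimizing the χ² and how posterior samples of the cosmographic vector are drawn. All numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def H_model(z, theta):
    """Second-order cosmographic expansion: H(z) = H0 [1 + (1+q0) z + (j0 - q0^2) z^2 / 2]."""
    H0, q0, j0 = theta
    return H0 * (1.0 + (1.0 + q0) * z + 0.5 * (j0 - q0**2) * z**2)

# Synthetic, uncorrelated H(z)-like points standing in for the real data sets
z_dat = np.linspace(0.1, 1.5, 15)
sigma = 8.0 * np.ones_like(z_dat)                       # km/s/Mpc, placeholder errors
H_dat = H_model(z_dat, (70.0, -0.55, 1.0)) + sigma * rng.normal(size=z_dat.size)

def chi2(theta):
    return np.sum(((H_dat - H_model(z_dat, theta)) / sigma) ** 2)

def log_like(theta):
    return -0.5 * chi2(theta)        # maximizing the likelihood <=> minimizing the chi-squared

# Toy random-walk Metropolis sampler (the analysis of this paper uses the CosmoMC machinery)
theta = np.array([70.0, -0.5, 1.0])
step = np.array([1.5, 0.15, 0.5])
logL = log_like(theta)
chain = []
for _ in range(30000):
    prop = theta + step * rng.normal(size=3)
    logL_prop = log_like(prop)
    if np.log(rng.uniform()) < logL_prop - logL:         # Metropolis acceptance rule
        theta, logL = prop, logL_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                           # discard burn-in
for name, col in zip(('H0', 'q0', 'j0'), chain.T):
    print('%s = %.2f +/- %.2f' % (name, col.mean(), col.std()))
```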
for our purposes , we describe three statistical data sets , characterized by different maximum order of parameters , providing a hierarchy among parameters .this procedure leads to a broadening of the sampled distributions if the whole set of parameters is wider , i.e. if the dimension of is higher . as a consequence, the numerical outcomes may show deeper errors , which may be healed by means of the above cited priors .we make use of the supernova union 2.1 compilation from the _ supernovae cosmology project _ , i.e. free available data of the most recent and complete supernova survey .further , we employ a gaussian prior on the present - time hubble parameter , i.e. implied by the _ hubble space telescope _( hst ) measurements , and we also consider the almost model independent baryonic acoustic oscillation , _ bao ratio _ , as proposed in .in addition , we use relevant measurements of the hubble parameter at 26 different redshifts spanning from to , commonly named _ observational hubble data _ ( ohd ) or differential hubble measurements .the cosmological priors that we have employed here are summarized in tab .[ tab : priors ] , in which we report the largest numerical interval developed for any single variable .ccccc cccccc .priors imposed on the free parameters involved in the bayesian analysis for all cosmographic tests here employed .the parameter is the normalized hubble rate , while indicates a generic cosmographic coefficient ( ) .we also report geometrical consequences on scalar curvature and the whole matter density . [ cols=">,^,<,^,<,>,^,<",options="header " , ] + now , we are ready to investigate whether and how much pad approximants are favored for estimating bounds on the late time universe . to better illustrate the procedure, we report below the function for each of the data sets adopted in the numerical analysis .type ia supernovae observations have been extensively analyzed during the last decades for parameter - fitting of cosmological models .they are considered as standard candles , i.e. quantities whose luminosity curves are intimately related to distances . in our work , we employ the most recent survey of supernovae compilations , namely union 2.1 , which extends previous versions union and union 2 data sets . here , systematics is reduced and does not influence numerical outcomes , as for previous surveys .the standard fitting procedure relies on using a gaussian _ -squared _ function , evaluating differences between theoretical and observational distance modulus .nevertheless , the presence of nuisance parameters as the hubble factor and absolute magnitude enforces to marginalize over .straightforward calculations provide where we defined \ , .\nonumber\end{aligned}\ ] ] here represents the covariance matrix of observational data , including statistical and systematic errors as well , and the -th component of the vector given by the ratio of baryonic acoustic oscillation ( bao ) is slightly model dependent , since acoustic scales actually depend on the redshift ( drag time redshift ) , inferred from a first order perturbation theory . however , baryonic acoustic oscillations determined from have been found in terms of a model independent quantity , i.e. 
where the volumetric distance is defined as ^{1/3}\,.\ ] ] the bao ratio -squared function is simply given by we describe below the procedure to compute by means of the pad expansions for .first , one needs to compute the approximation of in terms of by inverting eq .( [ defdl ] ) .it follows ^{-1}\,.\ ] ] afterwards , inserting from eq .( ) and the pad expressions for into eq .( [ defdv ] ) , one obtains the corresponding approximations for , as reported in appendix a. we use 26 independent ohd data from , as reported in the appendix of this work .we use those data , following , in which a novel approach to track the universe expansion history has been proposed , employing massive early type galaxies as cosmic chronometers .the technique allows one to estimate the quantity , sometimes referred to as _ differential time _, which is related to the hubble rate by bearing in mind eq .( [ njvnjfk ] ) , a preliminary list of 19 numerical outcomes has been found , whereas the other 7 data have been determined from the study of galaxy surveys : 2 from , 4 from the wiggle z collaboration and one more from .all hubble estimates are uncorrelated , therefore the - function is simply given by in the appendix [ appa ] , as already stressed , we provide tab . [table : ohd ] , where we summarize the ohd data used in this paper . due to the fact that the different data sets are uncorrelated ,the total - function is given by the best fit to the data is given by those parameters that maximize the likelihood function .we obtain them and their respective confidence intervals by using a metropolis - hasting markov chain monte carlo ( mcmc ) algorithm with the publicly available cosmomc code .we run several independent chains and to probe their convergence we use the gelman - rubin criteria with .we accurately modify the priors for each , within the interval of values reported in tab . [tab : priors ] .the approximations enclosed in the triangle give conclusive results.,width=326 ] [ ht ] for the parameters estimation we will use the cs combined in three sets with different maximum order of parameters : for the parameters set the corresponding pad approximants are and , as shown in fig . [fig : pa ] . of those approximants ,only gives conclusive results .for we obtain conclusive results for and and for for , and . in tables[ table:1dresults1 ] , [ table:1dresults2 ] , and [ table:1dresults3 ] we show the best fits and their -likelihoods for the parameters sets , and respectively .we also show the estimated cs obtained by the standard cosmography ( ) or taylor approach .we worked out the model , which is for our purposes and the redshifts involved sufficiently described by two parameters : and . here is defined as 100 times the ratio of the sound horizon to the angular diameter distance at recombination , while as usual is the abundance of matter density ( both baryonic and dark matter ) , and is the dimensionless hubble parameter , as reported in tab .[ tab : priors ] .the best fits , using the same data sets as above , are given by and . from these values and the formulas which are valid only for flat model, we have also estimated the cosmographic parameters and we report them in tables [ table:1dresults1 ] , [ table:1dresults2 ] , and [ table:1dresults3 ] .c|c|c|c parameter & & & derived + & & & + & & & + & & & + notes .a. is given in km / s / mpc units .[ table:1dresults1 ] c|c|c|c|c parameter & & & & derived + & & & & + & & & & + & & & & + & & & & + notes .a. 
is given in km / s / mpc units .[ table:1dresults2 ] c|c|c|c|c|c parameter & & & & & derived + & & & & & + & & & & & + & & & & & + & & & & & + & & & & & + notes .a. is given in km / s / mpc units .[ table:1dresults3 ] as we can observe from figs .[ fig:1dimmodela ] , [ fig:1dim2modelb ] and [ fig:1dim2modelc ] , the pad approximants give similar results to standard cosmography , with the advantage of the convergence properties discussed in the previous sections .we note that in particular , and draw better samples with narrower dispersion . for this reason ,we plot the contours for those approximants in figs .[ fig : pade21 ] , [ fig : pade31 ] and [ fig : pade32 ] .it is remarkable that the same degeneracies among the parameters are found in all cases , even in other cases which have not been investigated here , see , e.g. .in flat frw metric ( [ frw ] ) , the friedmann equation for the energy density reads , where the sum is over all cosmic species contributing to the whole energy budget .afterwards , recovering bianchi identities , one gets the continuity equation in the form , in the absence of energy transfer among the different components . in order to determine a specific model, one must specify the cosmic fluids and their equations of state .below we test some models by means of our cosmographic results , inferred from the pad formalism .we deal with implicit propagation of errors , since it is convenient to work with expected values and variances of the cosmographic parameters , instead of their probability distribution functions .thus , for example where and is the non - normalized posterior distribution found in section [ parameterestimations ] by the mcmc analysis .the variance is and similar equations hold for the other cosmological parameters . for the pad approximants , and we obtain * pad approximant : * pad approximant : * pad approximant : where the reported error values are the standard deviations of the probability distributions , . using these resultswe can approximate the probability distributions as gaussians centered around their mean values and with variance .now we are ready to investigate the implications of the results obtained using pad on some relevant cosmological models . concerning the flat model , the parameters to estimate are only and .it is easy to demonstrate that , while is actually one of the cs , the matter density can be related to as .we have found by the mcmc algorithm the distribution functions for , obtained using the results for the pad approximant . the expected value for is given by and its variance is , obtaining using the results for the pad approximant we obtain , from , see eqs .( [ lcdmcs ] ) , this procedure leads to a projection from a 4-parameter model ( the pad ) to a 2-parameter model ( late time flat- model ) , providing a broadening on the estimated parameters . in analogy to the case in which , it is easy to show that . 
keeping in mind this expression ,we obtain : the combination of the two results ( [ lcdmom1 ] ) and ( [ lcdmom2 ] ) should give tighter constraints .if the probability distribution functions of and are independent , the distribution function of is simply the product of the two distributions and .if we further assume gaussian distributions , all the statistical information is given by eqs .( [ lcdmom1 ] ) and ( [ lcdmom2 ] ) , obtaining a rough estimate : in this way , we did a 3-parameter to 2-parameter projection .we can not do anything better than eq .( [ mah ] ) for the flat- model due to the fact that is fixed to . as an example , to go beyond the case , one can consider a generic additional cosmic component , relevant at late times .to do so , its equation of state parameter should lie in the interval ; but to avoid large degeneracies with a cosmological constant or with dust fluids we can not be very close to or to .a possible example is offered by the scalar curvature , that we neglected in all our previous numerical outcomes . in such case, one can choose the equation of state , thus the corresponding hubble rate takes the simple form with .the process of measurement indeed differs , since has a different expressions for flat and non - flat cases . in general , the luminosity distance equation is with the comoving distance to redshift given by eq .( [ kjjgjhg ] ) . for sufficiently small , the second equality in equation ( [ lumdistnonflat ] ) is a good approximation . for illustration purposes, we can consider and assume that the estimated values for the parameters are good for small and definitively identify with curvature .the cosmographic parameters up to are in this case from the second equation and the results for the pad approximant ( eq . ( [ p31expectedv ] ) ) , we have . in the case of , using the first and second equations , , and using the second and third equations , . joining these resultswe obtain the chevallier - polarski - linder ( cpl ) dark energy parametrization assumes that the universe is composed by baryons , cold dark matter and dark energy with an evolving equation of state of the form : the background cosmology can not distinguish between dark matter and baryons , thus we write where and . using ( [ cplh ] ) and eqs .( [ hpunto ] ) we obtain \,.\end{aligned}\ ] ] thus , if we use the results , we have to estimate 3 parameters out of other 3 parameters .this is done numerically by propagating errors in eqs .( [ p31expectedv ] ) obtaining one relevant approach to dark energy suggests that the universe is composed by a single fluid , which unifies dark matter and dark energy in a single description .a barotropic perfect fluid with vanishing adiabatic sound speed reproduces the behavior at the background , as proposed in and it is compatible with small perturbations , as shown in .the corresponding equation of state reads while the total equation of state for the total dark fluid in the model reads thus , both models , i.e. 
and the negligible sound speed model , exactly behave in the same way and hence they are degenerate .there are several other options for a unified dark fluid which does not degenerate with .one of these frameworks is represented by the chaplygin gas and its generalizations and constant adiabatic speed of sound models , among others .therefore , we parameterize the dark fluid equation of state by a taylor series : where knowing the value of , in order to estimate up to , we need to use the first cosmographic parameters .if we use pad approximants up to , then we truncate the expansion series at third order .the hubble rate easily reads where and we define . \label{fofz}\end{aligned}\ ] ] the parameters to estimate are given implicitly by which reduce to the flat- values when , , and by considering . from several independent observations we have measurements of the baryon species in the universe . in this sectionwe will take the best - fit from the planck collaboration .we report the estimated values from pad approximants , and : 1 . pad approximant : 2 . pad approximant : 3 .pad approximant : these results should be compared with the best fit values for the model , , , and , obtained by substituting in equation ( [ teoslcdm ] ) the values , , estimated in sec .[ parameterestimations ] for the model , and the value from planck .now , let us consider an arbitrary collection of fluids ( baryons , cold dark matter , dark energy , ... ) with total energy density which comprises all possible species present in the universe .the friedmann equation is thus , as already mentioned .we want to estimate the total equation of state of the universe given by the friedmann equation can be recast as where is given again by eq .( [ fofz ] ) .the cosmographic parameters are equal to eqs ( [ dfq0])-([dfs0 ] ) , by imposing . at late times , the total equation of state of the universe is given by we report the estimated values from pad approximants , and : 1 . pad approximant : 2 . pad approximant : 3 .pad approximant : these results should be compared with the best fit values for the model , , , and , obtained by substituting in equation ( [ teoslcdm2 ] ) the values , , estimated in sec .[ parameterestimations ] for the model .c|c|c|c|c parameter & & & & + & set & set & set & set + & & & & + & & & & + & & & & + & & & & + & & & & + notes .a. is given in km / s / mpc units .[ table:1dotherredshifts ]we showed that the use of pad approximants in cosmography provides a new model - independent technique for reconstructing the luminosity distance and the hubble parameter .this method is particulary valid since standard constructions in cosmography require to develop the luminosity distance as a taylor series and then to match the data with this approximation .in particular , when data are taken over , pad functions work better than truncated taylor series . 
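The projections from cosmographic posteriors to model parameters carried out in this section can be emulated with a short Monte Carlo propagation. The sketch below assumes a Gaussian posterior for q0 with a purely illustrative central value and width (not the fitted numbers quoted above), pushes the samples through the standard flat ΛCDM relation Ωm = 2(1+q0)/3, and shows the inverse-variance combination of two independent Gaussian estimates used in the text; all inputs are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200000

# Illustrative Gaussian posterior for q0 (placeholder mean and width,
# NOT the best-fit values quoted in the tables of this section)
q0 = rng.normal(-0.55, 0.15, N)

# Flat-LCDM projection: q0 = (3/2) Omega_m - 1  =>  Omega_m = 2 (1 + q0) / 3
Om_samples = 2.0 * (1.0 + q0) / 3.0
print('Omega_m = %.3f +/- %.3f (propagated from q0)' % (Om_samples.mean(), Om_samples.std()))

# Two independent Gaussian estimates of the same quantity combine by multiplying
# the two distributions, i.e. by inverse-variance weighting of means and errors
def combine(m1, s1, m2, s2):
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    return (w1 * m1 + w2 * m2) / (w1 + w2), (w1 + w2) ** -0.5

print(combine(0.30, 0.05, 0.26, 0.08))   # placeholder numbers for the two estimates
```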
to make the argument consistent ,we have performed in sec .[ sect : dataset ] a detailed analysis of our models derived from pad approximants with respect to the data taken from different observations .the results have been elaborated in secs .[ sect : dataset ] and [ parameterestimations ] and compared with the standard cosmographic approach and to the values inferred from assuming the model .as expected , not all the pad approximants work properly .for example , we have commented that one has to take special care of the possible spurious divergences that may appear in when approximating with pad , due to the fact that such functions are rational functions .moreover , not all pad models can fit the data in the appropriate way .indeed , we have seen both theoretically and numerically that approximants whose degrees of the numerator and of the denominator are similar seem to be preferred ( see figs .[ lcdmdl]-[fig : pa ] and tabs .[ table:1dresults1]-[table:1dresults3 ] ) .this fact suggests that the increase of the luminosity distance with has to be indeed slower than the one depicted by a taylor approximation .interestingly , our numerical analysis has singled out the pad functions , and , which are the ones that draw the best samples , with narrowest dispersion ( see figs . [ fig:1dim2modelb]-[fig : pade32 ] ) .as one can see from tabs .[ table:1dresults1]-[table:1dresults3 ] , the best fit values and errors for the cs parameters estimated using the approximants , and are in good agreement with the sc results . in particular , the approximant gives smaller relative errors than the corresponding sc analysis , thus suggesting that enlarging the approximation order , the analysis by means of pad are increasingly more appropriate than the standard one .the estimated values of the cs parameters , through the use of the pad approximants , and seem to indicate that the value of is smaller than the one derived by means of the standard ( taylor ) approximation .our results therefore agree with planck results , which show smaller values of than previous estimations . on the contrary, seems to be larger than the result obtained by standard cosmography , while for the situation is less clear ( and indicate a smaller value , while a larger one ) . in any cases ,the sign of is positive at a of confidence level .this fact , according to sec .ii , provides a universe which starts decelerating at a particular redshift , named the transition redshift . from the above considerations , a comparison of our results with the ones obtained previously using pad expansions is essential .in particular , in the authors employed a pad approximant , motivating their choice by noticing that for the requirement could be appropriate to describe the behavior of .their idea was to propose this pad prototype and to use it for higher redshift domains .their heuristic guess has not been compared in that work with respect to other approximants .hence , the need of extending their approach has been achieved in the present paper , where we analyzed thoroughly which extensions work better .moreover , the authors adopted the pad approximant as a first example to describe the convergence radius in terms of the pad formalism , providing discrepancies with respect to standard taylor treatments .their numerical analyses were essentially based on sneia data only , while in our paper we adopted different data sets , i.e. 
baryonic acoustic oscillation , hubble space telescope measurements and differential age data , with improved numerical accuracies developed by using the cosmomc code . as a consequence , we found that the cosmographic results obtained using are significantly different from the ones obtained using .indeed , in the authors employed the approximant only , whereas in our paper we reported in fig .4 the plots of , which definitively provide the differences between and . in general , our results seem to be more accurate and general than the numerical outcomes of . however , we showed a positive jerk parameter , for sets and , which is compatible with their results , albeit not strictly constrained to , as they proposed .numerical outcomes for and lie in similar intervals with respect to . summing up ,although the use of is possible _ a priori _ , we demonstrated here that considering different models one can find parameterizations that work better than and therefore are more natural candidates for further uses in upcoming works on cosmography .further , it is of special interest to look at the comparison of the numerical results obtained for the cases of , cpl and unified dark energy models by inserting the values estimated by fitting the pad functions , and ( see sec .[ applicationspade ] ) . from this analysisit turns out that all of them suggest small departures from , as expected .moreover , is the one that better reproduces the results of .however , we expect from fig . [ lcdmdl ] that and should match even better the predictions .therefore , we consider that it is needed to repeat the analysis with a larger set of data in the region to get more reliable results in this sense .this indication will be object of extensive future works .finally , let us comment the fact that results of tab .[ table:1dotherredshifts ] , compared with the ones in tabs .[ table:1dresults1]-[table:1dresults3 ] , show that the approximants , and give values for the cs parameters that are much closer to the ones estimated by standard cosmography and by the model , than the results provided by the introduction of auxiliary variables in the standard cosmographic approach .this definitively candidates pad approximants to represent a significative alternative to overcome the issues of divergence in cosmography , without the need of any additional auxiliary parametrization .in this work we proposed the use of pad approximations in the context of observational cosmology . in particular , we improved the standard cosmographic approach , which enables to accurately determine refined cosmographic bounds on the dynamical parameters of the models .we stressed the fact that the pad recipe can be used as a relevant tool to extend standard taylor treatments of the luminosity distance .our main goal was to introduce a class of pad approximants able to overcome all the problems plaguing modern cosmography .to do so , we enumerated the basic properties and the most important demands of the pad treatment and we matched theoretical predictions with modern data .in particular , the main advantage of the rational cosmographic method is that pad functions reduce the issue of convergence of the standard cosmographic approach based on truncated taylor series , especially for data taken over a larger redshift range . 
in other words , usual model independent expansions performed at from divergences due to data spanning cosmic intervals with .since pad approximants are rational functions , thence they can naturally overcome this issue .in particular , in our numerical treatment , we have considered all the possible pad approximants of the luminosity distance whose order of the numerator and denominator sum up to three , four and five and compared them with the corresponding taylor polynomials of degree three , four and five in . among these models , it turned out that the pad technique can give results similar to those obtained by standard cosmography and also improve the accuracy .in addition , the pad technique overcomes the need of introducing auxiliary variables , as proposed in standard cosmography to reduce divergences at higher redshifts .to do so , we compared pad results also with taylor re - parameterized expansions .in all the cases considered here , our pad numerical outcomes appear to improve the standard analyses .furthermore , we also considered to overcome the degeneracy problem by employing additional data sets .in particular , we assumed union 2.1 type ia supernovae , baryonic acoustic oscillation , hubble space telescope measurements and direct observations of hubble rates , based on the differential age method . moreover , all cosmographic drawbacks have also been investigated and treated in terms of pad s recipe , proposing for each problem a possible solution to improve the experimental analyses .afterwards , we guaranteed our numerical outcomes to lie in viable intervals and we demonstrated that the refined cosmographic bounds almost confirm the standard cosmological paradigm , thus forecasting the sign of the variation of acceleration , i.e. the jerk parameter .however , although the model passes our experimental tests , we can not conclude that evolving dark energy terms are ruled out .indeed , we compared our pad results with a class of cosmological models , namely the model , the chevallier - polarski - linder parametrization and the unified dark energy models , finding a good agreement with those paradigms .furthermore , we also investigated the consequences of pad s bounds on the universe equation of state . to conclude ,we have proposed and investigated here the use of pad approximants in the field of precision cosmology , with particular regards to cosmography .future perspectives will be clearly devoted to describe the pad approach in other relevant fields .for example , early time cosmology is expected to be more easily described in our framework , as well as additional epochs related to high redshift data . collecting all these results one could in principle definitively reconstruct the universe expansion history , matching late with early time observations and also permitting to understand whether the dark energy fluid evolves or remains a pure cosmological constant at all times .sc and ol are grateful to manuel scinta for important discussions on numerical and theoretical results .ab and ol want to thank hernando quevedo and christine gruber for their support during the phases of this work .aa is thankful to jaime klapp for discussions on the numerical outcomes of this work .sc is supported by infn through iniziative specifiche na12 , og51 .ol is supported by the european pona3 00038f1 km3net ( infn ) project .aa is supported by the project conacyt - edomex-2011-c01 - 165873 .ab wants to thank the a. 
della riccia foundation ( florence , italy ) for support .in this appendix we give the formulas for the approximants of the luminosity distance used to fit the data , for every taylor and pad approximant considered in this work .moreover , we provide also a table of the observational hubble data ( ohd ) used in the analysis .the taylor polynomials around of degree , and for the luminosity distance ( [ defdl ] ) are given by \,,\nonumber\\ \nonumber\\ t4&= & \frac{-z}{24 ( h'_{0})^4 } \big[6 z^3 ( h'_{0})^3 - 2 h_{0 } z^2 h'_{0 } \left(3 z h''_{0}+4 ( z+1 ) h'_{0}\right ) + h_{0}^2 \left(h^{(4)}_{0 } z^3 + 4 ( z+1 ) z ( z h^{(3)}_{0}+3 h''_{0})\right)-24 h_{0}^3 ( z+1)\big]\,,\nonumber\\ \nonumber\\ t5&= & \frac{-z}{120 h_{0}^5 } \big\{-24 z^4 ( h'_{0})^4 + 6 h_{0 } z^3 ( h'_{0})^2 \left(6 z h''_{0}+5 ( z+1 ) h'_{0}\right)-2 h_{0}^2 z^2 \big[3 z^2 ( h''_{0})^2 + 20 ( z+1 ) ( h'_{0})^2\nonumber\\ & + & z h'_{0 } \left(4 h^{(3)}_{0 } z+15 ( z+1 ) h''_{0}\right)\big]+h_{0}^3 \left[h^{(4)}_{0 } z^4 + 5 ( z+1 ) z \left(12 h'_{0}+z \left(h^{(3)}_{0 } z+4 h''_{0}\right)\right)\right]-120 h_{0}^4 ( z+1)\big\}\,.\nonumber\end{aligned}\ ] ] therefore , using eqs .( [ hpunto ] ) and ( [ cosmoz ] ) , one can rewrite the taylor approximations for the luminosity distance in terms of the cs parameters , that are \,,\nonumber\\ \nonumber\\ t4&=&\frac{z}{24 h_{0 } } \left[z^3 ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)-4 z^2 ( j - q ( 3 q+1)+1)-12 ( q-1 ) z+24\right]\,,\nonumber\\ \nonumber\\ t5&=&\frac{z}{120 h_{0 } } \big[z^4 \left(10 j^2-j ( 5 q ( 21 q+22)+27)-l+q ( q ( q ( 105 q+149)+75)-15 s+6)-11 s-6\right)\nonumber\\ & + & 5 z^3 ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)-20 z^2 ( j - q ( 3 q+1)+1)-60 ( q-1 ) z+120\big]\,,\nonumber\end{aligned}\]]where all the cs parameters are assumed to be evaluated at . 
at the same time, we can write all the pad approximants used in this work for the luminosity distance , which read \big\}\nonumber\\ & \,&\big\{6 h_{0}^2 \left(-6 z ( h'_{0})^3 + 2 h_{0 } h'_{0 } \left(3 z h''_{0}+4 ( z-1 ) h'_{0}\right)+h_{0}^2 \left(-h^{(3)}_{0 } z-4 ( z-1 ) h''_{0}+12 h'_{0}\right)\right)\big\}^{-1}\,,\nonumber\\ \nonumber\\ p_{14}&= & { 720 h_{0}^3 z } \big\{-19 z^4 ( h'_{0})^4 + 2 h_{0 } z^3 ( h'_{0})^2 \left(23 z h''_{0}-15 ( z-1 ) h'_{0}\right)-2 h_{0}^2 z^2 \big[8 z^2 ( h''_{0})^2 + 30 ( ( z-1 ) z+1 ) ( h'_{0})^2\nonumber\\ & + & 3 z h'_{0 } \left(3 h^{3}_{0 } z-10 ( z-1 ) h''_{0}\right)\big]-6 h_{0}^3 z \left(-h^{4}_{0 } z^3 + 60 ( z-1 ) \left(z^2 + 1\right ) h'_{0}+5 z \left(h^{3}_{0 } ( z-1 ) z-4 ( ( z-1 ) z+1 ) h''_{0}\right)\right)\nonumber\\ & + & 720 h_{0}^4 \left((z-1 ) z \left(z^2 + 1\right)+1\right)\big\}^{-1}\,,\nonumber\\ \nonumber\\ p_{41}&= & \big\{z \big[-12 z^3 ( h'_{0})^6 + 24 h_{0 } z^2 ( h'_{0})^4 \left(z h''_{0}+2 ( z+1 ) h'_{0}\right)+24 h_{0}^5 ( z+1 ) \left(h^{4}_{0 } z+5 h^{3}_{0 } ( z-1)-20 h''_{0}\right)\nonumber\\ & -&4 h_{0}^2 z ( h'_{0})^2 \left(3 z^2 ( h''_{0})^2 + 2 ( z+1 ) ( 5 z+27 ) ( h'_{0})^2+z h'_{0 } \left(h^{3}_{0 } z+18 ( z+1 ) h''_{0}\right)\right)\nonumber\\ & + & h_{0}^4 \big[960 ( z+1 ) ( h'_{0})^2+z \left(5 ( h^{(3)}_{0})^2 z^2 + 16 ( z+1 ) ( 5 z-9 ) ( h''_{0})^2 + 4 z \left(5 h^{3}_{0 } ( z+1)-h^{4}_{0 } z\right ) h''_{0}\right)\nonumber\\ & -&12 ( z+1 ) h'_{0 } \left(20 ( 2 z-3 ) h''_{0}+z \left(h^{4}_{0 } z+h^{3}_{0 } ( 5 z+11)\right)\right)\big]+4 h_{0}^3 \big(6 z^3 ( h''_{0})^3 + 60 ( z-3 ) ( z+1 ) ( h'_{0})^3\nonumber\\ & -&z^2 h'_{0 } h''_{0 } \left(7 h^{3}_{0 } z+12 ( z+1 ) h'_{0}\right)+2 ( h'_{0})^2 \left(h^{4}_{0 } z^3+(z+1 ) z \left(7 h''_{0 } z+(5 z+63 ) h''_{0}\right)\right)\big)\big]\big\}\nonumber\\ & \ , & \big\{24 h_{0}^3 \big[-24 z ( h'_{0})^4 + 6 h_{0 } ( h'_{0})^2 \left(6 z h''_{0}+5 ( z-1 ) h'_{0}\right)+h_{0}^3 \left(h^{3}_{0 } z+5 h^{3}_{0 } ( z-1)-20 h''_{0}\right)\nonumber\\ & + & h_{0}^2 \left(-6 z ( h''_{0})^2 + 40 ( h'_{0})^2 + 2 h'_{0 } \left(-4 h^{3}_{0 } z-15 ( z-1 ) h''_{0}\right)\right)\big]\big\}^{-1}\,,\nonumber\\ \nonumber\\ p_{32}&= & \big\{z \big[2 z^2 ( h'_{0})^4 - 2 h_{0 } z ( h'_{0})^2 \left(z h''_{0}+6 ( z+1 ) h'_{0}\right)\nonumber\\ & + & h_{0}^2 \left(-4 z^2 ( h''_{0})^2 + 12 ( z-4 ) ( z+1 ) ( h'_{0})^2 + 3 z h'_{0 } \left(h^{3}_{0 } z+8 ( z+1 ) h''_{0}\right)\right)\nonumber\\ & -&6 h_{0}^3 ( z+1 ) \left(h^{3}_{0 } z+4 ( z-1 ) h''_{0}-12 h'_{0}\right)\big]\big\ } \big\{6 h_{0}^2 \big[-6 z ( h'_{0})^3 + 2 h_{0 } h'_{0 } \left(3 z h''_{0}+4 ( z-1 ) h'_{0}\right)\nonumber\\ & + & h_{0}^2 \left(-h^{3}_{0 } z-4 ( z-1 ) h''_{0}+12 h'_{0}\right)\big]\big\}^{-1}\,,\nonumber\\ \nonumber\\ p_{23}&= & \big\{6 z \left(-z ( h'_{0})^3 - 2 h_{0 } h'_{0 } \left((z+1 ) h'_{0}-z h''_{0}\right)+h_{0}^2 \left(-h^{3}_{0 } z+4 ( z+1 ) h''_{0}-12 ( z+1 ) h'_{0}\right)+24 h_{0}^3 ( z+1)\right)\big\}\nonumber\\ & \,&\big\{-2 z^2 ( h'_{0})^4 + 2 h_{0 } z ( h'_{0})^2 \left(z h''_{0}+6 ( z-1 ) h'_{0}\right)+6 h_{0}^3 \left(h^{3}_{0 } ( z-1 ) z+4 \left(z^2 + 1\right ) h''_{0}+12 ( z-1 ) h'_{0}\right)\nonumber\\ & -&h_{0}^2 \left(-4 z^2 ( h''_{0})^2 + 12 ( z ( z+3)+1 ) ( h'_{0})^2 + 3 z h'_{0 } \left(h^{3}_{0 } z+8 ( z-1 ) h''_{0}\right)\right)+144 h_{0}^4\big\}^{-1}h^{4}_{0}\,.\nonumber\end{aligned}\ ] ] + + again , using eqs .( [ hpunto ] ) and ( [ cosmoz ] ) , one can rewrite the pad approximants for the luminosity distance in terms of the cs parameters , that are \big\}^{-1}\nonumber\ , , \\ 
\nonumber\\ p_{31}&= & \big\{z \big[z^2 \left(-4 j^2+j ( q ( 23 - 6 q)+7)+q \left(q \left(9 q^2 - 30 q-13\right)-3 s-4\right)+3 s+2\right)+6 z ( j ( 8 q+7)\nonumber\\ & -&q ( q ( 9 q+17)+6)+s+4)+24 ( j - q ( 3 q+1)+1)\big]\big\}\big\{6 h_{0 } ( z ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)\nonumber\\ & + & 4 ( j - q ( 3 q+1)+1))\big\}^{-1}\nonumber\ , , \\\nonumber\\ p_{14}&= & -720 z \big\{h_{0 } \big[z^4 \left(40 j^2 - 2 j ( 5 q ( 30 q+59)+221)-6 l+q ( q ( 3 q ( 75 q+188)+610)-60 s+646)-96 s-251\right)\nonumber\\ & + & 30 z^3 ( j ( 6 q+9)-q ( 6 q ( q+2)+19)+s+9)+60 z^2 ( -2 j+q ( 3 q+8)-5)-360 ( q-1 ) z-720\big]\big\}^{-1}\nonumber\ , , \\\nonumber\\ p_{41}&= & \big\{z \big[4 z^2 \big(5 j^2 ( 4 q+11)+j ( q ( 5 q ( 18 q-35)-234)+5 s-46)+3 l ( q-1)+q \big(2 q \left(q \left(-45 q^2 + 69 q+121\right)+15 s+61\right)\nonumber\\ & -&17 s+16\big)-4 ( 7 s+2)\big)+12 z \left(20 j^2-j ( 5 q ( 32 q+49)+79)-2 l+q ( q ( q ( 135 q+308)+205)-25 s+32)-27 s-22\right)\nonumber\\ & + & z^3 \big[-\big(40 j^3+j^2 ( 20 q ( 1 - 2 q)+57)+j ( -4 l+2 q ( q ( q ( 90 q+143)-103)+4 ( 5 s-26))+6 s-32)-4 ( l-2 q+6 s+1)\nonumber\\ & + & q ( 4 l ( 3 q+1)+q ( q ( 184 - 3 q ( q ( 45 q+86)-23))+108))+2 q ( q ( 15 q+31)-18 ) s+5 s^2\big)\big]\nonumber\\ & -&120 ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)\big]\big\ } \big\{24 h_{0 } \big[z \big(10 j^2-j ( 5 q ( 21 q+22)+27)-l+q ( q ( q ( 105 q+149)+75)\nonumber\\ & -&15 s+6)-11 s-6\big)-5 ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)\big]\big\}^{-1}\nonumber\ , , \\\nonumber\\ p_{32}&= & \big\{z \big[z^2 \left(-4 j^2+j ( q ( 23 - 6 q)+7)+q \left(q \left(9 q^2 - 30 q-13\right)-3 s-4\right)+3 s+2\right)+6 z ( j ( 8 q+7)\nonumber\\ & -&q ( q ( 9 q+17)+6)+s+4)+24 ( j - q ( 3 q+1)+1)\big]\big\ } \big\{6 h_{0 } ( z ( 5 j ( 2 q+1)-q ( 3 q+2 ) ( 5 q+1)+s+2)\nonumber\\ & + & 4 ( j - q ( 3 q+1)+1))\big\}^{-1}\nonumber\ , , \\\nonumber\\ p_{23}&= & \big\{6 z ( z ( j ( 6 q+9)-q ( 6 q ( q+2)+19)+s+9)+2 ( 2 j - q ( 3 q+8)+5))\big\ } \big\{h_{0 } \big[z^2 \big(4 j^2+j ( q ( 6 q-23)-7)\nonumber\\ & + & q \left(q \left(-9 q^2 + 30 q+13\right)+3 s+4\right)-3 s-2\big)+6 z ( j ( 8 q+7)-q ( q ( 9 q+17)+6)+s+4)+12 ( 2 j - q ( 3 q+8)+5)\big]\big\}^{-1}\,,\nonumber\end{aligned}\ ] ] & & & reference + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + & & & + to conclude , we also include here all the approximations for the functions and corresponding to the pad approximations for used in this paper up to order , following the prescription indicated in sec .[ subsec : bao ] . 
starting from the expressions above and using eq .( [ hfromdl ] ) , one obtains the corresponding functions as \big[24 ( 24 \nonumber\\ & + & z^2 ( 2 - 4 j + 4 q + 6 q^2 + 2 ( -1 + j ( 5 + 6 q ) - 3 q ( 1 + 2 q ( 1 + q ) ) + s ) z + 3 ( 9 + j ( 9 + 6 q ) - q ( 19 + 6 q ( 2 + q ) ) + s ) z^2))\big]^{-1 } \nonumber\ , , \\ \nonumber\\ h_{22}&= & -\big[(1 + z)^2 ( 12 ( 5 + 2 j - q ( 8 + 3 q ) ) + 6 ( 4 + j ( 7 + 8 q ) - q ( 6 + q ( 17 + 9 q ) ) + s ) z + ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) \nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^2)^2 h_{0}\big ] \big[6 ( -24 ( 5 + 2 j - q ( 8 + 3 q))^2 - 24 ( 5 + 2 j - q ( 8 + 3 q ) ) ( 9 + j ( 9 + 6 q ) \nonumber\\ & - & q ( 19 + 6 q ( 2 + q ) ) + s ) z + 2 ( -268 + 8 j^3 - 9 j^2 ( 23 + 4 q ( 11 + 4 q ) ) - q ( -1056 + q ( 384 + q ( 920 + 27 q ( 49 + q ( 22 + 5 q)))))\nonumber\\ & + & 6 j ( -76 + q ( 89 + q ( 236 + 9 q ( 18 + 5 q ) ) - 6 s ) - 9 s ) - 54 s + 6 q ( 19 + 6 q ( 2 + q ) ) s - 3 s^2 ) z^2 \nonumber\\ & + & 4 ( 5 + 2 j - q ( 8 + 3 q ) ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) - 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^3\nonumber\\ & + & ( 9 + j ( 9 + 6 q ) -q ( 19 + 6 q ( 2 + q ) ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) - 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^4)\big]^{-1 } \nonumber\ , , \\ \nonumber\\ h_{31}&= & -\big[6 ( 1 + z)^2 ( 4 ( 1 + j - q ( 1 + 3 q ) ) + ( 2 + 5 j ( 1 + 2 q ) - q ( 2 + 3 q ) ( 1 + 5 q ) + s ) z)^2 h_{0}\big ] \big[-96 ( 1 + j - q ( 1 + 3 q))^2 \nonumber\\ & - & 48 ( 1 + j - q ( 1 + 3 q ) ) ( 4 + j ( 7 + 8 q ) - q ( 6 + q ( 17 + 9 q ) ) + s ) z + 6 ( 8 j^3 - j^2 ( 49 + 4 q ( 39 + 23 q ) ) \nonumber\\ & - & q ( -56 + q ( -128 + q ( 112 + q ( 509 + 462 q + 81 q^2 ) ) ) ) + 2 j ( -34 + q ( -2 + q ( 205 + q ( 281 + 78 q ) ) - 6 s ) - 9 s ) \nonumber\\ & + & 2 q ( 10 + 3 q ( 7 + q ) ) s - s^2 - 4 ( 5 + 3 s ) ) z^2 + 2 ( 6 + j ( 9 + 10 q ) - q ( 6 + 5 q ( 5 + 3 q ) ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) \nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^3 + ( 2 + 5 j ( 1 + 2 q ) - q ( 2 + 3 q ) ( 1 + 5 q ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) \nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^4\big]^{-1 } \nonumber\,,\end{aligned}\ ] ] where all the cosmographic parameters have been evaluated at . 
afterwards , according to eq .( [ defdv ] ) , the corresponding functions are also reported : \big[h_0 ^ 3 ( z+1)^4 ( j z - q ( 3 q z+z+3)+z+3)^4\big]^{-1}\big)^{1/3 } \nonumber\ , , \\\nonumber ( d_{v})_{13}&= & 24 \big[\big(z^3 ( 24 + z^2 ( 2 - 4 j + 4 q + 6 q^2 + 2 ( -1 + j ( 5 + 6 q ) - 3 q ( 1 + 2 q ( 1 + q ) ) + s ) z + 3 ( 9 + j ( 9 + 6 q)\nonumber\\ & - & q ( 19 + 6 q ( 2 + q ) ) + s ) z^2))\big ) \big((1 + z)^4 ( 24 + 12 ( -1 + q ) z + 2 ( 5 + 2 j - q ( 8 + 3 q ) ) z^2 - ( 9 + j ( 9 + 6 q ) \nonumber\\ & - & q ( 19 + 6 q ( 2 + q ) ) + s ) z^3)^4 h_{0}^3\big)^{-1}\big]^{1/3 } \nonumber\ , , \\\nonumber ( d_{v})_{22}&= & -6 \big[\big(z^3 ( 2 ( 5 + 2 j - q ( 8 + 3 q ) ) + ( 9 + j ( 9 + 6 q ) - q ( 19 + 6 q ( 2 + q ) ) + s ) z)^2 ( -24 ( 5 + 2 j - q ( 8 + 3 q))^2 \nonumber\\ & - & 24 ( 5 + 2 j - q ( 8 + 3 q ) ) ( 9 + j ( 9 + 6 q ) - q ( 19 + 6 q ( 2 + q ) ) + s ) z + 2 ( -268 + 8 j^3 - 9 j^2 ( 23 + 4 q ( 11 + 4 q ) ) \nonumber\\ & - & q ( -1056 + q ( 384 + q ( 920 + 27 q ( 49 + q ( 22 + 5 q ) ) ) ) ) + 6 j ( -76 + q ( 89 + q ( 236 + 9 q ( 18 + 5 q ) ) - 6 s ) - 9 s)\nonumber\\ & - & 54 s + 6 q ( 19 + 6 q ( 2 + q ) ) s - 3 s^2 ) z^2 + 4 ( 5 + 2 j - q ( 8 + 3 q ) ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) - 3 s \nonumber\\ & + & q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^3 + ( 9 + j ( 9 + 6 q ) - q ( 19 + 6 q ( 2 + q ) ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) \nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^4)\big ) \big((1 + z)^4 ( 12 ( 5 + 2 j - q ( 8 + 3 q ) ) + 6 ( 4 + j ( 7 + 8 q ) \nonumber\\ & - & q ( 6 + q ( 17 + 9 q ) ) + s ) z + ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) - 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^2)^4 h_{0}^3\big)^{-1}\big]^{1/3 } \nonumber\ , , \\ \nonumber ( d_{v})_{31}&= & -\frac{1}{6 } \big[\big(z^3 ( 24 ( 1 + j - q ( 1 + 3 q ) ) + 6 ( 4 + j ( 7 + 8 q ) - q ( 6 + q ( 17 + 9 q ) ) + s ) z - ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q))\nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^2)^2 ( -96 ( 1 + j - q ( 1 + 3 q))^2 - 48 ( 1 + j - q ( 1 + 3 q ) ) ( 4 + j ( 7 + 8 q ) \nonumber\\ & - & q ( 6 + q ( 17 + 9 q ) ) + s ) z + 6 ( 8 j^3 - j^2 ( 49 + 4 q ( 39 + 23 q ) ) - q ( -56 + q ( -128 + q ( 112 + q ( 509 + 462 q + 81 q^2))))\nonumber\\ & + & 2 j ( -34 + q ( -2 + q ( 205 + q ( 281 + 78 q ) ) - 6 s ) - 9 s ) + 2 q ( 10 + 3 q ( 7 + q ) ) s - s^2 - 4 ( 5 + 3 s ) ) z^2 \nonumber\\ & + & 2 ( 6 + j ( 9 + 10 q ) - q ( 6 + 5 q ( 5 + 3 q ) ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) - 3 s \nonumber\\ & + & q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^3 + ( 2 + 5 j ( 1 + 2 q ) - q ( 2 + 3 q ) ( 1 + 5 q ) + s ) ( -2 + 4 j^2 + j ( -7 + q ( -23 + 6 q ) ) \nonumber\\ & - & 3 s + q ( 4 + q ( 13 + 30 q - 9 q^2 ) + 3 s ) ) z^4)\big ) \big((1 + z)^4 ( 4 ( 1 + j - q ( 1 + 3 q ) ) + ( 2 + 5 j ( 1 + 2 q ) \nonumber\\ & - & q ( 2 + 3 q ) ( 1 + 5 q ) + s ) z)^4 h_{0}^3\big)^{-1}\big]^{1/3 } \nonumber\ , , \\ \nonumber\end{aligned}\ ] ] k. bamba , s. capozziello , s. nojiri , s. d. odintsov , astrophys .space sci ., 342 , 155 - 228 , ( 2012 ) ; r. maartens , k. koyama , liv .relat . , 13 , 5 , ( 2010 ) ; v. sahni , a. a. starobinsky , int . j. modd , * 15 * , 2105 , ( 2006 ) ; r. bousso , rev .phys . , * 74 * , 825 - 874 , ( 2002 ) .g. iannone , o. luongo , europh .lett . , * 94 * , 49002 , ( 2011 ) ; c. armendriz - picn , v. mukhanov , and p. j. steinhardt , phys . rev . lett . *85 * , 4438 , ( 2000 ) ; c. armendriz - picn , v. mukhanov , and p. j. steinhardt , phys . rev .d * 63 * , 103510 , ( 2001 ) ; a. aviles , l. 
bonanno , o. luongo , h. quevedo , phys .d , * 84 * , 103520 , ( 2011 ) ; c. armendriz - picn , t. damour , and v. mukhanov , phys . lett .b * 458 * , 209 , ( 1999 ) ; j. garriga and g. r. dvali , g. gabadadze and m. porrati , phys .b * 485 * , 208 , ( 2000 ) ; s. capozziello , v. f. cardone , s. carloni and a. troisi , int . j. modd * 12 * , 1969 , ( 2003 ) .m. seikel , c. clarkson , m. smith , jcap , * 06 * , 036 , ( 2012 ) ; t. holsclaw , u. alam , b. sanso , h. lee , k. heitmann , s. habib , d. higdon , phys .lett . * 105 * , 241302 , ( 2010 ) ; t. holsclaw , u. alam , b. sanso , h. lee , k. heitmann , s. habib , d. higdon , phys .d , * 82 * , 103502 , ( 2010 ) ; e. piedipalumbo , e. della moglie , m. de laurentis , p. scudellaro , arxiv[astro - ph]:1311.0995 , ( 2013 ) ; m. demianski , e. piedipalumbo , c. rubano , p. scudellaro , mon . not .soc . , * 426 * , 1396 - 1415 , ( 2012 ) ._ sur la representation approche dune fonction par des fractions rationnelles _ , ann .sci ecole norm ., * 9 * , 93 , 1892 ; v. nestoridis , journal of contemporary mathematics analysis , * 47 * , 4 , ( 2012 ) ; g. a. baker , jr . , j. math . phys . * 10 * , 814 , ( 1969 ) ; m. barnsley , j. math . phys .* 14 * , 299 , ( 1973 ) .m. della morte , b. jger , a. jttner and h. wittig , jhep , * 1203 * , 055 , ( 2012 ) ; w. b. jones and w. j. thron . , _ continued fractions . analytic theory and applications _ ,volume 11 of encyclopedia of mathematics and its applications .addison - wesley publishing co. , 1980 ; s. g. krantz and h. r. parks , _ a primer of real analytic functions _ , birkuser , 1992 .
|
We propose a novel approach for parameterizing the luminosity distance, based on the use of rational "Padé" approximations. This new technique extends standard Taylor treatments, overcoming possible convergence issues at high redshifts that plague standard cosmography. Indeed, we show that Padé expansions enable us to confidently use data over a larger redshift interval than the usual Taylor series. To show this property in detail, we propose several Padé expansions and compare these approximations with cosmic data, thus obtaining cosmographic bounds from the observable universe in all cases. In particular, we fit Padé luminosity distances to observational data from different uncorrelated surveys. We employ Union 2.1 supernova data, baryonic acoustic oscillations, Hubble Space Telescope measurements and differential age data. In so doing, we also demonstrate that the use of Padé approximants can improve on the analyses carried out by introducing cosmographic auxiliary variables, a standard technique usually employed in cosmography to overcome the divergence problem. Moreover, for each drawback of standard cosmography we emphasize possible resolutions in the framework of Padé approximants. In particular, we investigate how to reduce systematics, how to overcome the degeneracy among cosmological coefficients, how to treat divergences, and so forth. As a result, we show that cosmic bounds are actually refined through the use of Padé treatments, and the best values of the cosmographic parameters derived in this way show slight departures from the standard cosmological paradigm. Although all our results are perfectly consistent with the ΛCDM model, evolving dark energy components different from a pure cosmological constant are not definitively ruled out. Finally, we use our outcomes to reconstruct the effective equation of state of the universe, constraining the dark energy term in a model-independent way.
|
Most of the studies on the phenomenon of stochastic resonance (SR) in dynamical systems have been devoted to systems driven by sinusoidal terms (see the literature for reviews). Several analytical approximations have been put forward to explain SR. In the approach of McNamara and Wiesenfeld, the Langevin dynamics is replaced by a reduced two-state model that neglects the intra-well dynamics. The general ideas of linear response theory (LRT) have been applied to situations where the input amplitude is small. Floquet theory has been applied to the corresponding Fokker-Planck description. For very low input frequencies, an adiabatic ansatz has been invoked. Even though these alternative analytical approaches provide an explanation of SR in different regions of parameter values, their precise limits of validity remain to be determined. In recent work, we have explored the validity of LRT for sinusoidal and multifrequency input signals of low frequency. Our results indicate a breakdown of the LRT description of the average behavior for low-frequency, subthreshold amplitude inputs. Several quantifiers have been used to characterize SR in noisy, continuous systems. The average output amplitude, or spectral amplification (SPA), and the phase of the output average have both been studied previously. Those parameters, as well as the signal-to-noise ratio (SNR), exhibit a non-monotonic behavior with the noise strength which is representative of SR. An important quantity is the SR-gain, defined as the ratio of the SNR of the output over the SNR of the input. It has been repeatedly pointed out that the SR-gain cannot exceed unity as long as the system operates in a regime described by LRT. Beyond LRT there exists no physical reason that prevents the SR-gain from being larger than unity, as has been demonstrated for super-threshold sinusoidal input signals, and in analog experiments for subthreshold input signals with many Fourier components and a small duty cycle. In this work, we will make use of numerical solutions of the Langevin equation, following the methodology of previous work, to analyze SR in noisy bistable systems driven by a periodic piecewise constant signal taking two amplitude values of opposite signs (a rectangular signal; see fig. [fuerza]). There are several relevant time scales in the dynamics of these systems: i) the time interval within each half period of the driving force, during which the diffusing particle sees a constant, asymmetric two-well potential; ii) the time scale associated with the inter-well transitions in both directions; and iii) the time scale associated with the intra-well dynamics. The inter-well and intra-well time scales depend basically on the noise strength and the amplitude of the driving term. The dependence of these two time scales on those parameters is, however, very different, being much more pronounced for the inter-well one. Typically, for the range of parameter values associated with SR, the intra-well time scale is shorter than the inter-well one. We will evaluate the long-time average behavior of the output and the second cumulant. These two quantities were studied some time ago by two of us for periodic rectangular driving signals.
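Before turning to those quantities, it is useful to locate the noise range in which the inter-well hopping can follow the driving at all. The sketch below assumes the archetypal quartic double well V(x) = x⁴/4 - x²/2 and the noise convention ⟨ξ(t)ξ(s)⟩ = 2Dδ(t-s) (both are conventions adopted only for this illustration), and compares the weak-noise Kramers escape time with the half-period of a rectangular driving of placeholder frequency.

```python
import numpy as np

# Archetypal overdamped bistable system: V(x) = x**4/4 - x**2/2, noise <xi xi> = 2 D delta
# (these conventions are assumptions of this sketch)
dV = 0.25                        # barrier height, V(0) - V(+-1)
curv = np.sqrt(2.0)              # sqrt(V''(x_min) * |V''(x_barrier)|) = sqrt(2 * 1)

def kramers_time(D):
    """Mean escape time out of one well (weak-noise Kramers estimate)."""
    rate = curv / (2.0 * np.pi) * np.exp(-dV / D)
    return 1.0 / rate

T = 2.0 * np.pi / 0.01           # driving period for a placeholder frequency Omega = 0.01
half_period = T / 2.0

for D in (0.02, 0.04, 0.06, 0.1, 0.2):
    print('D = %.2f   T_K = %10.1f   T_K < T/2 : %s'
          % (D, kramers_time(D), kramers_time(D) < half_period))
```

The crossover printed by this sketch gives a rough idea of the noise strengths at which inter-well transitions become fast enough to follow each half-period of the rectangular driving.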
here , we will further extend our work to the analysis of the correlation function and its coherent and incoherent parts .the knowledge of all these quantities provides a very useful information for the explanation of sr as indicated by the non - monotonic behavior with the noise strength of the output amplitude and the snr .in particular , the knowledge of the incoherent part of the correlation function is of outmost importance for a correct determination of the snr .furthermore , for a given subthreshold amplitude , we will demonstrate that , if there exist a range of noise values such that is shorter than , then it is possible to observe stochastic amplification and , simultaneously , sr - gains larger than unity .this is strictly forbidden by linear response theory , as we have recently shown .thus , a simultaneous appearance of stochastic amplification and sr - gains above implies a strong violation of linear response theory .let us consider a system characterized by a single degree of freedom , , subject to the action of a zero average gaussian white noise with and driven by an external periodic signal with period . in the langevin description ,its dynamics is generated by the equation +f(t)+\xi(t).\ ] ] the corresponding linear fokker - planck equation ( fpe ) for the probability density reads where .\ ] ] in the expressions above , represents the derivative of the potential . in this work, we will consider a bistable potential .the periodicity of the external driving allows its fourier series expansion in the harmonics of the fundamental frequency , i.e. , ,\ ] ] with the fourier coefficients , and , given by here , we are assuming that the cycle average of the external driving over its period equals zero . in this work, we will focus our attention to multi - frequency input forces with `` rectangular '' shape given by as sketched in fig .[ fuerza ] .the external force remains constant at a value during each half - period and changes sign for the second half of the period .the duty cycle of the signal is defined by the time span the signal is nonzero over the total period of the signal ; thus , the rectangular signal in eq .( [ pulse ] ) consequently possesses a duty cycle of unity .the two - time correlation function in the limit is given by where is the time - periodic , asymptotic long time solution of the fpe and the quantity denotes the two - time conditional probability density that the stochastic variable will have a value near at time if its value at time was exactly .it can been shown that , in the limit , the two - time correlation function becomes a periodic function of with the period of the external driving .then , we define the one - time correlation function , , as the average of the two - time correlation function over a period of the external driving , i.e. , the correlation function can be written exactly as the sum of two contributions : a coherent part , , which is periodic in with period , and an incoherent part which decays to for large values of .the coherent part is given by where is the average value evaluated with the asymptotic form of the probability density , .according to mcnamara and wiesenfeld , the output snr is defined in terms of the fourier transform of the coherent and incoherent parts of . 
asthe correlation function is even in time and we evaluate its time dependence for , it is convenient to use its fourier cosine transform , defined as the value of the output snr is then obtained from : note that this definition of the snr differs by a factor , stemming from the same contribution at , from the definitions used in earlier works .the periodicity of the coherent part gives rise to delta peaks in the spectrum .thus , the only contribution to the numerator in eq .( [ snr ] ) stems from the coherent part of the correlation function .the evaluation of the snr requires the knowledge of the fourier components of and at the fundamental frequency of the driving force .thus , rather than knowledge of the entire fourier spectrum , only two well defined numerical quadratures are needed .these are : where and the snr for an input signal is given by the sr - gain is consequently defined as the ratio of the snr of the output over the snr of the input , namely , trajectories , , are generated by numerically integrating the langevin equation [ eq . ( [ langev ] ) ] for every realization of the white noise , starting from a given initial condition .the numerical solution is based on the algorithm developed by greenside and helfand ( consult also the appendix in ref . ) . after allowing for a relaxation transient stage ,we start recording the time evolution of each random trajectory for many different trajectories .then , we construct the long time average value , and the second cumulant , ^ 2 - \left [ \frac 1n \sum_{j=1}^n x^{(j)}(t ) \right ] ^2,\ ] ] where is the number of stochastic trajectories considered .we also evaluate the two - time ( and ) correlation function , i.e. , as well as the product of the averages \left [ \frac 1n \sum_{j=1}^n x^{(j)}(t ) \right ] .\ ] ] the correlation function and its coherent part are then obtained using their definitions in eqs .( [ ctau ] ) and ( [ chtau ] ) , performing the cycle average over one period of .the difference between the values of and allows us to obtain the values for .it is then straightforward to evaluate the fourier component of and the fourier transform of at the driving frequency by numerical quadrature . with that information , the numerator and the denominator for the output snr [ cf .eqs.([snr1 ] ) , ( [ num ] ) and ( [ den ] ) ] , as well as the sr - gain [ cf .( [ gain ] ) ] , are obtained .consider an external driving of the type sketched in fig .[ fuerza ] with parameter values , .this amplitude is well below its threshold value defined , for each driving frequency , as the minimum amplitude that can induce repeated transitions between the minima of in the absence of noise . for the input considered here ,the threshold amplitude is .note that this threshold value for the amplitude increases with increasing driving frequency . 
in fig .[ wp1 ] we depict with several panels the behavior of the first two cumulants , and , for several representative values of [ from top to bottom ( panel a ) , ( panel b ) , ( panel c ) , ( panel d ) and ( panel e ) ] .notice that due to the transients , the time at which we start recording data , in the graphs , does not necessarily coincide with the start of an external cycle .the average is periodic with the period of the driving force , while the second cumulant , due to the reflection symmetry of the potential , is periodic with a period half of the period of the forcing term .next we consider the case of small noise intensity ( say , as in panel a ) .the noise induces jumps between the wells . in each random trajectory ,a jump between the wells has a very short duration , but the instants of time at which they take place for the different stochastic trajectories are randomly distributed during a half - cycle . at this small noise strength ,the jumps are basically towards the lowest minimum .thus , because of this statistical effect , the average behavior depicts the smooth evolution depicted in panel a of fig .[ wp1 ] , without sudden transitions between the wells .the evolution of the second cumulant adds relevant information .the fact that it is rather large during most of a period indicates that the probability density is basically bimodal during most of the external cycle .it is only during very short time intervals around each half - period that the probability distribution becomes monomodal around one of the minima , and , consequently , the second cumulant is small .the bimodal character arises from the fact that the noise is so small in comparison with the barrier heights that jumps over the barrier are rather infrequent during each half a period . as the noise strength increases, the time evolution of follows closely the shape of the external force , ( cf .see panels b , c , d in fig .[ wp1 ] for ) .this behavior indicates that , for these parameter values , the jumps in the different random trajectories are concentrated within short time intervals around the instants of time at which the driving force switches sign .the second cumulant remains very small during most of a period , except for short time intervals around the switching times of the external driver .thus , for these intermediate values of , the probability distribution , is basically monomodal , except for small time intervals around the switching instants of time of the periodic driver . finally , as the noise strengthis further increased ( ) , the probability distribution remains very broad most of the time .even though a large majority of random trajectories will still jump over the barrier in synchrony with the switching times of the driver , the noise is so large in comparison with the barrier heights , that the probability of crossings over the barrier in both directions can not be neglected at any time during each half cycle .the probability distribution remains bimodal during a whole period , but asymmetric : the larger fraction of the probability accumulates around the corresponding lower minima of the potential during each cycle .therefore , the average output amplitude decreases , while the second cumulant depicts plateaus at higher values than for smaller noise strengths ( compare panels e and c in fig .[ wp1 ] ) . 
in fig .[ wp2 ] , we plot the coherent ( left panels ) , , and incoherent ( right panels ) , , components of the correlation function for the same parameter values as in fig .the coherent part shows oscillations with a period equal to that of the driving force .its shape changes with .the amplitude of the coherent part does not grow monotonically with .rather , it maximizes at , which is consistent with the observed behavior of in fig .this is expected as the evaluation of involves only the time behavior of at two different instants of time .two features of the behavior of are relevant : its initial value and its decay time .the initial value of the incoherent contribution , , is given by the cycle average of the second cumulant .it has a non - monotonic behavior with . for , is large , consistent with the fact that the second cumulant at this noise strength is appreciably different from during a substantial part of a period . as the value of increases , ( ) , the value of decreases .this is expected as the second cumulant is large just during those small time intervals where most of the forward transitions take place every half - period . for still larger values of are frequent forward and backward jumps that keep the stationary probability bimodal , and therefore , the initial value increases . for ,the decay of is very slow , although the decay time is still shorter than the duration of half a period of the driving force .as increases , the decay time of the incoherent part becomes shorter .it is worth to point out that the intra - well noisy dynamics manifests itself in the behavior of .this is most clearly confirmed noticing the fast initial decay observed in panel d. for smaller values of , this feature is masked by the long total relaxation time scale , while for , the noise strength is so large that there is not a clear - cut separation between inter and intra - well time scales .the above considerations allow us to rationalize the behavior of the several quantifiers used to characterize sr .their behaviors with for , are depicted in fig .it should be noticed that the lowest value of the noise strength used in the numerical solution of the langevin equation is . for this noise strength ,the values of and are very small , although not zero . for even lower noise strengthsthe task becomes computationally very demanding and expensive , due to the extremely slow decay of the correlations . for small , however , one does expect to be larger than , and , consequently , an increase of the numerical as is lowered . the quantity defined in eq .( [ num ] ) depicts a non - monotonic dependence on typical of the sr phenomenon .its behavior is expected from the dependence of the amplitude of with in fig .[ wp2 ] . a non - monotonic behavior with for the numerically evaluated is also observed . the initial value , and the decay time of are important in the evaluation of [ see eq .( [ den ] ) ] .for , the decay time of the incoherent part of the correlation function is longer than half a period of the driving force , while for , it is somewhat shorter than . consistently with eq .( [ den ] ) , the value of the integral for is smaller than for . as is further increased , the influence of the cosine factor in eq .( [ den ] ) becomes less important as the decay time is much shorter than .the drastic fall in the values observed for is due to the decrease of with ( see panels a - c in fig . [ wp2 ] ) and the shortening of the decay time . 
as is increased further , increases and , consequently , also increases slightly . taking into account the definition of the snr , [ cf .( [ snr1 ] ) ] , its behavior with is not surprising .the numerically obtained snr peaks at , a slightly different value of from the one at which peaks .the sr - gain is defined in eq .( [ gain ] ) .the numerically determined sr - gain shows a most interesting feature : we observe a non - monotonic behavior versus , with values for the gain exceeding unity ( ! ) for a whole range of noise strengths .this is strictly forbidden within lrt ; therefore , the fact that the sr - gain can assume values larger than unity reflects a manifestation of the inadequacy of lrt to describe the system dynamics for the parameter values considered . to rationalize this anomalous sr - gain behavior, we notice that the role of the noise in the dynamics is twofold . on the one hand, it controls the decay time of . on the other hand ,the noise value is relevant to ellucidate whether the one - time probability distribution is basically monomodal or bimodal during most of the cycle and , consequently , it controls the initial value . as discussed above ,if is small , the decay time is very large compared to , and the one - time probability distribution is essentially bimodal . for large values of , the decay of is fast enough , and the distribution is also bimodal .the large sr - gain obtained here requires the existence of a range of intermediate noise values such that : i ) decays in a much shorter time scale than and , ii ) the one - time probability distribution remains monomodal during most of the external cycle .as mentioned before , there are several time scales which are important for the phenomenon of stochastic amplification and gain . in the previous subsectionwe have considered an input frequency small enough so that the inequality holds for a range of noise values .next , we shall analyze the system response to a driving force with fundamental frequency , ten times larger than in the previous case .we will take the same input amplitude as in the previous case , , which is still subthreshold . for this input frequency ,the threshold value for the amplitude strength is determined by numerically solving the deterministic equation to yield .the behavior of the first two cumulants for several values of the noise strength is depicted in fig .[ wp11 ] [ from top to bottom ( panel a ) , ( panel b ) , ( panel c ) , ( panel d ) and ( panel e ) ] . for all values of ,the second cumulant remains large for most of each half - period .by contrast with the lower frequency case , we detect no values of for which the probability distribution is monomodal for a significant fraction of each half period . in fig .[ wp22 ] the behavior of the coherent ( left panels ) and incoherent ( right panels ) parts of the correlation function are presented for ( from top to bottom ) ( panel a ) , ( panel b ) , ( panel c ) , ( panel d ) and ( panel e ) .the amplitude of the coherent oscillations shows a nonmonotonic behavior with .the incoherent part has initial values which remain very large in comparison with the corresponding ones for ( compare with fig .[ wp2 ] ) , consistently with the large value of the second cumulant .the decay times are roughly the same for both frequencies . in fig .[ wp33 ] we show the behavior of the several sr quantifiers as a function of . 
the comparison of figs .[ wp3 ] and [ wp33 ] indicates that , and have the same qualitative behavior for both frequencies .the non - monotonic dependence on of and are indicative of the existence of sr ( for both frequencies ) for the subthreshold input amplitude and in the ranges of values considered .the most relevant quantitative difference is that for the sr - gain remains less than unity .with this work , we have analyzed the phenomenon of sr within the context of a noisy , bistable symmetric system driven by time periodic , rectangular forcing possessing a duty cycle of unity .the numerical solution of the langevin equation allows us to analyze the long - time behavior of the average , the second cumulant and the coherent and incoherent parts of the correlation function . for subthreshold input signals we determined the snr , together with itsnumerator and denominator evaluated separately , for a wide range of noise strengths . as a main result we find the simultaneous existence of a typical non - monotonic behavior versus the noise strength of several quantifiers associated to sr ; in particular sr - gains larger than unityare possible for a subthreshold rectangular forcing possessing a duty cycle of unity .this finding is at variance with the recent claim in refs . that pulse - like signals with small duty cycles are needed to obtain sr - gains larger than unity .this unexpected result occurs indeed whenever the inequality holds .this is most easily achieved with low frequency inputs . as the input frequency increases , that inequality is not satisfied for sufficiently small values of : even though sr then still exists , it is not accompanied by sr - gains exceeding unity . furthermore , in refs . , sr - gains larger than unity are only obtained with input amplitudes larger than . by contrast , in this work , we have shown that such a large value for the input amplitude is not needed ( we have used ) .the simultaneous occurrance of sr and sr - gains larger than unity is associated to the fact that , for some range of noise values , the decay time of the incoherent part of the correlation function is much shorter than and also the probability distribution is basically monomodal during most of the cycle of the driving force .we acknowledge the support of the direccin general de enseanza superior of spain ( bfm2002 - 03822 ) , the junta de andaluca , the daad program `` acciones integradas '' ( p.h . , m. m. ) and the sonderforschungsbereich 486 ( project a10 ) of the deutsche forschungsgemeinschaft .a. r. bulsara and l. gammaitoni , physics today * 49 * , no . 3 , 39 ( 1996 ) . l. gammaitoni , p. hnggi , p. jung , and f. marchesoni , rev .phys . * 70 * , 223 ( 1998 ) . k. wiesenfeld and f. jaramillo , chaos * 8 * , 539 ( 1998 ) .v. s. anishchenko , a. b. neiman , f. moss , and l. schimansky - geier , usp .nauk * 169 * , 7 ( 1999 ) .p. hnggi , chemphyschem * 3 * , 285 ( 2002 ) .b. mcnamara and k. wiesenfeld , phys .a * 39 * , 4854 ( 1989 ) .p. hnggi , helv .acta , * 51 * , 202 ( 1978 ) .p. hnggi , and h. thomas , phys. rep . * 88 * , 207 ( 1982 ) .p. jung and p. hnggi , europhys* 8 * , 505 ( 1989 ) .l. gammaitoni , e. menichella - saetta , s. santucci , f. marchesoni , and c. presilla , phys .a * 40 * , 2114 ( 1989 ) .p. jung and p. hnggi , phys .a * 44 * , 8032 ( 1991 ) .j. casado - pascual , j. gmez - ordez , m. morillo , and p. hnggi , europhys .lett . * 58 * , 342 ( 2002 ) .j. casado - pascual , j. gmez - ordez , m. morillo , and p. 
hnggi , fluct .noise lett .* 2 * , l127 ( 2002 ) .m. i. dykman , r. mannella , p. v. e. mcclintock , and n. g. stocks , phys .* 68 * , 2985 ( 1992 ) .p. jung and p. hnggi , z. physik b * 90 * , 255 ( 1993 ) .m. deweese and w. bialek , il nuovo cimento * 17d * , 733 ( 1995 ) .jess casado - pascual , claus denk , jos gmez - ordez , manuel morillo , and peter hnggi , phys .e * 67 * , 036109 ( 2003 ) .p. hnggi , m. inchiosa , d. fogliatti and a. r. bulsara , phys .e * 62 * , 6155 ( 2000 ) .z. gingl , r. vajtai , and p. makra , in `` noise in physical systems and 1/f fluctuations '' , icnf 2001 , g. bosman , editor ( world scientific , 2002 ) , pp .545 - 548 .z. gingl , p. makra , and r. vajtai , fluct .noise lett .* 1 * , l181 , ( 2001 ) .m. morillo and j. gmez - ordez , phys .e * 51 * , 999 ( 1995 ) .e. helfand , bell sci .j. , * 58 * , 2289 ( 1979 ) .h. s. greenside and e. helfand , bell sci .j. , * 60 * , 1927 ( 1981 ) .p. makra , z. gingl and l. b. kish , fluct .noise lett .* 1 * , l147 , ( 2002 ). time behavior of the average ( solid lines ) and the second cumulant ( dashed lines ) for a rectangular driving force with duty cycle 1 , fundamental frequency and subthreshold amplitude for several values of the noise strength : ( panel a ) , ( panel b ) , ( panel c ) , ( panel d ) , ( panel e ) .notice that due to the transients , in the graphs , does not necessarily coincide with the start of an external cycle.,width=377 ] time behavior of ( left panels ) and ( right panels ) for a rectangular driving force with duty cycle 1 , fundamental frequency and subthreshold amplitude for several values of the noise strength : ( panel a ) , ( panel b ) , ( panel c ) , ( panel d ) , ( panel e).,width=377 ] dependence with of several sr quantifiers : the numerator of the snr ( ) , its denominator ( ) , the output snr ( ) and the sr - gain ( ) for a rectangular driving force with duty cycle 1 , fundamental frequency and subthreshold amplitude .,width=377 ]
|
the main objective of this work is to explore aspects of stochastic resonance ( sr ) in noisy bistable , symmetric systems driven by subthreshold periodic rectangular external signals possessing a _ large _ duty cycle of unity . using a precise numerical solution of the langevin equation , we carry out a detailed analysis of the behavior of the first two cumulant averages , the correlation function and its coherent and incoherent parts . we also depict the non - monotonic behavior versus the noise strength of several sr quantifiers such as the average output amplitude , i.e. the spectral amplification ( spa ) , the signal - to - noise ratio ( snr ) and the sr - gain . in particular , we find that with _ subthreshold _ amplitudes and for an appropriate duration of the pulses of the driving force the phenomenon of stochastic resonance ( sr ) , is accompanied by sr - gains exceeding unity . this analysis thus sheds new light onto the interplay between nonlinearity and the nonlinear response which in turn yields nontrivial , unexpected sr - gains above unity .
|
this paper deals with _ isostatic _ frameworks , i.e. , pin - jointed bar assemblies , commonly referred to in engineering literature as truss structures , that are both kinematically and statically determinate .such systems are minimally infinitesimally rigid and maximally stress free : they can be termed ` just rigid ' .our ultimate goal is to answer the question posed in the title : when are symmetric pin - jointed frameworks isostatic ? as a first step ,the present paper provides a series of _ necessary _ conditions obeyed by isostatic frameworks that possess symmetry , and also summarizes conjectures and initial results on _ sufficient _ conditions .frameworks provide a model that is useful in applications ranging from civil engineering ( graver , 2001 ) and the study of granular materials ( donev et al . , 2004 ) to biochemistry ( whiteley 2005 ) .many of these model frameworks have symmetry . in applications ,both practical and theoretical advantages accrue when the framework is isostatic . in a number of applications ,point symmetry of the framework appears naturally , and it is therefore of interest to understand the impact of symmetry on the rigidity of the framework .maxwell ( 1864 ) formulated a necessary condition for infinitesimal rigidity , a counting rule for 3d pin - jointed structures , with an obvious counterpart in 2d ; these were later refined by calladine ( 1978 ) .laman ( 1970 ) provided sufficient criteria for infinitesimal rigidity in 2d , but there are well known problems in extending this to 3d ( graver et al . , 1993 ) .the maxwell counting rule , and its extensions , can be re - cast to take account of symmetry ( fowler and guest , 2000 ) using the language of point - group representations ( see , e.g. , bishop , 1973 ) .the symmetry - extended maxwell rule gives additional information from which it has often been possible to detect and explain ` hidden ' mechanisms and states of self - stress in cases where the standard counting rules give insufficient information ( fowler and guest , 2002 ; 2005 , schulze 2008a ) .similar symmetry extensions have been derived for other classical counting rules ( ceulemans and fowler , 1991 ; guest and fowler , 2005 ) . in the present paper, we will show that the symmetry - extended maxwell rule can be used to provide necessary conditions for a finite framework possessing symmetry to be stress - free and infinitesimally rigid , i.e. , isostatic .it turns out that symmetric isostatic frameworks must obey some simply stated restrictions on the counts of structural components that are fixed by various symmetries . for 2d systems ,these restrictions imply that isostatic structures must have symmetries belonging to one of only six point groups . for 3d systems , all point groups are possible , as convex triangulated polyhedra ( isostatic by the theorems of cauchy and dehn ( cauchy 1813 , dehn 1916 ) ) can be constructed in all groups ( section [ sec:3diso ] ) , although restrictions on the placement of structural components may still apply . 
for simplicity in this presentation, we will restrict our configurations to realisations in which all joints are distinct .thus , if we consider an abstract representation of the framework as a graph , with vertices corresponding to joints , and edges corresponding to bars , then we are assuming that the mapping from the graph to the geometry of the framework is injective on the vertices .complications can arise in the non - injective situation , and will be considered separately ( schulze , 2008a ) .the structure of the paper is as follows : maxwell s rule , and its symmetry extended version , are introduced in section [ sec : back ] , where a symmetry - extended version of a necessary condition for a framework to be isostatic is given , namely the equisymmetry of the representations for mechanisms and states of self - stress . in section [ sec : calc ]the calculations are carried out in 2d , leading to restrictions on the symmetries and configurations of 2d isostatic frameworks , and in 3d , leading to restrictions on the placement of structural components with respect to symmetry elements . in section [ sec : laman ] we conjecture sufficient conditions for a framework realized generically for a symmetry group to be isostatic , both in the plane and in 3d .maxwell s rule ( maxwell , 1864 ) in its modern form ( calladine , 1978 ) , expresses a condition for the determinacy of an unsupported , three - dimensional pin - jointed frame , in terms of counts of structural components . in equation ( [ eq : calladine ] ) , is the number of bars , is the number of joints , is the number of infinitesimal internal mechanisms and is the number of states of self - stress .a statically determinate structure has ; a kinematically determinate structure has ; isostatic structures have .the form of ( [ eq : calladine ] ) arises from a comparison of the dimensions of the underlying vector spaces that are associated with the equilibrium , or equivalently the compatibility , relationships for the structure ( pellegrino and calladine , 1986 ) .firstly , the equilibrium relationship can be written as where is the _ equilibrium _ matrix ; is a vector of internal bar forces ( tensions ) , and lies in a vector space of dimension ; is an assignment of externally applied forces , one to each joint , and , as there are possible force components , lies in a vector space of dimension ( this vector space is the tensor product of a -dimensional vector space resulting from assigning a scalar to each joint , and a -dimensional vector space in which a 3d force vector can be defined ) . hence is a matrix .a state of self - stress is a solution to , i.e. , a vector in the nullspace of ; if has rank , the dimension of this nullspace is further , the compatibility relationship can be written as where is the _ compatibility _ matrix ; is a vector of infinitesimal bar extensions , and lies in a vector space of dimension ; is a vector of infinitesimal nodal displacements , there are possible nodal displacements and so lies in a vector space of dimension . hence is a matrix .in fact , it is straightforward to show ( see e.g. , pellegrino and calladine , 1986 ) that is identical to .the matrix is closely related to the rigidity matrix commonly used in the mathematical literature : the rigidity matrix is formed by multiplying each row of by the length of the corresponding bar .of particular relevance here is that fact that the rigidity matrix and have an identical nullspace .a _ mechanism _ is a solution to , i.e. 
, a vector in the left - nullspace of , and the dimension of this space is .however , this space has a basis comprised of internal mechanisms and rigid - body mechanisms , and hence eliminating from ( [ eq : s ] ) and ( [ eq : m ] ) recovers maxwell s equation ( [ eq : calladine ] ) .the above derivation assumes that the system is -dimensional , but it can be applied to -dimensional frameworks , simply replacing by : the scalar formula ( [ eq : calladine ] ) has been shown ( fowler and guest , 2000 ) to be part of a more general symmetry version of maxwell s rule . for a framework with point group symmetry , 3d : ( ) where each is known in applied group theory as a _ representation _ of ( bishop , 1973 ) , or in mathematical group theory as a _ character _ ( james and liebeck , 2001 ) . for any set of objects , can be considered as a vector , or ordered set , of the traces of the transformation matrices that describe the transformation of under each symmetry operation that lies in . in this way , ( [ eq : sm ] )may be considered as a set of equations , one for each class of symmetry operations in .alternatively , and equivalently , each can be written as the sum of irreducible representations / characters of ( bishop , 1973 ) . in ( [ eq : sm ] ) the various sets are sets of bars , joints , mechanisms and states of self - stress ; and are the translational and rotational representations , respectively . calculations using ( [ eq : sm ] ) can be completed by standard manipulations of the character table of the group ( atkins , child and phillips , 1970 ; bishop , 1973 ; altmann and herzig , 1994 ) .the restriction of ( [ eq : sm ] ) to -dimensional systems ( assumed to lie in the -plane ) is made by replacing with and with , as appropriate to the reduced set of rigid - body motions .2d : ( ) examples of the application of ( [ eq : sm ] ) , ( [ eq : sm2 ] ) , with detailed working , can be found in fowler and guest ( 2000 ; 2002 ; 2005 ) , and further background , giving explicit transformation matrices , will be found in kangwai and guest ( 2000 ) . in the context of the present paper , we are interested in isostatic systems , which have , and hence obey the symmetry condition .in fact , the symmetry maxwell equation ( [ eq : sm ] ) , ( [ eq : sm2 ] ) gives the necessary condition , as it can not detect the presence of paired equisymmetric mechanisms and states of self stress .the symmetry - extended maxwell equation corresponds to a set of scalar equations , where is the number of irreducible representations of ( the number of rows in the character table ) , or equivalently the number of conjugacy classes of ( the number of columns in the character table ) .the former view has been used in previous papers ; the latter will be found useful in the present paper for deriving restrictions on isostatic frameworks .that existence of symmetry typically imposes restrictions on isostatic frameworks can be seen from some simple general considerations .consider a framework having point - group symmetry .suppose that we place all bars and joints freely ( so that no bar or joint is mapped onto itself by any symmetry operation ) . both and must then be multiples of , the order of the group : , such a framework be isostatic ?any isostatic framework obeys the scalar maxwell rule with as a necessary condition . 
in three dimensions, we have , and hence : 3d : in two dimensions , we have , and hence : 2d : as and are integers , is restricted to values , , and in 3d , and and in 2d .immediately we have that if the point group order is not one of these special values , it is impossible to construct an isostatic framework with all structural components placed freely : any isostatic framework with ( 2d ) or ( 3d ) must have some components in _ special positions _ ( components that are unshifted by some symmetry operation ) . in the schoenflies notation ( bishop , 1973 ) , the point groups of orders 1 , 2 , 3 and 6 are [ cols= "< , < " , ] ; ( b ) for alone . ] ; ( b ) for . ] from table [ tab:3d ] , the symmetry treatment of the 3d maxwell equation reduces to scalar equations of six types .if , then : ( ) : ( ) : ( ) : ( ) : ( ) : ( ) where a given equation applies when the corresponding symmetry operation is present in .some observations on 3d isostatic frameworks , arising from the above , are : 1 . from ( [ eq:3de ] ) , the framework must satisfy the scalar maxwell rule ( [ eq : calladine ] ) with .2 . from ( [ eq:3dv ] ) , each mirror that is present contains the same number of joints as bars that are unshifted under reflection in that mirror .3 . from ( [ eq:3di ] ) , a centro - symmetric framework has neither a joint nor a bar centered at the inversion centre .4 . for a axis ,( [ eq:3dc2 ] ) has solutions the count refers to both bars that lie along , and those that lie perpendicular to , the axis .however , if a bar were to lie along the axis , it would contribute to and to thus generating a contradiction of ( [ eq:3dc2 ] ) , so that in fact all bars included in must lie perpendicular to the axis .equation ( [ eq:3dcn ] ) can be written , with , as with .the non - negative integer solution , , is possible for all . for factor is rational at , but generates a further distinct solution only for : + : : + and so here , but is unrestricted . : : + implies about the same axis , and hence , and . : : + implies and about the same axis , and hence , and .+ thus is for any , and only in the case may depart from .6 . likewise ,equation ( [ eq:3dsn ] ) can be written , with , as with .the integer solution , , is possible for all . for factor is rational at , but generates no further solutions : + : : + and so . : : + and so . : : + and but implies and hence also . 7 . for a framework with icosahedral ( or ) symmetry ,the requirement that for each 5-fold axis implies that the framework must include a single orbit of vertices that are the vertices of an icosahedron .similarly , for a framework with a or symmetry , the requirement that implies that the framework must include a single orbit of vertices that are the vertices of an octahedron .in contrast to the 2d case , in 3d the symmetry conditions do not exclude any point group . for example , a fully triangulated convex polyhedron , isostatic by the theorem of cauchy and dehn ( cauchy 1813 ; dehn 1916 ) can be constructed to realize any 3d point group .beginning with the regular triangulated polyhedra ( the tetrahedron , octahedron , icosahedron ) , infinite families of isostatic frameworks can be constructed by expansions of these polyhedra using operations of truncation and capping . 
for example , to generate isostatic frameworks with only the rotational symmetries of a given triangulated polyhedron , we can ` cap ' each face with a twisted octahedron , consistent with the rotational symmetries of the underlying polyhedron : the resultant polyhedron will be an isostatic framework with the rotational symmetries of the underlying polyhedron , but none of the reflection symmetries .an example of the capping of a regular octahedron is shown in figure [ fig : twistcap ] .similar techniques can be applied to create polyhedra for any of the point groups .+ ( a ) ( b ) one interesting possibility arises from consideration of groups that contain axes .equation ( [ eq:3dcn ] ) allows an unlimited number of joints , though not bars , along a 3-fold symmetry axis .thus , starting with an isostatic framework , joints may be added symmetrically along the 3-fold axes . to preserve the maxwell count ,each additional joint is accompanied by new bars .thus , for instance , we can ` cap ' every face of an icosahedron to give the compound icosahedron - plus - dodecahedron ( the second stellation of the icosahedron ) , as illustrated in figure [ fig : ico ] , and this process can be continued ad infinitum adding a pile of ` hats ' consisting of a new joint , linked to all three joints of an original icosahedral face ( figure [ fig : cap ] ) .similar constructions starting from cubic and trigonally symmetric isostatic frameworks can be envisaged .addition of a single ` hat ' to a triangle of a framework is one of the hennenberg moves ( tay & whiteley 1985 ) : changes that can be made to an isostatic framework without introducing extra mechanisms or states of self stress . + ( a ) ( b )for a framework with point - group symmetry the previous section has provided some necessary conditions for the realization to be isostatic .these conditions included some over - all counts on bars and joints , along with sub - counts on special classes of bars and joints ( bars on mirrors or perpendicular to mirrors , bars centered on the axis of rotation , joints on the centre of rotation etc . ) . here , assuming that the framework is realized with the joints in a configuration as generic as possible ( subject to the symmetry conditions ) , we investigate whether these conditions are sufficient to guarantee that the framework is isostatic .the simplest case is the identity group ( ) .for this basic situation , the key result is laman s theorem . in the following ,we take to define the connectivity of the framework , where is the set of joints and the set of bars , and we take to define the positions of all of the joints in 2d .[ laman ] ( laman , 1970 ) for a generic configuration in 2d , , the framework is isostatic if and only if satisfies the conditions : 1 . ; 2 . for any non - emptyset of bars , which contacts just the joints in , with and , .our goal is to extend these results to other symmetry groups . with the appropriate definition of ` generic ' for symmetry groups ( schulze 2008a ), we can anticipate that the necessary conditions identified in the previous sections for the corresponding group plus the laman condition identified in theorem [ laman ] , which considers subgraphs that are not necessarily symmetric , will be sufficient . 
for three of the plane symmetry groups ,this has been confirmed .we use the previous notation for the point groups and the identification of special bars and joints , and describe a configuration as ` generic with symmetry group ' if , apart from conditions imposed by symmetry , the points are in a generic position ( the constraints imposed by the local site symmetry may remove 0,1 or 2 of the two basic freedoms of the point ) .( schulze 2008b ) if is a plane configuration generic with symmetry group , and is a framework realized with these symmetries , then the following necessary conditions are also sufficient for to be isostatic : .1 in and for any non - empty set of bars , and 1 . for : ; 2 . for : , 3 . for : for the remaining groups, we have a conjecture .if is a plane configuration generic with symmetry group , and is a framework realized with these symmetries , then the following necessary conditions are also sufficient for to be isostatic : .1 in and for any non - empty set of bars , and 1 . for : and for each mirror 2 . for : and for each mirror . an immediate consequence of this theorem ( and the conjecture )is that there is ( would be ) a polynomial time algorithm to determine whether a given framework in generic position modulo the symmetry group is isostatic .although the laman condition of theorem [ laman ] involves an exponential number of subgraphs of , there are several algorithms that determine whether it holds in steps where is a constant . the pebble game ( hendrickson and jacobs , 1997 ) is an example .the additional conditions for being isostatic with the symmetry group trivially can be verified in constant time . in 3d ,there is no known counting characterization of generically isostatic frameworks , although we have the necessary conditions : and for all subgraphs with ( graver 2001 ) .there are , however a number of constructions for graphs which are known to be generically isostatic in 3d ( see , for example , tay and whiteley 1985 , whiteley 1991 ) .if we assume that we start with such a graph , then it is natural to ask whether the additional necessary conditions for a realization that is generic with point group symmetry to be isostatic are also sufficient .in contrast to the plane case , where we only needed to state these conditions once , for the entire graph , in 3d for all subgraphs of whose realizations are symmetric with a subgroup of , with the full count , we need to assert the conditions corresponding to the symmetry operations in as well .these conditions are clearly necessary , and for all reflections , half - turns , and -fold rotations in , they do not follow from the global conditions on the entire graph ( as they would in the plane ) .see schulze ( 2008c ) for details . + all of the above conditions combined , however , are still not sufficient for a -dimensional framework which is generic with point group symmetry to be isostatic , because even if satisfies all of these conditions , the symmetry imposed by may force parts of to be ` flattened ' so that a self - stress of is created . for more details on how ` flatness ' caused by symmetrygives rise to additional necessary conditions for -dimensional frameworks to be isostatic , we refer the reader to schulze , watson , and whiteley ( 2008 ) .altmann , s.l . and herzig , p. , 1994. point - group theory tables .clarendon press , oxford .+ atkins , p.w . , child , m.s .and phillips , c.s.g . ,1970 . tables for group theory .oxford university press , oxford .+ bishop , d.m . 
, 1973 .group theory and chemistry .clarendon press , oxford .+ calladine , c.r . ,buckminster fuller s ` tensegrity ' structures and clerk maxwell s rules for the construction of stiff frames. international journal of solids and structures 14 , 161172 .+ cauchy , a.l . , 1813 .recherche sur les polydres premier mmoire , journal de lecole polytechnique 9 , 6686 .+ ceulemans , a. and fowler , p.w . , 1991 .extension of euler s theorem to the symmetry properties of polyhedra .nature 353 , 5254 .+ dehn , m. , 1916 .ber die starreit konvexer polyeder , mathematische annalen 77 , 466473 .+ donev a. , torquato , s. , stillinger , f. h. and connelly , r. , 2004 .jamming in hard sphere and disk packings .journal of applied physics 95 , 989999 .+ fowler , p.w . and guest , s.d .a symmetry extension of maxwell s rule for rigidity of frames .international journal of solids and structures 37 , 17931804 .+ fowler , p.w . and guest , s.d . , 2002 .symmetry and states of self stress in triangulated toroidal frames .international journal of solids and structures 39 , 43854393 .+ fowler , p.w . andguest , s.d . , 2005 . a symmetry analysis of mechanisms in rotating rings of tetrahedra .proceedings of the royal society : mathematical , physical & engineering sciences .461(2058 ) , 1829 - 1846 .+ guest , s.d . and fowler , p.w .2005 . a symmetry - extended mobility rule . mechanism and machine theory .40 , 1002 - 1014 .+ graver , j.e , servatius , b. , and servatius , h. , 1993 .combinatorial rigidity .graduate studies in mathematics , ams , providence .+ graver , j.e , 2001 .counting on frameworks : mathematics to aid the design of rigid structures .the mathematical association of america , washington , dc .+ hendrickson , b. and jacobs , d. , 1997 .an algorithm for two - dimensional rigidity percolation : the pebble game .journal of computational physics , 137 , 346365 + james , g. and liebeck , m. , 2001 .representations and characters of groups , 2nd edition . cambridge university press .+ kangwai , r.d . and guest , s.d . ,. symmetry - adapted equilibrium matrices. international journal of solids and structures 37 , 15251548 .+ laman , g. , 1970 . on graphs and rigidity of plane skeletal structures .journal of engineering mathematics 4 , 331340 .+ maxwell , j.c ., 1864 . on the calculation of the equilibrium and stiffness of frames , philosophical magazine 27 , 294299 . also : collected papers , xxvi . cambridge university press , 1890 .+ pellegrino , s. and calladine , c.r . , 1986 .matrix analysis of statically and kinematically indeterminate structures. international journal of solids and structures 22 , 409428 .+ schulze , b. , 2008a .injective and non - injective realizations with symmetry .preprint , arxiv:0808.1761 + schulze , b. , 2008b .symmetrized laman s theorems , in preparation , york university , toronto , canada .+ schulze , b. , 2008c .combinatorial and geometric rigidity with symmetry constraints , phd thesis in preparation , york university , toronto , canada .+ schulze , b. , watson , a. , and whiteley , w. , 2008 .symmetry , flatness , and necessary conditions for independence , in preparation .+ tay , t - s . and whiteley , w. , 1985 .generating isostatic frameworks .structural topology 11 , 2069 .+ whiteley , w. , 1991 vertex splitting in isostatic frameworks , structural topology 16 , 2330 .+ whiteley , w. , 2005 .counting out the flexibility of proteins .physical biology 2 , 116126 .
|
maxwell s rule from 1864 gives a necessary condition for a framework to be isostatic in 2d or in 3d . given a framework with point group symmetry , group representation theory is exploited to provide further necessary conditions . this paper shows how , for an isostatic framework , these conditions imply very simply stated restrictions on the numbers of those structural components that are unshifted by the symmetry operations of the framework . in particular , it turns out that an isostatic framework in 2d can belong to one of only six point groups . some conjectures and initial results are presented that would give sufficient conditions ( in both 2d and 3d ) for a framework that is realized generically for a given symmetry group to be an isostatic framework .
|
quantum fault tolerance is a framework designed to allow accurate implementations of quantum algorithms despite inevitable errors . while the construction of this framework is important and necessary to demonstrate the possibility of successful quantum computation , the realization of a quantum computer which utilizes the complete edifice of quantum fault tolerance requires huge numbers of qubits and is a monumental task .thus , it is worthwhile to explore whether it is possible to achieve successful quantum computation without using the full toolbox of quantum fault tolerance techniques .initial studies along these lines have been done for logical zero encoding in the steane [ 7,1,3 ] quantum error correction code . in this paperwe extend these results by determining the accuracy with which an arbitrary state can be encoded into the steane code in a non - equiprobable pauli operator error environment . encoding an arbitrary state in the steane codecan not be done in a fault tolerant manner , and we explore whether the accuracy achieved for a non - fault tolerant encoding is sufficient for use in a realistic quantum computation . we then apply ( noisy ) single qubit clifford gates and error correction to this encoded state , again concentrating on the accuracy of the implementation .the first step in any fault tolerant implementation of quantum computation is to encode the necessary quantum information into logical states of a quantum error correction ( qec ) code . herewe make use of the [ 7,1,3 ] steane code which can completely protect one qubit of quantum information by encoding the information into seven physical qubits .encoding information into the steane code can be done via the gate sequence originally designed in . however, this method is not fault tolerant as an error on a given qubit may spread to other qubits .a fault tolerant method exists for encoding only the logical zero and one states .here we study the accuracy of the arbitrary state encoding in attempt to determine whether it could be useful for practical implementations of quantum computation .we assume a non - equiprobable error model in which qubits taking part in any gate are subject to a error with probability , a error with probability , and a error with probability .thus , an attempted implementation of a single qubit transformation on qubit described by density matrix would produce : where is the identity matrix , and . a two - qubit controlled - not ( cnot ) gate with control qubit and target qubit ( ) would cause the following evolution of an initial two - qubit density matrix : after noisy encoding of an arbitrary state , we apply ideal error correction as a means of determining whether the errors that occurred during encoding can , at least in principal , be corrected , thus making the encoded state useful for practical instantiations of quantum computation .in addition , we apply each of three logical clifford gates ( hadamard , not , phase ) to the noisy encoded state , and measure the accuracy of their implementation in the non - equiprobable error environment .we apply ideal error correction after each of these gates .finally , we apply noisy qec to the encoded states , to model the reliability with which data can be encoded and maintained in a realistic noisy environment . 
in this work ,we utilize three different measures of fidelity to evaluate accuracy .the first fidelity quantifies the accuracy of the seven - qubit state after the intended operations have been performed compared to the desired seven - qubit state .the second fidelity measures the accuracy of the logical qubit , the single qubit of quantum information stored in the qec code . to obtain this measure, we perfectly decode the seven - qubit state and partial trace over the six ( non - logical ) qubits .the final fidelity measure compares the state of the seven qubits after application of perfect error correction to the state that has undergone perfect implementation of the desired transformations .this fidelity reveals whether the errors that occur in a noisy encoding process can be corrected for use in fault tolerant quantum computation . ] , reflects the accuracy with which the general state encoding process is achieved and is given ( to second order ) in the appendix , eq .[ fidenc7 ] .this expression reveals that no first order error terms are dependent on , and the only first order term dependent on the state to be encoded is .this indicates a relatively small dependence on initial state in general . encoding with an initial state of or results in the highest fidelity , while encoding an intial state with results in a lower fidelity .we further note that there is no dominant error term , in that the magnitudes of the coefficients of the first order terms are similar .the seven - qubit fidelity quantifies the accuracy with which the seven physical qubits are in the desired state .however , certain errors may not impact the single logical qubit of quantum information stored in the qec code .we would like to determine the accuracy of that single logical qubit of information . to do this we perfectly decode the seven - qubit system , and partial trace over qubits 2 through 7 .we compare the resulting one - qubit state to the starting state of the logical qubit .this fidelity is then given by : , and is given ( to second order ) in eq .[ fidenc1 ] .we find that this fidelity is highly dependent on the initial state .all terms , including the first order terms , are dependent on .in addition , dependence on appears in all terms but and .as above , no single type of error dominates the loss of fidelity , as indicated by the similar coefficients of the first order probability terms .the initial states that result in the highest fidelity occur at and , while the state that results in the lowest fidelity occurs at .contour plots of the seven qubit and single qubit fidelities are shown in fig .[ generalfidelity ] . ] encoding procedure as a function of initial state parameterized by ( is set to 0 ) , and error probabilities , and ( is set to zero ). 
left : the seven qubit fidelity for the general encoding procedure .right : the single data qubit fidelity for the general encoding procedure .the contours for the seven qubit fidelity values are ( from top to bottom ) 0.85 , 0.90 , 0.95 , and 0.99 .the contours for the single qubit fidelities are ( from top to bottom ) 0.97 , 0.985 , and 0.995.,title="fig:",width=151 ]we now apply the one logical qubit clifford gates to the encoded arbitrary state using the non - equiprobable error model described above , and determine the accuracy of their implementation .css codes in general , and the steane code in particular , allow logical clifford gates to be implemented bitwise , so that only one ( physical ) qubit gates are required .the clifford gates applied in this study are the hadamard ( h ) , not ( x ) , and phase ( p ) gates : the logical hadamard gate is implemented by applying a single qubit hadamard gate on each of the seven qubits in the encoding . the logical not gate is implemented by applying a single qubit not gate to the first three qubits of the encoded state . the logical phase gate is implemented by applying an inverse phase gate to each of the seven qubits .each of these gates is applied to the noisily encoded arbitrary state of the last section in the non - equiprobable error environment such that a qubit acted upon by a gate evolves via eq .[ transformequation ] .we quantify the accuracy of the gate implementation by calculating the fidelities as a comparison of the state after application of the clifford gate with the state of a noisily encoded arbitrary state that undergoes perfect application of the clifford transformation : ] , where represents , , or .plots of these fidelities are given in fig .[ cliffordfidelity ] , and the expressions of these fidelities are given in the appendix , eqs .[ had7qubitfidelity ] - [ phase1qubitfidelity ] . ] code as a function of the initial state paramaterized by ( is set to 0 ) , and error probabilities , and ( is set to zero ) . left : seven qubit fidelities , right : single logical qubit fidelities for the hadamard , not , and phase gates ( top to bottom ) .the contours for the seven qubit fidelity values are ( from top to bottom ) 0.85 , 0.90 , 0.95 , and 0.99 .the contours for the single qubit fidelities are ( from top to bottom ) 0.97 , 0.985 , and 0.995 .note that the contour plots of the seven qubit fidelities for the hadamard and phase gates ( top - left and bottom - left ) are nearly indistinguishable.,title="fig:",width=151 ] ] code as a function of the initial state paramaterized by ( is set to 0 ) , and error probabilities , and ( is set to zero ) .left : seven qubit fidelities , right : single logical qubit fidelities for the hadamard , not , and phase gates ( top to bottom ) .the contours for the seven qubit fidelity values are ( from top to bottom ) 0.85 , 0.90 , 0.95 , and 0.99 .the contours for the single qubit fidelities are ( from top to bottom ) 0.97 , 0.985 , and 0.995 .note that the contour plots of the seven qubit fidelities for the hadamard and phase gates ( top - left and bottom - left ) are nearly indistinguishable.,title="fig:",width=151 ] ] code as a function of the initial state paramaterized by ( is set to 0 ) , and error probabilities , and ( is set to zero ) . 
the encoded state (seven qubit) fidelities for the three clifford gates follow several general trends. the fidelities of the hadamard and phase gates, eqs. [had7qubitfidelity] and [phase7qubitfidelity], are nearly identical and slightly lower than the fidelity of the not gate, eq. [not7qubitfidelity]. this is likely because the logical hadamard and phase gates require the same number of physical gates, which is more than the number needed to implement the logical not gate. in addition, all three fidelities have a similar dependence on the initial state as parameterized by and (though the dependence on is small). while the coefficient of the first order term is larger than the coefficients of the first order and terms for all three fidelities, the difference is only slight: no specific error dominates the loss in fidelity. different trends are apparent in the fidelities of the single logical qubit. first, the fidelity after the hadamard gate, eq. [had1qubitfidelity], is lower than the fidelities after the not and phase gates, eqs. [not1qubitfidelity] and [phase1qubitfidelity]. furthermore, all three fidelities exhibit a relatively large dependence on the initial state, in that appears in every first order term, and appears in nearly all first order terms. the fidelities after application of the not gate and the hadamard gate exhibit similar magnitude dependences on , while the phase fidelity changes more significantly as the initial state varies. for all gates, the most stable initial states are , in that a higher fidelity occurs at the same , , and values when compared with the fidelities of other initial states. we note as well that the single data qubit fidelities of the not gate and phase gate become independent of errors for these values of . in both fidelity measures there are first order terms which would ideally be suppressed to second order through the application of quantum error correction, which is the purpose of encoding a state via the [7,1,3] code.

applying these three syndrome measurements will reveal the presence of bit flips. subsequently applying hadamard gates to the data qubits and applying the same three syndrome measurements will reveal the presence of phase flips. box: a useful equality that allows us to simplify the error correction procedure; we can avoid implementing hadamard gates by reversing the control and target qubits of the cnot gates.

the fidelity resulting from implementing perfect error correction on the encoded arbitrary state is given by eq. . while perfect error correction on the encoded arbitrary state does improve the fidelity, there is a remaining first order term (the errors associated with the first order and terms have been suppressed by perfect error correction). this would suggest that the gate sequence encoding scheme is not appropriate for practical quantum computation. however, for certain initial states the first order error term drops out, and thus this process can be used to create logical and states. we observe similar trends when perfect qec is performed on the states that have undergone clifford transformations, eqs. [pqechad], [pqecnot], and [pqecphase]. in all three cases, the error probabilities in the fidelities are suppressed to second order in and , while is suppressed to second order only for the initial states and . this implies that this encoding process and the application of logical one-qubit clifford gates can be used for practical quantum computation only for the initial states and . real error correction will be noisy. thus, we apply a fault tolerant error correction scheme in the non-equiprobable error environment to the arbitrary encoded state to determine what might occur in a more realistic quantum computation. in adhering to the rules of fault tolerance, we utilize shor states as syndrome qubits (fig. [fullqec]). the shor states themselves are prepared in a noisy environment such that the hadamard evolution is properly described by eq. [transformequation], and the evolution of the cnot gates is described by eq. . we verify the shor state by performing two parity checks (also done in the non-equiprobable error environment); if the state is not suitable, it is thrown away and a new state is prepared. after applying the necessary cnots between the data and ancilla qubits, we measure the ancilla qubits, the parity of which determines the syndrome value. the phase flip syndrome measurements are performed in a similar manner (fig. [fullqec]). we analyze only the case where all four ancilla qubits are measured as zero for each of the syndrome measurements. we apply each syndrome check twice to confirm the correct parity measurement, as errors may occur while implementing the syndrome measurements themselves.
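to make the role of the three bit-flip syndrome measurements concrete, here is a small classical sketch (ours) using one standard ordering of the parity-check matrix of the [7,4] hamming code on which the steane code is built; the qubit and check ordering in the paper's circuits may differ, so treat the specific matrix as an assumption. the same checks, applied after hadamard gates on the data qubits, locate phase flips, which is the structure described in the figure above.

```python
import numpy as np

# one standard parity-check matrix of the classical [7,4] hamming code
H_CHECK = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])

def bit_flip_syndrome(bits):
    """three parity checks on a 7-bit string; a nonzero syndrome flags a bit flip."""
    return ((H_CHECK @ np.asarray(bits)) % 2).tolist()

# example: flip the fifth bit of the all-zero codeword
word = np.zeros(7, dtype=int)
word[4] ^= 1
print(bit_flip_syndrome(word))   # [1, 0, 1]: read as binary, this points to position 5
```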
we apply hadamard rotations to the ancilla bits, cancelling the hadamards at the end of the shor state construction, and flip the direction of the cnots. measuring in the -basis allows us to eliminate the final hadamards on the ancilla qubits.

left: the seven qubit fidelity with noisy error correction for a state encoded via the [7,1,3] code, as a function of the initial state parameterized by ( is set to 0), , and ( is set to zero). the contours for the fidelity values are (from top to bottom) 0.85, 0.90, 0.95, and 0.99. right: the shaded region indicates where in probability space the seven qubit fidelity of a noisily encoded arbitrary initial state increases after applying qec. we find that the and of the initial state do not significantly affect this region; in this plot, and are both set to .

if error correction is to perform as desired and maintain an encoded state with high reliability, the fidelity of the state after error-prone qec should be higher than the fidelity of the noisily encoded state with no error correction applied. the fidelity should furthermore be greater than the fidelity expected for unencoded states, which would indicate that going through the [7,1,3] encoding process and qec procedure maintains information more reliably than applying no encoding at all. the fidelity for the (noisily) error corrected arbitrary encoded state is given in eq. [qec]. we find that the fidelity after error-prone error correction is quite comparable to the fidelity of the encoded state prior to error correction. this fidelity still includes first order terms in , , and . there is little dependence on the initial state, as only appears in the term, though the initial states with and result in slightly higher fidelities than other initial states. we note that the term is now dominant. furthermore, while the term in the fidelity of the error-prone qec state is higher than the corresponding term in the fidelity of the pre-qec encoded state, eq. [fidenc7], the and terms of the qec state are both significantly lower than the corresponding terms of the encoded state fidelity. thus, in cases where is lower than and , the noisy error correction scheme does indeed improve the fidelity. however, when is high, the noisy error correction can significantly lower the fidelity of the encoded state.
fig .[ figqec ] displays a contour plot of the seven qubit fidelity with noisy error correction , and a plot showing the region in probability - space for which the post - qec states attain a higher fidelity than the noisily encoded states ( prior to error correction ) .the fidelity of the single logical qubit for the qec code is given in eq .we find that the single logical qubit fidelity after error - prone qec is comparable to the fidelity of the single logical qubit before qec was applied .this fidelity contains first order terms in , , and .furthermore , the fidelity is highly dependent on initial state , as appears in all first order terms .the initial states and result in higher fidelities than other initial states .we note that in this measure of fidelity , the error is dominant as well .when noisy error correction is implemented a second time on the system , the fidelities remain nearly the same , as seen in eqs .[ qec2x ] and [ qec2x1 ] .the seven - qubit fidelity of the state after applying noisy error correction to a noisy arbitrary encoded state that has undergone a logical not gate is given by eq .[ qecnot ] .this fidelity features similar characteristics to the fidelity when noisy error correction is applied to the general encoded state , eq .it contains first order terms , with the term again being dominant .the single logical qubit fidelity of this state is given by eq .[ qecnot1 ] , and bears strong resemblance to the post qec single qubit fidelity given by eq .these results indicate that it may not necessary or beneficial to perform error correction after every step in a quantum procedure , but rather that one should apply qec only at specific intervals or after specific gate sequences .in conclusion , we have explicitly evaluated the accuracy with which an arbitrary state can be encoded into the steane ] code and has then undergone error - prone quantum error correction .[ qec1 ] represents the fidelity of the single logical qubit of this state : ) p_z+\left(\frac{3111}{2}-\frac{3}{2 } \text{cos}[4 \alpha ] + 3 \text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x^2\nonumber\\ & + & \left(226 + 2 \text{cos}[4 \alpha ]-2 \text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x p_y-\left(\frac{1079}{4}+\frac{1}{4 } \text{cos}[4 \alpha ] -\frac{1}{2 } \text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_y^2\nonumber\\ & + & ( 330 - 103 \text{cos}[4 \alpha ] ) p_x p_z-\left(\frac{1293}{2}+\frac{21}{2 } \text{cos}[4 \alpha ] \right ) p_y p_z-\left(\frac{297}{2}+\frac{5}{2 } \text{cos}[4 \alpha ] \right ) p_z^2 \label{qec}\end{aligned}\ ] ] -\frac{19}{2}\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_x-\left(\frac{13}{4}-\frac{1}{4 } \text{cos}[4 \alpha ] -\frac{1}{2}\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_y \nonumber\\ & -&\left(\frac{7}{2}-\frac{7}{2 } \text{cos}[4 \alpha ] \right ) p_z + ( 210 + 70 \text{cos}[4 \alpha ] -140 \text{cos}[2\beta]\text{sin}[2\alpha]^{2 } ) p_x^2\nonumber\\ & + & \left(\frac{169}{4}+\frac{43}{4 } \text{cos}[4 \alpha ] -\frac{109}{2}\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_x p_y-\left(\frac{175}{2}+\frac{37}{2 } \text{cos}[4 \alpha ] -34\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_y^2\nonumber\\ & + & \left(\frac{51}{4}-\frac{271}{4 } \text{cos}[4 \alpha ] -\frac{209}{2}\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_x p_z-\left(\frac{829}{4}+\frac{127}{4 } \text{cos}[4 \alpha ] -\frac{137}{2}\text{cos}[2\beta]\text{sin}[2\alpha]^{2}\right ) p_y p_z\nonumber\\ & -&(42 - 42 \text{cos}[4 \alpha ] ) 
p_z^2 . \label{qec1}\end{aligned}\ ] ] eq .[ qec2x ] represents the fidelity of an arbitrary state that has been encoded via the error - prone ] code , and has then undergone noisy application of the logical single qubit not gate , followed by error - prone quantum error correction .[ qecnot1 ] represents the fidelity of the single logical qubit of this procedure : ) p_z+\left(\frac{6193}{4}-\frac{9}{4 } \text{cos}[4 \alpha ] + \frac{9}{2 } \text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x^2\nonumber\\ & + & \left(\frac{451}{2}+\frac{5}{2 } \text{cos}[4 \alpha ] -2 \text{cos}[2 \beta ]\text{sin}[2 \alpha ] ^2\right ) p_x p_y-\left(\frac{1079}{4}+\frac{1}{4 } \text{cos}[4 \alpha ] -\frac{1}{2 } \text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_y^2\nonumber\\ & + & ( 330 - 103 \text{cos}[4 \alpha ] ) p_x p_z-\left(\frac{1309}{2}+\frac{21}{2 } \text{cos}[4 \alpha ] \right ) p_y p_z-\left(\frac{311}{2}+\frac{1}{2 } \text{cos}[4 \alpha ] \right ) p_z^2 \label{qecnot}\end{aligned}\ ] ] -\frac{19}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x-\left(\frac{13}{4}-\frac{1}{4 } \text{cos}[4 \alpha ] -\frac{1}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_y\nonumber\\ & -&\left(\frac{7}{2}-\frac{7}{2 } \text{cos}[4 \alpha ] \right ) p_z+\left(\frac{819}{4}+\frac{273}{4 } \text{cos}[4 \alpha ] -\frac{273}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x^2\nonumber\\ & + & \left(\frac{167}{4}+\frac{45}{4 } \text{cos}[4 \alpha ] -\frac{109}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x p_y-\left(\frac{175}{2}+\frac{37}{2 } \text{cos}[4 \alpha ] -34\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_y^2\nonumber\\ & + & \left(\frac{51}{4}-\frac{271}{4 } \text{cos}[4 \alpha ] -\frac{209}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_x p_z-\left(\frac{847}{4}+\frac{125}{4 } \text{cos}[4 \alpha ] -\frac{139}{2}\text{cos}[2 \beta ] \text{sin}[2 \alpha ] ^2\right ) p_y p_z\nonumber\\ & -&(46 - 46 \text{cos}[4 \alpha ] ) p_z^2 . \label{qecnot1}\end{aligned}\ ] ] j. preskill , proc .a * 454 * , 385 ( 1998 ) .shor , _ proceedings of the the 35th annual symposium on fundamentals of computer science _ , ( ieee press , los alamitos , ca , 1996 ) .d. gottesman , phys .a * 57 * , 127 ( 1998 ) .p. aleferis , d. gottesman , and j. preskill , quant .* 6 * , 97 ( 2006 ) .weinstein , phys .a * 84 * , 012323 ( 2011 ) .m nielsen and i. chuang , _ quantum information and computation _( cambridge university press , cambridge , 2000 ) . p.w .shor , phys .a * 52 * , r2493 ( 1995 ) .a.r . calderbank and p.w .shor , phys .a * 54 * , 1098 ( 1996 ) ; a.m. steane , phys .lett . * 77 * , 793 ( 1996 ) .a. steane , proc .a * 452 * , 2551 ( 1996 ) . v. aggarwal , a.r .calderbank , g. gilbert , y.s .weinstein , quant .* 9 * , 541 ( 2010 ) .p. aliferis and j. preskill , phys .a * 78 * , 052331 ( 2008 ) .
|
we calculate the fidelity with which an arbitrary state can be encoded into a [7,1,3] css quantum error correction code in a non-equiprobable pauli operator error environment, with the goal of determining whether this encoding can be used for practical implementations of quantum computation. this determination is accomplished by applying ideal error correction to the encoded state, which demonstrates the correctability of errors that occurred during the encoding process. we then apply single-qubit logical clifford gates to the encoded state and determine the accuracy with which these gates can be applied. finally, fault tolerant noisy error correction is applied to the encoded states in the non-equiprobable pauli operator error environment, allowing us to compare noisy (realistic) and perfect error correction implementations. we note that noisy error correction maintains the fidelity of the encoded state for certain error-probability values. these results have implications for when non-fault tolerant procedures may be used in practical quantum computation and whether quantum error correction should be applied at every step in a quantum protocol.
|
molecular communication is a promising approach to realise communications among nano - scale devices .there are many possible applications with these networks of nano - devices , for example , in - body sensor networks for health monitoring and therapy .this paper considers diffusion - based molecular communication networks . in a diffusion - based molecular communication network ,transmitters and receivers communicate by using signalling molecules or ligands .the transmitter uses different time - varying functions of concentration of signalling molecules ( or emission patterns ) to represent different transmission symbols .the signalling molecules diffuse freely in the medium . when signalling molecules reach the receiver , they react with chemical species in the receiver to produce output molecules .the counts of output molecules over time is the receiver output signal which the receiver uses to decode the transmitted symbols .two components in diffusion - based molecular communication system are modulation and demodulation .a number of different modulation schemes have been considered in the literature .for example , consider concentration shift keying ( csk ) where different concentrations of signalling molecules are used by the transmitter to represent different transmission symbols .other modulation techniques that have been proposed include molecule shift keying ( msk ) , pulse position modulation ( ppm ) , amplitude shift keying ( ask) , frequency shift keying ( fsk ) , and token communication .this paper assumes that the transmitter uses different chemical reactions to generate the emission patterns of different transmission symbols .the motivation to use this type of modulation mechanism is that chemical reactions are a natural way to produce signalling molecules , e.g. the papers study a number of molecular circuits ( which are sets of chemical reactions ) that can produce oscillating signals , and the paper discusses a number of signalling mechanisms in living cells .we assume the receiver consists of receptors .when the signalling molecules ( ligands ) reach the receiver , they can react with the receptors to form ligand - receptor complexes ( which are the output molecules in this paper ) .we consider the problem of using the continuous - time history of the number of complexes for demodulation assuming that the transmitter and receiver are synchronised .the ligand - receptor complex signal is a stochastic process with three sources of noise because the chemical reactions at the transmitter , the diffusion of signalling molecules and the ligand - receptor binding process are all stochastic .we derive a continuous - time markov process ( ctmp ) which models the chemical reactions at the transmitter , the diffusion in the medium and the ligand - receptor binding process . 
by using this model and the theory of bayesian filtering , we derive the maximum a posteriori ( map ) demodulator using the continuous - time history of the number of complexes as the input .this paper makes two key contributions : ( 1 ) we propose to use a ctmp to model a molecular communication network with chemical reactions at the transmitter , a diffusive propagation propagation medium and receptors at the receiver .the ctmp captures all three sources of noise in the communication network .( 2 ) we derive a closed - form expression for the map demodulation filter using the proposed ctmp .the closed - form expression gives insight into the important elements needed for optimal demodulation , these are the timings at which the receptor bindings occur , the number of unbound receptors and the mean concentration of signalling molecules around the receptors .the rest of the paper is organised as follows .section [ sec : related ] discusses related work .section [ sec : model ] presents the system assumptions , as well as a mathematical model from the transmitter to the ligand - receptor complex signal based on ctmp .we derive the map demodulator in section [ sec : map ] and illustrate its numerical properties in section [ sec : eval ] . finally , section [ sec : con ] concludes the paper .there is a growing interest to understand molecular communication from the communication engineering point of view . for recent surveys of the field ,see .we divide the discussion under these headings : transmitters , receivers , models and others . *transmitters . *a number of different types of transmission signals have been considered in the molecular communication literature .the papers assume that the transmitter releases the signalling molecules in a burst which can be modelled as either an impulse or a pulse with a finite duration .a recent work in assumes that the transmitter releases the molecules according to a poisson process . in this paper, we instead assume that the transmitter uses different sets of chemical reactions to generate different transmission symbols and we use ctmp to model these transmission symbols .since a poisson process can also be modelled by a ctmp , the transmission process in this paper is more general than that of .our ctmp model can also deal with an impulsive input by using an appropriate initial condition for the ctmp .the use of ctmp as an end - to - end model which includes the transmitter , the medium and the receiver does not appear to have been used before .* receivers .* demodulation methods for diffusion - based molecular communication have been studied in .both papers also use the map framework with discrete - time samples of the number of output molecules as the input to the demodulator . instead , in this paper , we consider demodulation using continuous - time history of the number of complexes .the demodulation from ligand - receptor signal has also been considered in .the key difference is that uses a linear approximation of the ligand - receptor process while we use a non - linear reaction rate .the capacity of molecular communications based on ligand - receptor binding has been studied in assuming discrete samples of the number of complexes are available . a recent work considers the capacity of such systems in the continuous - time limit . 
instead of focusing on the capacity ,our work focuses on demodulation .receiver design is an important topic in molecular communication and has been studied in many papers , some examples are .these papers either use one sample or a number of discrete samples on the count of a specific molecule to compute the likelihood of observing a certain input symbols .this paper takes a different approach and uses continuous - time signals .another approach of receiver design for molecular communication is to derive molecular circuits that can be used for decoding .an attempt is made in to design a molecular circuit that can decode frequency - modulated signals .however , the work does not take diffusion and reaction noise into consideration .a recent work in analyses end - to - end molecular communication biological circuits from linear time - invariant system point of view .the work in compares the information theoretic capacity of a number of different types of linear molecular circuits .this paper differs from the previous work in that it uses a non - linear ligand - receptor binding model .the noise property of ligand - receptor for molecular communication has been characterised in .the case for non - linear ligand - receptor binding does not appear to have an analytical solution and derives an approximate characterisation using a linear reaction rate assuming that the number of signalling molecules around the receptor is large .this paper uses a non - linear ligand - receptor binding model and no approximation is used in solving the filtering problem .* models . *this paper uses the reaction diffusion master equation ( rdme ) framework to model the reactions and diffusion in the molecular communication networks .rdme assumes that time is continuous while the diffusion medium is discretised into voxels .this results in a ctmp with finite number of ( discrete ) states .rdme has been used to model stochastic dynamics of cells in the biology literature .an attraction of rdme is that it has the markov property which means that one can leverage the rich theory behind markov process .the author of this paper has previously used an extension of the rdme model , called the rdme with exogenous input ( rdmex ) model , to study molecular communication networks in .the rdmex assumes that the times at which the transmitter emits signalling molecules are deterministic .this results in a stochastic process which is piecewise markov or the markov property only holds in between two consecutive emissions by the transmitter . in this paper, we assume the transmitter uses chemical reactions to generate the signalling molecules .therefore , the emission timings are not deterministic but are governed by a stochastic process . in this paper , we assume that the propagation medium is discretised in the voxels . an alternative modelling paradigm that has been used in a number of molecular communication network papers is that the transmitter or receiver has a non - zero spatial dimension ( commonly modelled by a sphere ) while the propagation medium is assumed to be continuous .( note that though does not explicitly state the dimension of the receiver , one can infer from the fact that the receiver must have a non - zero dimension because it has a non - zero probability of receiving the signalling molecules . 
)we believe the technique in this paper can be adapted to this alternative modelling paradigm and we do not expect this alternative modelling paradigm will change the results in this paper ; we will explain this in section [ sec : map : css ] .there is a rich literature in the modelling of biological systems discussing the difference between : ( 1 ) the particle approach which has a continuous state space because the state of a particle is its position ; and ( 2 ) the mesoscopic approach ( the approach in this paper ) which discretises the medium into discrete voxels and consider the number of molecules in the voxels as the state .the first approach is more accurate but the computation burden can be high , while the second approach is accurate for appropriate discretisation .there are also hybrid approaches too .an overview of various modelling and simulation approaches can be found in .* others : * the results of this paper may also be of interest to biologists who are interested to understand how living cells can distinguish between different concentration levels .the result of this paper can be viewed as a generalisation of which studies how cells can distinguish between two constant levels of ligand concentration .this paper considers diffusion - based molecular communication with one transmitter and one receiver in a fluid medium .figure [ fig : overall ] gives an overview of the setup considered in this paper .the transmitter uses different chemical reactions to generate the emission patterns of different transmission symbols .the transmitter acts as the source and emitter of signalling molecules .the signalling molecules diffuses in the fluid medium .the front - end of the receiver consists of a ligand - receptor binding process and the back - end consists of the demodulator with the number of complexes as its input . in this section ,we first describe the system assumptions in section [ sec : model_basic ] .we then present , in section [ sec : e2e : g ] , an end - to - end model which includes the transmitter , the transmission medium and the ligand - receptor binding process in the receiver , see the dashed box in figure [ fig : overall ] .the end - to - end model is a ctmp which includes chemical reactions in the transmitter , diffusion in the medium and the ligand - receptor binding process in the receiver .we assume that the medium ( or space ) is discretised into voxels while time is continuous .this modelling framework results in a rdme , which is a ctmp commonly used to model systems with both diffusion and reactions .in addition , we assume the communication uses only one type of signalling molecule ( or ligand ) denoted by .we divide the description of our model into three parts : transmission medium , transmitter and receiver .we begin with the transmission medium .table [ tab : notation ] summaries the frequently used notation and chemical symbols . & dimension of one side of a voxel + & diffusion constant + & diffusion rate between neighbouring voxels + & total number of voxels + & total number of receptors + & reaction rate constant for the binding reaction + & + & reaction rate constant for the unbinding reaction + & a transmission symbol + & number of complexes at time + & number of signalling molecules in voxel at time + & equation . 
a vector containing the number of signalling molecules in each voxel , the counts of intermediate chemical species in the transmitter and the cumulative count of the number of molecules that have left the system + & the mean number of signalling molecules in the receiver voxel at time if the transmitter sends symbol + & the cumulative number of times the receptors have switched from the unbound to bound state at time + & an unbound receptor + & a signalling molecule + & a complex + we model the transmission medium as a three dimensional ( 3-d ) space and partition the space into cubic _voxels _ of volume .figure [ fig : model ] shows an example of a medium which has a dimension of 4 voxels along both the and -directions , and 1 voxel in the -direction .( note that figure [ fig : model ] should be viewed as a projection onto the plane . ) in general , we assume the medium to have , and voxels in the , and directions where and are positive integers . in figure [fig : model ] , and . we also use to denote the total number of voxels .we refer to a voxel by a triple where , and are integers or by a single index ] . following on from the above example, one can realise amplitude shift keying ( ask ) in molecular communication by using different chemical reactions that can produce signalling molecules at different mean rates .for example , if there are four different reactions that can produce signalling molecules at four different mean rates of , , and , then one can use these four different reactions to produce 4 different symbols .note that it is possible for the four chemical reactions to produce the same emission pattern ( or realisation ) , though with different probabilities .a standard result in physical chemistry shows that the dynamics of a set of chemical reactions can be modelled by a ctmp .therefore , we will model the transmitter by a ctmp .note that , in this paper , we will not specify the sets of chemical reactions used by the transmitter except for simulation because the map demodulator does not explicitly depend on the sets of chemical reactions that the transmitter uses .we assume the receiver occupies one voxel and we use to denote the index of the voxel at which the receiver is located . in figure[ fig : model ] , we assume the receiver is at voxel 7 ( light grey ) and hence for this example .in addition , we assume that the transmitter and receiver voxels are distinct .we assume that the receiver has non - interacting receptors and we use as the chemical name for an unbound receptor .these receptors are fixed in space and do not diffuse , and they are only found in the receiver voxel .furthermore , these receptors are assumed to be uniformly distributed in the receiver voxel .the receptor can bind to a signalling molecule to form a ligand - receptor complex ( or complex for short ) , which is a molecule formed by combining and .this is known as ligand - receptor binding in molecular biology literature .the binding reaction can be written as the chemical equation : c } \label{cr : on } \end{aligned}\ ] ] where is the reaction rate constant .since the receptors are only found in the receiver voxel , the binding reaction occurs in a volume of , which is the volume of a voxel .the rate at which the complexes is formed is given by the product of , the number of signalling molecules in the receiver voxel and the number of unbound receptors this footnote explains how comes about .consider a chemical reaction where reactants s and e react to form product c. 
we assume the reactions are taking place within a volume of .let , and be , respectively , the concentration of s , e and c in the volume .the law of mass action says that . in the case of the ctmp or rdme in this paper, we want to keep track of the number of molecules in a volume ( the voxel ) instead .let , and be , respectively , the number of s , e and c molecules in the volume . since concentration and molecule counts are related by etc , we will , in a mathematically loose way , write . since is a discrete quantity ,the derivative is not defined but we can interpret it as the production rate of molecules .this explains how to convert the law of mass action , which is in terms of concentration , to the rate law used in rdme which is in terms molecular counts .this conversion is also discussed in . ] .we define and will use in the ctmp .note that this is equivalent to ligand - receptor binding model used in ( * ? ? ?* section v - b ) .a ligand - receptor complex can dissociate into an unbound receptor and a signalling molecule .this can be represented by the chemical equation e + s } \label{cr : off } \end{aligned}\ ] ] where is the reaction rate constant .the rate at which the complexes are dissociating is given by the product of and the number of complexes where is the concentration of the complexes .we can use the same argument in footnote [ fn : bind ] to show that the dissociation rate of is where is the number of complexes . in particular , note that no scaling by volume of the voxel is required . ] . since a receptor can either be in an unbound state or in a complex , we have the following conservation relation : the number of unbound receptors plus the number of complexes is equal to the total number of receptors . in order to derive the map demodulator , we need an end - to - end model which includes the transmitter , the medium and the ligand - binding process , see figure [ fig : overall ] . since chemical reactions ( which includes the chemical reactions in the transmitter as well as the ligand - receptor binding process in the receiver ) and diffusion can be modelled by ctmp , it is possible to use a ctmp as an end - to - end model . in this sectionwe present a general end - to - end model that includes the transmitter , diffusion and the ligand - receptor process in the receiver .an excellent tutorial introduction to the modelling of chemical reactions and diffusion by using ctmp can be found in .we have also included an example in appendix [ app : e2e_ex ] .the aim of the end - to - end model is to determine the properties of the receiver signal from the transmitter signal .the receiver signal in our case is the number of complexes over time .since the transmitter uses symbols , the transmitter signal is generated by one of the sets of chemical reactions .this means that we need end - to - end models with a model for each of the symbols or sets of chemical reactions .the principle behind building these models is identical so without loss of generality , we will assume that the model here is for symbol 0 .we begin with a few definitions .let ( where ) be the number of signalling molecules in voxel at time .in particular , since we have defined to be the index of the receiver voxel , is the number of signalling molecules in the receiver voxel .we assume the transmitter is a set of chemical reactions which uses intermediate chemical species , , ... 
and and these intermediate species remain in the transmitter voxel .let be the number of chemical species in the transmitter voxel at time .molecules may also be degraded or leave the system forever if absorbing boundary condition is used .we use to denote the cumulative number of molecules that have left the system . notethat since and are molecular counts , they must belong to the set of non - negative integers .we define the vector to be : ^t \label{eqn : state}\end{aligned}\ ] ] where denotes matrix transpose .let denote the number of complexes or bound receptors at time and } ] . since there are receptors , the number of unbound receptor is .the state of the end - to - end model is the tuple .we will now specify the transition probabilities from state to state .state transitions can be caused by any one of these events : a chemical reaction in the transmitter , the diffusion of a signalling molecule from a voxel to neighbouring voxel , and the binding or unbinding of a receptor in the receiver .we know from the theory of ctmp that the probability of two events taking place in an infinitesimal duration of is of the order of . intuitively , this means only one event can occur within .we can divide the transition probabilities from to into 2 groups depending on whether the number of complexes has changed or not in the time interval .if the number of complexes has changed from time to , i.e. , this means either a binding reaction or a unbinding reaction has occurred .if a binding reaction has occurred , then the number of signalling molecules in the receiver voxel is decreased by 1 and the number of complexes is increased by 1 .this reaction occurs at a mean rate of .we use to denote the standard basis vector with a ` 1 ' at the -th position .we can write the state transition probability of the receptor binding reaction as : = \lambda \ ; n_r(t ) \ ; ( m - b(t ) ) \ ; \delta t \label{eqn : tp : r1}\end{aligned}\ ] ] \nonumber \\ & = \lambda \ ; n_r(t ) \ ; ( m - b(t ) ) \ ; \delta t \label{eqn : tp : r1}\end{aligned}\ ] ] recalling that is the index of the receiver voxel and is the -th element of in , the expression is equivalent to , which means the number of signalling molecules in the receiver voxel has decreased by 1 .similarly , the expression says the number of complexes has increased by 1 .the right - hand side ( rhs ) of is the transition probability and is given by the product of mean reaction rate and .similarly , the transition probability of the unbinding reaction is given by : = \mu \ ; b(t ) \ ; \delta t \label{eqn : tp : r2 } \end{aligned}\ ] ] \nonumber \\ & = \mu \ ; b(t ) \ ; \delta t \label{eqn : tp : r2 } \end{aligned}\ ] ] where rhs of is the transition probability .we now specify the second group of transition probabilities with .these transitions are caused by either a reaction in the transmitter or diffusion of signalling molecules between neighbouring voxels .let be two valid vectors ; let also } ] .( this means the number of observations is infinite because we are considering the continuous - time signal in a non - zero time interval ] .( note that the is not a set notation . 
here is a realisation of the number of complexes in ] denote the posteriori probability that symbol has been sent given the history .if the demodulation decision is to be done at time , then the demodulator decides that symbol has been sent if \end{aligned}\ ] ] instead of working with ], one can consider as the concatenation of and the section of in the time interval ] ; we therefore abuse the notation and use to denote the section of in the time interval ] is the probability that there are complexes given that the transmitter has sent the symbol and the previous history .the last term on the rhs of , i.e. ] .the problem of determining the probability ] but the derivation is long , especially because of the diffusion terms ; the derivation can be found in appendix [ app : proofa ] .the result is \nonumber \\ = & \delta_{b(t+\delta t ) ,b(t ) + 1 } \ ;\lambda ( m - b(t ) ) \ ; \delta t \ ; \mathbf{e } [ n_r(t ) | s , { \cal b}(t ) ] + \delta_{b(t+\delta t ) , b(t ) - 1 } \ ; \mu b(t ) \ ; \delta t \ ; + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) } \ ; ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) \label{eqn : predictb}\end{aligned}\ ] ] \nonumber \\ = & \delta_{b(t+\delta t ) , b(t ) + 1 } \lambda ( m - b(t ) ) \ ; \delta t \ ; \mathbf{e } [ n_r(t ) | s , { \cal b}(t ) ] + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) - 1 } \mu b(t ) \ ; \delta t \ ; + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) } \times \nonumber \\ & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) \label{eqn : predictb}\end{aligned}\ ] ] note that only one of the three terms on the rhs of equation is non - zero depending on whether the observed is one more , one less or equal to that of .the term ] . by substituting equation into equation and let go to zero , we show in appendix [ app : proofb ] that ) - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] + \tilde{l}(t ) \label{eqn : logpp_dd } \end{aligned}\ ] ] ) - \nonumber \\ & \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] + \tilde{l}(t ) \label{eqn : logpp_dd } \end{aligned}\ ] ] with initialised to the logarithm of the prior probability that symbol is sent .equation is the _ optimal _ demodulation filter .the term is the cumulative number of times that the receptors have turned from the unbound to bound state .the meaning of is illustrated in figure [ fig : ut ] assuming there are two receptors .the top two pictures in figure [ fig : ut ] show the state transitions for the two receptors .the third picture shows the function which is increased by one every time a receptor switches from the unbound to bound state .the bottom picture shows which is the derivative of .note that consists of a train of impulses ( or dirac deltas ) where the timings of the impulses are the times at which a receptor binding event occurs . loosely speaking, one may also view as .the function , which is the last term on the rhs of , contains all the terms that are independent of symbol .since does not appear on the rhs of , this means that adds the same contribution to all for all .we can therefore ignore for the purpose of demodulation .the term ] and the receiver uses for demodulation .we can view as internal models that the demodulator uses .the use of internal models is fairly common in signal processing and communication , e.g. a matched filter correlates the measured data with an expected response . 
after making the modifications described in the last two paragraphs, we are now ready to describe the demodulator . using as the input, the demodulator runs the following continuous - time filters in parallel : where is initialised to the logarithm of the prior probability that the transmitter sends symbol .if the demodulator makes the decision at time , then the demodulator decides that symbol has been transmitted if the demodulator structure is illustrated in figure [ fig : demod ] . by comparing equations and, it can be shown that for any two symbols and .an interpretation of the demodulation filter output is that is proportional to the posteriori probability ] in equation by means that the demodulation filter is _ sub - optimal_. if ] and is .the difficulty in answering this problem is due to the fact that ligand - receptor binding is a nonlinear process . in appendix [ app : c ] ,we motivate the closeness between ], we have proposed to use internal models .an open research problem is to study sub - optimal estimation of ] , we assume that the effect of this transmission can be neglected after the time . this can be realised by appropriately choosing the transmitter and receiver parameters , and . in order to make the explanation here a bit more concrete ,we assume that the transmitter uses symbols and . over a duration of symbols , the possible sequences sent by the transmitter are 000 , 001 , 010 , 011 , ... , 111 .let denote the mean number of signalling molecules at the receiver voxel if the sequence 000 is sent .we can similarly define .consider the transmission of three consecutive symbols and .assuming that we have an estimation of the first two symbols and , then the decoding of can be done by using the demodulation filter by replacing by .for example , if and , then one can decode what is by using the demodulator filters and .although the decision feedback based method can solve the isi problem , the number of internal models increases exponentially with the memory length parameter .the reason why we need to consider all possible transmission sequences is that the ligand - receptor binding process has a non - linear reaction rate .a method to reduce the number of internal models is to design the system so that etc .can be decomposed into a sum .let ( ) be the mean number of signalling molecules at the receiver voxel if the symbol is sent for one symbol duration and in the absence of isi .if holds for all and , then one can again make use of decision feedback to decode the isi signal .however , this time , only internal models are needed .equation can be made to hold approximately if the number of receptors is large .this can be explained as follows .first of all , if ligand - receptor binding is absent , this means there is only free diffusion then equations holds because the mean number of signalling molecules obeys the diffusion equation which is linear .this means that we need to create an environment that looks like " free diffusion even when ligand - receptor binding is present .this can be realised if the number of signalling molecules that are bound to the receptors is small compared to those that are free .a method to achieve this is to increase the number of receptors .we will demonstrate this with a numerical example in section [ sec : eval ] .however , it is still an open problem to solve the isi in the general case .the aim of this section is to study the properties of the map demodulator numerically .we begin with the methodology .we assume the diffusion 
coefficient of the medium is 1 .the receptor parameters are = 0.005 s , , and s .these values are similar to those used in and = 100 , = 0.2 s and s .these parameters are 10100 times faster than ours and can be considered as a time - scaling .note that uses and instead of , respectively , and . ] .the above parameter values will be used for all the numerical experiments . for each experiment ,the transmitter uses either or symbols .each symbol is generated by a different sets of chemical reactions .different experiments may use different sets of chemical reactions and will be described later .the number of receptors also varies between the experiments .we use the stochastic simulation algorithm ( ssa ) to obtain realisations of which is the number of complexes over time .ssa is a standard algorithm in chemistry to simulate diffusion and reactions ; it is essentially an algorithm to simulate a ctmp . in order to use equation, we require the mean number of signalling molecules in the receiver voxel when symbol is sent .unfortunately , it is not possible to analytically compute from the ctmp because of moment closure problem which arises when the transition rate is a non - linear function of the state .we therefore resort to simulation to estimate .each time when we need an , we run ssa simulation 500 times and average the results to obtain .note that these simulations are different from those that we use to generate for the performance study .in other words , the simulations for estimating and for performance study are completely independent .once and are obtained , we use numerical integration to calculate using equation .we assume that all symbols appear with equal probability , so we initialise for all .the optimal demodulation filter requires the term ] as an internal model .the aim of this section is to compare the performance of these two demodulation filters . in this comparison, we consider a medium of 1 m m m .we assume a voxel size of ( ) ( i.e. m ) , creating an array of voxels .the voxel co - ordinates of transmitter and receiver are , respectively , ( 1,1,1 ) and ( 3,1,1 ) .a reflecting boundary condition is assumed .the reason why we have chosen to use such a small number of voxels is because of the dimensionality of the filtering problem .for example , if each voxel can have a maximum of 100 signalling molecules at a time , then there are 10 possible vectors and the filtering problem has to estimate the probability ] precisely .for this experiment , we use symbols and two values of ( the number of receptors ) : 5 and 10 . both symbols 0 and 1 use reaction such that symbols 0 and 1 causes , respectively , 10 and 50 signalling molecules to be generated per second on average by the transmitter .the simulation time is about 1.8 seconds .we first show that ] ( obtained from one realisation of ) are pretty similar .this result is obtained from using and symbol 1 .the results for other choices of , transmission symbols or other realisations of are similar .figure [ fig : prop : opt_ser ] shows the mean symbol error rates ( sers ) , for both optimal and sub - optimal demodulation filters , if the detection is done at time = 1 , 1.05 , 1.1 , ... , 1.8 .the sers is obtained from 400 realisations of .the difference in sers between the optimal and sub - optimal filter is less than 1% .we have also checked that the two demodulators make the same decoding decision on average 99.3% of the time . 
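as a rough illustration of the ssa methodology used above, the sketch below (ours) simulates only the receptor binding and unbinding events in the receiver voxel, with the number of signalling molecules held fixed; the full simulations in this section also include the transmitter reactions and diffusion between voxels, and the parameter values in the example call, apart from lambda = 0.005, are placeholders rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssa_receptor(lam, mu, M, n_r, T):
    """
    gillespie-type simulation of receptor binding/unbinding alone, with the
    number of signalling molecules in the receiver voxel held fixed at n_r.
    returns the event times and the number of complexes b(t) after each event.
    """
    t, b = 0.0, 0
    times, counts = [0.0], [0]
    while t < T:
        a_bind = lam * n_r * (M - b)      # propensity of the binding reaction
        a_unbind = mu * b                 # propensity of the unbinding reaction
        a_tot = a_bind + a_unbind
        if a_tot == 0:
            break
        t += rng.exponential(1.0 / a_tot)          # time to the next event
        if rng.random() < a_bind / a_tot:
            b += 1                                  # a receptor binds
        else:
            b -= 1                                  # a complex dissociates
        times.append(t)
        counts.append(b)
    return np.array(times), np.array(counts)

# example call: lambda = 0.005 as quoted above; mu, M, n_r and T are illustrative
t_ev, b_ev = ssa_receptor(lam=0.005, mu=5.0, M=10, n_r=50, T=2.0)
```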
in the rest of this section, we will use the sub - optimal demodulation filter because of its lower computational complexity .we consider a medium of 2 m 2 m 1 m .we assume a voxel size of ( ) ( i.e. m ) , creating an array of voxels .the transmitter and receiver are located at ( 0.5,0.8,0.5 ) and ( 1.5,0.8,0.5 ) ( in ) in the medium . the voxel co - ordinates are ( 2,3,2 ) and ( 5,3,2 ) respectively .we assume an absorbing boundary for the medium and the signalling molecules escape from a boundary voxel surface at a rate of .this configuration will be used for the rest of this section . for this experiment ,we use symbols and receptors .both symbols 0 and 1 use reaction such that symbols 0 and 1 causes , respectively , 40 and 80 signalling molecules to be generated per second on average by the transmitter .the simulation time is about 3 seconds .figure [ fig : prop_demod_raw_u0 ] shows the demodulation filter outputs and if the transmitter sends a symbol 0 .it can be seen that most of the time after , which means the detection is likely to be correct after this time .the sawtooth like appearance of and is due to the fact that every time when a receptor is bound , there is a jump in the filter output according to equation .figure [ fig : prop_demod_raw_u1 ] shows the filter outputs and if the transmitter sends a symbol 1 ; the behaviour is similar .figure [ fig : prop_demod_mean_u0 ] shows the _filter outputs and if the transmitter sends a symbol 0 .the mean is computed over 200 realisations of .it can be seen that the mean filter output of is greater than that of .similarly , if symbol 1 is sent , then we expect of the mean of to be bigger .the figure is not shown for brevity .figure [ fig : prop_demod_mean_ser ] shows the mean sers for symbols 0 and 1 if the detection is done at time .the ser for symbol 1 is high initially but as more information is processed over time , the ser drops to a low value .this experiment shows that it is possible to use the analogue demodulation filter to compute a quantity that allows us to distinguish between two emission patterns at the receiver .we continue with the setting of [ sec : prop_demod ] but we vary the number of receptors between 1 and 20 .we assume the demodulator makes the decision at and calculate the mean ser for both symbols at .figure [ fig : prop_nrec ] plots the sers versus the number of receptors .it can be seen that the ser drops with increasing number of receptors .we have used symbols so far .we retain the current symbols 0 and 1 , and add a symbol 2 which is also of the form of reaction but its mean rate of production of signalling molecules is 3 times that of symbol 0 .the number of receptors used are : 1 , 10 , 20 , ... 
, 150 .we compute the average ser at assuming each symbol is transmitted with equal probability .we plot the logarithm of the average ser against in figure [ fig : prop_nrec_3s ] .it can be seen that the ser drops with increasing number of receptors .the plot in figure [ fig : prop_nrec_3s ] suggests that , when the number of receptors is large , the relationship between logarithm of ser and is linear .we perform a least - squares fit for between 50 and 150 .the fitted straight line is shown in figure [ fig : prop_nrec_3s ] and it has a slope of .a possible explanation is that , because the receptors are non - interacting , each receptor provides an independent observation .the empirical evidence suggests that the average ser scales according to asymptotically provided that the voxel volume can contain that many receptors .equation suggests that if the transmitter uses two sets of reactions which have almost the same mean number of signalling molecules in the receiver voxel , then it may be difficult to distinguish between these two symbols . in this study ,symbol 0 is generated by reaction with a rate of while symbol 1 is generated by : {\rm on } & < - > [ rna]_{\rm off } } \\\cee { [ rna]_{\rm on } & ->[2\kappa ] [ rna]_{\rm on } + s } \label{eqn : r1 } \end{aligned}\ ] ] where we assume that rna can be in an on or off state , and signalling molecules are only produced when the rna is in the on - state .we assume that the there is an equal probability for the rna to be in the two states and the reaction rate constant for the production of signalling molecule from }_{\rm on} ] in terms of the quantity at time . recalling that is the state of the ctmp and since only is observed , the problem of predicting from is a bayesian filtering or hidden markov model problem .the first step is to condition on the state of the system , as follows : \\ & = \sum_{i } \mathbf{p}[n(t+\delta t ) = \eta_i , b(t+\delta t ) | s , { \cal b}(t ) ] \\ & = \sum_{i } \sum_{j } \mathbf{p}[n(t+\delta t ) = \eta_i , b(t+\delta t ) | s , n(t ) = \eta_j , { \cal b}(t ) ] \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : int1 } \\ & = \sum_{i } \sum_{j } \mathbf{p}[n(t+\delta t ) = \eta_i , b(t+\delta t ) | s , n(t ) = \eta_j , b(t ) ] \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : condp}\end{aligned}\ ] ] where we have used the markov property = \mathbf{p}[n(t+\delta t ) = \eta_i , b(t+\delta t ) | s , n(t ) = \eta_j , b(t)] ] in equation .this term is the state transition probability . using the ctmp in section [ sec : model ], we have \\ \nonumber = & \delta_{b(t+\delta t),b(t ) + 1 } p_1 + \delta_{b(t+\delta t ) ,b(t ) - 1 } p_2 + \delta_{b(t+\delta t ) , b(t ) } p_3 \label{eqn : app : p123 } \end{aligned}\ ] ] where where is the -th element of , i.e. there are signalling molecules in the receiver voxel , and where by substituting equation into equation , we have = \delta_{b(t+\delta t ) , b(t ) + 1 } q_1 + \delta_{b(t+\delta t ) , b(t ) - 1 } q_2 + \delta_{b(t+\delta t ) , b(t ) } q_3 \end{aligned}\ ] ] where \end{aligned}\ ] ] we will now determine , and . for , we have \nonumber \\ & = \lambda ( m - b(t ) ) \ ; \delta t \ ; \sum_{i } \sum_{j } \delta_{\eta_i , \eta_j - \mathbb{1}_r } \eta_{j , r } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \nonumber \\ & = \lambda ( m - b(t ) ) \ ; \delta t \ ; \sum_{j \ ; s.t . 
\ ; \eta_{j , r } \geq 1 } \eta_{j , r } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : q1:1 } \\ & = \lambda ( m - b(t ) ) \ ; \delta t \ ; \sum_{j } \eta_{j , r } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : q1:2 } \\ & = \lambda ( m - b(t ) ) \ ; \delta t \ ; \mathbf{e } [ n_r(t ) | s , { \cal b}(t ) ] \end{aligned}\ ] ] note that in equation , the sum is over all states with at least one signalling molecule in the receiver voxel , i.e. . since the summand in equation is zero if , we get the same result if we are to sum over all possible states , that is why equation holds . for , we have \nonumber \\ & = \mu b(t ) \ ; \delta t \sum_{i } \sum_{j } \delta_{\eta_i , \eta_j + \mathbb{1}_r } \ ; \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : q2:1 } \\ & = \mu b(t ) \ ; \delta t \sum_{j } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \label{eqn : app : q2:2 } \\ & = \mu b(t ) \ ; \delta t \ ; \end{aligned}\ ] ] note that equation follows from equation because for every , there is a unique such that holds . for , we have \nonumber \\ = &\sum_{i } \sum_{j \neq i } ( d_{ij } \ ; \delta t ) \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] + \nonumber \\ & \sum_{j } ( 1 - \lambda \eta_{j , r } ( m - b(t ) ) \ ; \delta t - \mu b(t ) \ ; \delta t - d_{jj } \ ; \delta t ) \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \nonumber \\= & \sum_{j } ( 1 - \lambda \eta_{j , r } ( m - b(t ) ) \ ; \delta t - \mu b(t ) \ ; \delta t ) \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] + \nonumber \\ & ( \sum_{i } \sum_{j \neq i } d_{ij } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] - \sum_{j } d_{jj } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] ) \ ; \delta t \nonumber \\ = & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) + \nonumber \\ & \underbrace{(\sum_{i } \sum_{j \neq i } d_{ij } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] - \sum_{j } \sum_{i \neq j } d_{ij } \mathbf{p}[n(t ) = \eta_j | s ,{ \cal b}(t)])}_{= 0 } \ ; \delta t \nonumber \\ = & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t)\end{aligned}\ ] ]having obtained , and , we arrive at : = & \delta_{b(t+\delta t ) , b(t ) + 1 } \lambda ( m - b(t ) ) \ ; \delta t \ ; \mathbf{e } [ n_r(t ) | s , { \cal b}(t ) ] + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) - 1 } \mu b(t ) \ ; \delta t \ ; + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) } ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) \label{eqn : app : predictb}\end{aligned}\ ] ] note that equation is the same as equation in the main text . from equation , we have : )}{\deltat } - \lim_{\delta t \rightarrow 0 } \frac{\log ( \mathbf{p}[b(t+\delta t ) | { \cal b}(t)])}{\delta t } \label{eqn : logpp : app : prelim}\end{aligned}\ ] ] note that the second term on the rhs is independent of transmission symbol , we will focus on the first term .note that ] in terms of the quantity at time . 
for , we have \nonumber \\ = &\sum_{i } \sum_{j \neq i } ( d_{ij } \ ; \delta t ) \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] + \nonumber \\ & \sum_{j } ( 1 - \lambda \eta_{j , r } ( m - b(t ) ) \ ; \delta t - \mu b(t ) \ ; \delta t - d_{jj } \ ; \delta t ) \times \nonumber \\ & \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] \nonumber \\= & \sum_{j } ( 1 - \lambda \eta_{j , r } ( m - b(t ) ) \ ; \delta t - \mu b(t ) \ ; \delta t ) \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] + \nonumber \\ & ( \sum_{i } \sum_{j \neq i } d_{ij } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] - \sum_{j } d_{jj } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] ) \ ; \delta t \nonumber \\ = & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) + ( \delta t ) \ ; \times \nonumber \\ & \underbrace{(\sum_{i } \sum_{j \neq i } d_{ij } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t ) ] - \sum_{j } \sum_{i \neq j } d_{ij } \mathbf{p}[n(t ) = \eta_j | s , { \cal b}(t)])}_{= 0 } \nonumber \\ = & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t)\end{aligned}\ ] ] having obtained , and , we arrive at : \nonumber \\ = & \delta_{b(t+\delta t ) , b(t ) + 1 } \lambda ( m - b(t ) ) \ ; \delta t \ ; \mathbf{e } [ n_r(t ) | s , { \cal b}(t ) ] \nonumber \\ & + \delta_{b(t+\delta t ) , b(t ) - 1 } \mu b(t ) \ ; \delta t \ ; + \nonumber \\ & \delta_{b(t+\delta t ) , b(t ) } \times \nonumber \\ & ( 1 - \lambda ( m - b(t ) ) { \mathbf e}[n_r(t ) | s , { \cal b}(t ) ] \ ; \delta t - \mu b(t ) \ ; \delta t ) \label{eqn : app : predictb}\end{aligned}\ ] ] note that equation is the same as equation in the main text . from equation, we have : )}{\delta t } - \nonumber \\ & \lim_{\delta t \rightarrow 0 } \frac{\log ( \mathbf{p}[b(t+\delta t ) | { \cal b}(t)])}{\delta t } \label{eqn : logpp : app : prelim}\end{aligned}\ ] ] note that the second term on the rhs is independent of transmission symbol , we will focus on the first term .note that ] .we define the vector as ^t \label{eqn : ex : state}\end{aligned}\ ] ] where the superscript is used to denote matrix transpose . based on the definition of , the state of the systemis the tuple and a valid state must be an element of the set } ] by ] by taking expectation on both sides of equation to obtain : }{dt } = & a { \mathbf e}[x(t ) ] + b u_s(t ) \end{aligned}\ ] ] note that ] .the filtering problem for a lti system is to estimate the state vector from the continuous history of the output .a method to realise filtering is to use an observer : where is the observer gain matrix .the vector is the estimated state vector from the past history of the output .the expectation of , i.e. ] .we are interested to study the difference - { \mathbf e}[x(t)] ] and can be small if the filtering error is stable .
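to make the transition structure above concrete , the following python sketch ( ours , not the authors' code ) simulates the number of bound complexes as a birth - death process whose binding propensity is proportional to the number of free receptors times the number of signalling molecules in the receiver voxel , and whose unbinding propensity is proportional to the number of bound complexes , mirroring the rates appearing in the derivation above . the rate constants , the number of receptors and the molecule - count profile are arbitrary placeholders .

```python
import numpy as np

# minimal sketch ( not the authors' code ) : the number of ligand - receptor
# complexes b(t) evolves as a birth - death process , with binding propensity
# lam * ( m - b ) * n_r(t) and unbinding propensity mu * b , matching the
# transition probabilities derived above . n_r(t) , the number of signalling
# molecules in the receiver voxel , is an arbitrary placeholder profile here .

def simulate_complexes(n_r, m=50, lam=0.1, mu=0.05, dt=1e-3, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    b = 0
    history = np.empty(steps, dtype=int)
    for k in range(steps):
        t = k * dt
        p_up = lam * (m - b) * n_r(t) * dt    # probability that one more complex forms
        p_down = mu * b * dt                  # probability that one complex dissociates
        u = rng.random()
        if u < p_up:
            b += 1
        elif u < p_up + p_down:
            b -= 1
        history[k] = b
    return history

# example : an exponentially decaying molecule count in the receiver voxel
trace = simulate_complexes(n_r=lambda t: 5.0 * np.exp(-t))
```

a demodulator along the lines of the main text would then feed such a trace of the complex count into the log - posterior update .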
|
in a diffusion - based molecular communication network , transmitters and receivers communicate by using signalling molecules ( or ligands ) in a fluid medium . this paper assumes that the transmitter uses different chemical reactions to generate different emission patterns of signalling molecules to represent different transmission symbols , and the receiver consists of receptors . when the signalling molecules arrive at the receiver , they may react with the receptors to form ligand - receptor complexes . our goal is to study the demodulation in this setup assuming that the transmitter and receiver are synchronised . we derive an optimal demodulator using the continuous history of the number of complexes at the receiver as the input to the demodulator . we do that by first deriving a communication model which includes the chemical reactions in the transmitter , diffusion in the transmission medium and the ligand - receptor process in the receiver . this model , which takes the form of a continuous - time markov process , captures the noise in the receiver signal due to the stochastic nature of chemical reactions and diffusion . we then adopt a maximum a posteriori framework and use bayesian filtering to derive the optimal demodulator . we use numerical examples to illustrate the properties of this optimal demodulator . * keywords : * molecular communication networks ; modulation ; demodulation ; maximum a posteriori ; optimal detection ; stochastic models ; bayesian filtering ; molecular receivers .
|
individuals ( or agents ) employed in a naming game ( ng ) [ , ] are connected by a certain communication network . the network represents the relationships among the involved agents , on which two agents can communicate directly with each other only if they are directly connected on the network . an isolated agent is not allowed in the underlying network , since it does not participate in the game and hence can be removed ; thus information can be propagated to every agent so that the whole population may eventually reach global consensus ( _ i.e. _ , convergence ) , in the sense that every agent keeps one and only one identical name to describe the object to be named . the convergence of ng may be observed via numerical simulations [ , , ] , proved theoretically [ ] , or verified empirically by humans [ ] . as to the underlying communication network , the random - graph [ ] , small - world [ ] and scale - free [ ] networks are the most widely used ones for naming games [ , , , , ] , which will also be employed in the present study . fig . [ fig : fig1 ] shows the flowchart of the minimal ng , of which the input includes : 1 ) a population of agents with empty memory , but each agent has infinite capacity of memory ; 2 ) a connected underlying network indicating the relationships among the agents ; 3 ) an infinite ( or large enough ) external lexicon which specifies a large number of different names ; 4 ) an object ( idea , convention , or event , etc . ) to be named by the population . the output is a population of agents in consensus , where every agent has one and only one identical name for the object in his memory . the convergence process will be recorded for analysis , in terms of _ e.g. _ the _ number of total names _ and the _ number of different names _ in the population , as well as the _ success rate _ . changes to any input item will cause different convergence features ; for example , the case when all agents have a limited memory size [ ] . at each time step of the minimal ng , a pair of connected agents is randomly selected from the population , to be speaker and hearer respectively . if the object is unknown to the speaker , meaning that the speaker has no name in his memory to describe the object , then he will randomly pick a name from the external lexicon ( which is equivalent to randomly inventing a new name from the words in the lexicon ) , and then utters the name to the hearer .
when the object is already known to the speaker , namely the speaker has one or several names in his memory , he will randomly pick a name from the memory and then utter it . after the hearer receives the name , he will search over his memory to see if he has the same name stored therein : if not , then he will store it into the memory ; but if yes , then the hearer and the speaker reach consensus , so they both clear up all the names while keeping this common name in their respective memory . an example illustrating one time step of the pair - wise communication is given in fig . [ fig : fig2 ] . this pair - wise success is referred to as local consensus hereafter . such a pair - wise transmitting and receiving ( or teaching and learning ) process will continue to iterate until eventually the entire population of agents reaches consensus , referred to as global consensus , meaning that all the agents agree to describe the object by the same name . each node of the underlying network represents an agent in ng , while each edge means that the two connected nodes can communicate with each other directly , in either a pair - wise [ , , , , ] or a group - wise [ , , ] communication setting . the number of connections of a node is referred to as its degree . the heterogeneity of social networks can generally be reflected by the scale - free networks [ , , ] , where a few agents have much larger degrees than most agents that have very small degrees . on the other hand , human communications are community - based , in the sense that people belonging to the same community are much more actively interacting and communicating with each other than those in different communities . recall that the multi - local - world ( mlw ) model [ , ] is a kind of scale - free network , capable of capturing the essential features of many real - world networks with community structures . the degree distribution of the mlw network is neither in a completely exponential form nor in a completely power - law form , but is somewhere between them . in particular , the mlw model shows good performance in capturing basic features of the internet at the autonomous system ( as ) level [ ] . it is quite well known that human social networks also have as - like structures . therefore , it is quite reasonable to study a naming game of a population on an mlw communication network , where a local world is a community formed not only by natural barriers such as mountains , rivers and oceans , but also by folkways , dialects and cultures . in this paper , therefore , the naming game is studied under an mlw network framework , with three typical topologies of human communication networks , namely random - graph , small - world and scale - free networks , respectively . the main contributions of this study include the following findings : 1 ) when the intra - community connections increase while the inter - community connections remain unchanged , the convergence to global consensus is slow and eventually might fail ; 2 ) when the inter - community connections are sufficiently dense , both the number and the size of the communities do not affect the convergence process ; and 3 ) for different topologies with the same average node - degree , local clustering of individuals obstructs or prohibits global consensus from taking place . the simulation results reveal the role of local communities in a global naming game in social networks . the rest of the paper is organized as follows .
in section ii , the multi - local - world model is introduced , followed by extensive simulation results with analysis in section iii . finally , section iv concludes the investigation . here and throughout , all random operations ( _ e.g. _ , random generation , selection , addition or deletion ) follow a uniform distribution . the algorithm for generating an mlw network with nodes can be summarized as follows [ ] . the initialization starts with isolated local - worlds . within each local - world , there are nodes connected by edges . at each time step , a value , is generated at random . if , perform addition of a new local - world of nodes connected by edges , which is added to the existing network . . if , perform addition of a new node to a randomly selected local - world by preferential attachment : the new node is added to the selected local - world , establishing new connections ( edges ) . the new node is connected to nodes existing in the local - world according to the following preferential probability : where is the degree of node within the local - world and is a tunable parameter . . if , perform addition of edges within a randomly selected local - world : edges are added to this . for each new edge , one end is connected to a randomly picked node within the , while the other end is connected to a node selected also from the same according to a probability given by eq [ eq1 ] . this process repeats times . . if , perform deletion of edges within a randomly selected local - world : edges are deleted from . the purpose is to remove more edges that connect to small - degree nodes . to do so , randomly select a node from . remove the edges of this node one by one , according to the following probability , where is the degree of the node at the other end of the edge : where is the number of nodes within the and is given by eq [ eq1 ] . this process repeats times . . if , perform addition of edges among local - worlds : edges are added to connect different local - worlds . first , two different local - worlds are picked at random . then , one node is selected within each local - world according to the probability given by eq [ eq1 ] . an edge is finally added between these two nodes . this process repeats times . the initial number of nodes is and the termination number is ( typically , much larger ) . the generation algorithm stops when , in total , nodes have been generated into the network . note that throughout the above process , the generation of repeated connections , self - loops and isolated nodes should be avoided or removed . the detailed generating algorithm of mlw networks as well as the calculation of its degree distribution can be found in [ ] . as shown above , there are in total eleven tunable parameters , among which only two parameters are of interest in the present paper , _ i.e. _ , the number of local - worlds and the initial number of nodes within each local - world . according to [ ] , it is hard for a population to reach global consensus if the underlying network has multiple communities . the underlying network used in [ ] is a combination of several scale - free networks , where the combination is generated by a reversed preferential attachment probability . specifically , the intra - connections within each community are based on a preferential attachment probability given by eq [ eq1 ] , while the inter - connections between communities are generated according to the following preferential attachment probability : only bi - community and tri - community networks are studied in [ ] .
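as a rough illustration of the growth process just described , the following python sketch ( our own simplification , not the generator used in the paper ) implements only the two operations that matter most for the later discussion , namely adding a node to a randomly chosen local - world by preferential attachment and adding edges between local - worlds ; the probabilities , sizes and the use of networkx are illustrative assumptions .

```python
import random
import networkx as nx

def preferential_pick(graph, nodes, alpha=1.0):
    # pick a node from ` nodes ` with probability proportional to ( degree + alpha )
    weights = [graph.degree(n) + alpha for n in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

def grow_mlw(n_total=200, n_worlds=5, m0=5, p_new_node=0.8, m_edges=3):
    g = nx.Graph()
    worlds = []
    for w in range(n_worlds):                      # initially fully - connected local - worlds
        members = [f"w{w}n{i}" for i in range(m0)]
        g.add_edges_from((a, b) for i, a in enumerate(members) for b in members[i + 1:])
        worlds.append(members)
    new_id = 0
    while g.number_of_nodes() < n_total:
        if random.random() < p_new_node:           # add a node to a random local - world
            world = random.choice(worlds)
            node = f"x{new_id}"
            new_id += 1
            targets = {preferential_pick(g, world) for _ in range(m_edges)}
            g.add_edges_from((node, t) for t in targets)
            world.append(node)
        else:                                      # add an edge between two local - worlds
            w1, w2 = random.sample(worlds, 2)
            g.add_edge(preferential_pick(g, w1), preferential_pick(g, w2))
    return g, worlds
```

repeated connections and self - loops are automatically avoided here because the graph is simple and the two endpoints of an inter - community edge always belong to different local - worlds .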
in this paper , the mlw model introduced above will be employed , in which both the number and the initial size are tunable parameters . by simply adjusting these two parameters , the ng can be performed on a set of more generalized networks with multiple communities , which is more realistic for representing the real human society and the language development therein . [ tab : tab1 parameter settings : operation ( addition of new local - worlds ) is not performed ; operation ( addition of a new node to a local - world ) is performed with probability 0.28 ; operation ( addition of edges within a local - world ) is performed with probability 0.11 ( = 0.39 - 0.28 ) ] the minimal ng is studied on mlw networks since they simulate the internet as well as many social networks realistically . there are mainly eleven parameters , among which we are interested in only two , _ i.e. _ , the number of local - worlds and the initial number of nodes within each local - world . the other nine out of eleven parameters are fixed , as set in [ ] , which are , , , , , , , , and . their values are presented in tab . [ tab : tab1 ] , along with the meanings of such parameter settings . as reported in [ ] , if the underlying network is fully - connected then the ng converges at the fastest speed . so , in the following simulations , all the initial local - worlds are fully - connected , so the parameter . next , denote the number of individuals ( population size ) by , which satisfies otherwise there will be only isolated local - worlds , so the network is not connected [ ] . introduce a new parameter , as the rate of initially assigned nodes in the local - worlds : when , there is no local - world and the network degenerates to a scale - free one since every node is added by a preferential attachment ; when , it generates several isolated local - worlds without any additional nodes or edges . the purpose of introducing is to change the above inequality into an equality : the comparative simulation is carried out by varying , , and . convergence time will be used as the measure , which refers to the number of time steps at which global convergence is reached . in the following comparisons , 1 ) is fixed and the convergence time affected by the dynamics of the number and size of local - worlds is examined ; 2 ) the convergence time is studied when the rate is varying , with fixed values of and ; and 3 ) the convergence processes of mlw networks are compared with those on three typical models , _ i.e. _ , random - graph ( rg ) [ ] , small - world ( sw ) [ ] and scale - free ( sf ) [ ] networks . the population size is set and fixed to , and the cases when the population size is and are studied in si [ ] . the maximum number of iterations is set to and data are collected from independent runs and then averaged . here , iterations are empirically large enough for this study .
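for concreteness , the simulation loop used in the comparisons can be sketched in python as below ; it iterates the minimal ng rules of section i on a given network until global consensus is reached or an iteration cap is hit , and returns the convergence time . the lexicon size , the cap and all variable names are our own illustrative choices rather than the settings of the reported experiments .

```python
import random

def run_naming_game(graph, lexicon_size=10**6, max_steps=10**7):
    # graph : mapping node -> iterable of neighbours ( a networkx graph also works )
    memory = {node: set() for node in graph}
    for step in range(1, max_steps + 1):
        speaker = random.choice(list(graph))
        hearer = random.choice(list(graph[speaker]))
        if not memory[speaker]:                       # empty memory : invent a new name
            memory[speaker].add(random.randrange(lexicon_size))
        name = random.choice(tuple(memory[speaker]))  # utter a randomly picked name
        if name in memory[hearer]:                    # local consensus : both keep only this name
            memory[speaker], memory[hearer] = {name}, {name}
        else:
            memory[hearer].add(name)                  # otherwise the hearer stores the name
        different_names = set().union(*memory.values())
        if len(different_names) == 1 and all(len(m) == 1 for m in memory.values()):
            return step                               # global consensus : convergence time
    return None                                       # not converged within max_steps
```

the number of total names , the number of different names and the success rate used as measures below can be recorded inside the same loop .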
also , denote the number of different names at time step by . simulation shows that when , it has reached global convergence , while when , it means the local - worlds have converged to different names , respectively , as can be seen in tab . [ tab : tab2 ] . in addition , over a long time period , one has that which means that the number of different words does not change during a long time . note also that is monotonically non - increasing in this converging ( or converged ) stage . this phenomenon can be observed from fig . [ fig : fig8 ] . considering the conditions shown in equations ( 5 ) and ( 6 ) together , by setting the maximum number of iterations to , the population has converged sufficiently well . [ fig : fig4 caption : the box plot of the convergence time vs the number of initial nodes in each local - world , with a ) and b ) . the number of local - worlds can be calculated by eq [ eq4 ] , and since it should be an integer , it is calculated by , where is the largest integer less than or equal to . the mean value of convergence time in both figures is concave : it first decreases when increases from 3 to 5 , and then increases as continues to increase . when and 5 , it converges the fastest in both cases . in a ) , , when , it shows occasionally non - converged behaviors ; and when , it never converges within the pre - set iterations . as for b ) , , when , it shows occasionally non - converged behaviors , and when , it never converges within iterations . ] the number of initial nodes of each local - world is set to 26 different values : varying from 3 to 19 with an increment of 1 , and from 20 to 100 with an increment of 10 , to have different scenarios . the rate of initially assigned nodes is set to and 0.7 , respectively , as shown in fig . [ fig : fig4](a ) and ( b ) . it can be seen from fig . [ fig : fig4 ] that relatively small sizes of communities are beneficial for achieving convergence . since nodes within communities are fully - connected , this makes the intra - community convergence the fastest . in contrast , the strong intra - connections of different communities make the inter - community convergence more difficult , especially when different communities have already converged to different words .
in the box plot shown in fig . [ fig : fig4 ] , the blue box indicates that the central 50% of the data lie in this section ; the red bar is the median value of all 30 datasets ; the upper and lower black bars are the greatest and least values , excluding outliers , which are represented by the red pluses . tab . [ tab : tab3 ] shows the average degrees , average path lengths and average clustering coefficients of all the generated mlw networks . it shows that as increases , both the average degree and average clustering coefficient increase , while the average path length decreases . this means that , on average , when increases , the networks are better connected , yet more clustered . better connections ( greater average degree and shorter average path length ) facilitate convergence in ng [ , ] , while local clustering and forming communities hinder convergence . at the extreme , one can assume that any sub - network in a fully - connected network is a local community . in this case , both intra - community and inter - community connections are maximized , thus there is no barrier existing amongst the communities in a fully - connected network . barriers preventing communities from global convergence are formed only if the intra - community connections are strong while the inter - community connections are weak . fig . [ fig : fig5 ] shows an example illustrating how the intra - connections become stronger when the community size increases . as can be seen from the figure , the ratio of intra - connections to inter - connections gets larger as the community size gets larger ( see fig . [ fig : fig5](a ) 3:1 , fig . [ fig : fig5](b ) 6:1 , and fig . [ fig : fig5](c ) 15:1 ) . if one wishes to keep the ratio constant , _ e.g. _ , 3:1 , then for a 4-node community there should be 2 nodes connected externally , while for a 6-node community the number of inter - connections should be 5 . note that the number of inter - community connections is fixed . the inter - connections are generated during the addition of nodes , repeatedly by randomly selecting operations from to ( defined in section ii ) . as shown in fig . [ fig : fig5 ] , if the number of inter - community connections is fixed , while the size of the fully - connected community is growing , then the number of external connections becomes insufficient for convergence . in a nutshell , the inter - community connections of mlw networks should be kept constant , and the number and size of communities should be changed ( reducing the number of communities and enlarging the size of each community ) , so that intra - community connections get stronger . as a result , as intra - connections increase , while inter - connections are kept constant , the convergence process will be slowed down and eventually fail . this explains more clearly the increasing convergence time shown in fig . [ fig : fig4 ] . in this section , both the number and size of local - worlds are fixed , while the rate of initially assigned nodes is varied from 0.1 to 0.9 . as can be seen from fig . [ fig : fig6 ] , a common pattern is that , when is small enough ( _ i.e.
_ , in fig . [ fig : fig6](a ) ; in fig . [ fig : fig6](b ) ; in fig . [ fig : fig6](c ) ) , different values of do not affect the convergence time at all . note that the inter - community connections are generated during the addition of nodes . when is small enough for certain networks , this means that the inter - community connections are substantial and probably sufficient already to achieve global convergence . as continues to increase , decreases , thus the inter - community connections are reduced and become insufficient when reaches certain large values ( _ i.e. _ , when in fig . [ fig : fig6](a ) ; in fig . [ fig : fig6](b ) ; in fig . [ fig : fig6](c ) ) . denote the threshold value by . then , when , the convergence time is not affected by , while when , the convergence time increases drastically as increases . as can be seen from fig . [ fig : fig6 ] , as increases ( ) , decreases ( ) . this phenomenon can also be observed when the population size is 500 and 1500 , respectively [ ] . this phenomenon can be explained by the example shown in fig . [ fig : fig5 ] , where the number of intra - community connections is . this means that when is small , the number of intra - community connections is relatively small , so that the required inter - community connections become fewer , thus does not affect the convergence time , until that number becomes relatively large . in contrast , when is large , the number of intra - community connections is relatively large , thus even if is relatively small , the convergence time is clearly affected , due to the large number of inter - community connections required . the convergence progress of mlw networks is compared with that on three typical network topologies , _ i.e. _ , random - graph ( rg ) [ ] , small - world ( sw ) [ ] and scale - free ( sf ) [ ] networks . the comparison is made in terms of the convergence progress of the number of total words , the number of different words and the success rate . for fairness and also for convenience , four sets of data are chosen , with , 20 , 30 and 100 , for which the average degrees of the mlw networks are 9.41 , 16.43 , 22.91 and 72.41 , respectively . these data values are used as the connecting probabilities , exactly for generating rg networks and approximately for generating sw and sf networks . the feature statistics of the generated networks are summarized in tab . [ tab : tab4 ] , together with the statistics of the mlw for reference . as shown in tab . [ tab : tab4 ] , the four types of networks have very similar average degrees . however , mlw has the longest average path length and the highest clustering coefficient values . sw has the second longest average path length and the second highest clustering coefficient values . both rg and sf have smaller values of these two features . in fig . [ fig : fig7 ] , [ fig : fig8 ] and [ fig : fig9 ] , the four cases of different parameter settings are : ( a ) , ( b ) , ( c ) , and ( d ) , and these four types of networks of the same ( or similar ) average degree are compared in the same figure for clarity . in fig . [ fig : fig7 ] , the four sub - figures share two common phenomena : 1 ) the population with underlying network rg converges the fastest , followed by sf and sw ; mlw converges the slowest in fig . [ fig : fig7](a ) , but does not converge in the cases shown in fig . [ fig : fig7 ] ( b ) , ( c ) , and ( d ) ; 2 ) the curve for the underlying network rg has the highest peak , followed by sf and sw , and mlw has the lowest .
as also shown in tab . [ tab : tab4 ] , rg has the smallest clustering coefficient values , followed by sf and sw , and mlw has the greatest , meaning that mlw has a stronger tendency towards clustering and forming communities than the other three networks . sw also has a relatively strong tendency towards clustering . this leads to the following two phenomena : 1 ) individuals within communities reach convergence quickly , so that the number of total words in the entire network decreases fast when there are communities ; and 2 ) the inter - community convergence process is delayed or even prevented by the multi - community topology . this can be further summarized as follows : given the same average degree , a less clustered network has a convergence curve with a higher peak and a sharper decline , while a more clustered network has a flatter curve with a lower peak . note that the cases when the underlying network is a tree ( with average degree and clustering coefficient zero ) or a globally fully - connected network ( with average degree and clustering coefficient one ) are not investigated in the above simulations because , in these two special cases , for a given average degree value the clustering coefficient can not be adjusted . in fig . [ fig : fig8 ] , although the ranking of the convergence is exactly the same as what is shown in fig . [ fig : fig7 ] , the peaks of the curves are similar to each other . this is because not only the lexicon but also the game rules are identical for all types of underlying networks , namely , if the picked speaker has nothing in his memory then he randomly picks a name from the external lexicon . fig . [ fig : fig9 ] shows the success rate . it is obvious that when a network has a small clustering coefficient value , its success rate curve is generally smooth . however , for sw and mlw networks , high clustering coefficient values generate very rough success rate curves . for sw , although rough , the success rate can eventually reach 1.0 ; but for mlw , if the population does not converge as shown in fig . [ fig : fig7 ] and [ fig : fig8 ] , the success rate can not reach 1.0 . this is because , in the late stage : 1 ) individuals within communities have already reached convergence , so that the success rate of intra - community communication is as high as one , and 2 ) different communities have converged to generally different names , so that the success rate of inter - community communication is likely to be as low as zero . as a result , the curves are fluctuating and visually fuzzy . consider a real - life situation in which there are two types of local communities : one located in a suburb of a metropolis ( denoted by ) , and the other a primitive tribe ( denoted by ) . the has many connections to the metropolis ( as well as the world outside ) such as road paths , telephone systems and the internet , while an has probably only one trail to the outside without any other communication connections . within both communities , people know each other and therefore have direct communications . considering the above scenario , the first and second experimental studies show that if the size of a community is relatively small , no matter whether it is an or , information can be easily delivered to each individual within the community , so that they are affected by the outside world ( and finally reach global consensus ) . however , if the community size is big , a large number of external links are required .
otherwise , many individuals can not receive information from outside , and hence the community ( _ e.g. _ , an of large size ) can only reach local convergence , rather than global convergence . the third experiment shows that , given a fixed average degree , say five ( namely , on average each person has five friends to communicate with ) , if people prefer communicating with local friends , then local communities are formed and so global consensus is hindered . in contrast , global consensus requires people to have sufficient chance to communicate globally . in this paper , the naming game ( ng ) is implemented by employing the multi - local - world ( mlw ) model , together with three typical topologies , namely random - graph , small - world and scale - free networks , as the underlying framework for communications . the underlying networks , which indicate the relationships among different individuals , play an important role in ng , since connections are the precondition for pair - wise communications . as found in this study , community structures are essential for social communications , for which the mlw model used as the underlying network is more practical than the other commonly used network topologies . the simulation is implemented to study the effects of the number and size of local - worlds in different ng networks , with or without communities , and the results are compared against several key parameters . simulation results suggest that : 1 ) sufficiently many inter - community connections are crucial for the convergence ; thus , given constant inter - connections , when intra - connections increase , meaning that the inter - connections are relatively weakened , the convergence process will be slowed down and eventually fail ; 2 ) for sufficiently many inter - community connections , both the number and the size of communities do not affect the convergence at all ; and 3 ) given the same average degree for different underlying network topologies , different degrees of clustering will distinctly affect the convergence , which also changes the shapes of the convergence curves . the results of this investigation reveal the essential role of communities in ng on various complex networks , which sheds new light on a better understanding of human language development , social opinion formation and evolution , and even rumor epidemics . y. lou , g.r . chen , z.p . fan , and l.n . xiang , supplementary information for the paper `` local communities obstruct global consensus : naming game on multi - local - world networks '' , http://www.ee.cityu.edu.hk//pdf/mlw-si.pdf ( 2016 ) .
|
community structure is essential for social communications , where individuals belonging to the same community are much more actively interacting and communicating with each other than those in different communities within the human society . naming game , on the other hand , is a social communication model that simulates the process of learning a name of an object within a community of humans , where the individuals can reach global consensus on naming an object asymptotically through iterative pair - wise conversations . the underlying communication network indicates the relationships among the individuals . in this paper , three typical topologies of human communication networks , namely random - graph , small - world and scale - free networks , are employed , which are embedded with the multi - local - world community structure , to study the naming game . simulations show that 1 ) when the intra - community connections increase while the inter - community connections remain to be unchanged , the convergence to global consensus is slow and eventually might fail ; 2 ) when the inter - community connections are sufficiently dense , both the number and the size of the communities do not affect the convergence process ; and 3 ) for different topologies with the same average node - degree , local clustering of individuals obstruct or prohibit global consensus to take place . the results reveal the role of local communities in a global naming game in social network studies .
|
the jagiellonian - pet ( j - pet ) collaboration is developing a prototype tof - pet detector based on plastic scintillators . the detector is a cylinder made of long scintillator strips . its large acceptance allows for full 3-d image reconstruction . the main advantage of the j - pet solution is its excellent time resolution ( see e.g. results in ) , which makes it suitable not only for medical purposes , but also for precise studies of the discrete symmetries in positronium systems . the tof - pet data processing and reconstruction are time- and resource - demanding operations , especially in the case of a large acceptance j - pet detector , which works in the so - called triggerless mode , in which all events ( digitized times and amplitudes ) from the front - end electronics ( fee ) are stored to disks without any master trigger condition applied . next , the collected raw data undergoes a process of low- and high - level reconstructions . the registered data is first transformed into the hit positions in the scintillator modules , and in the next step the hits are combined to form the lines of response ( lor ) . in the last stage , the image reconstruction procedures are used to obtain the final image based on the set of lors . in order to efficiently process this high data stream , parallel computing techniques have been applied at several levels of the data collection and reconstruction . parallel processing can be defined as a type of computation in which the task is divided into independent subtasks , which are then calculated simultaneously by several computing resources . the results of the individual computations are merged together . parallelization techniques can be classified according to several criteria , e.g. instruction - level parallelization corresponds to the simultaneous performance of several operations in the computer program . in the case of data parallelization , the data set is distributed among many computing nodes , while in the case of task parallelization the code is divided into threads and executed across the computing nodes . typically , to take advantage of parallelization , the software procedures must be designed in a special way , e.g. by using dedicated programming environments and libraries such as mpi , openmp or cuda . an overview of different parallelization techniques can be found in . in the past , parallel processing was the domain of high - performance computing by means of supercomputers . however , thanks to the very fast development of the overall performance of cpus , the relatively low prices and the introduction of new techniques such as multi - core processors , parallelization has become more accessible and popular in many different fields . apart from cpu processing , even more efficient technologies such as graphical processing units ( gpu ) or field programmable gate arrays ( fpga ) have recently gained a lot of attention . in the j - pet project , parallelization using multi - core cpus , gpus and fpgas is applied at different stages of data processing . an fpga is a programmable silicon chip which combines two important features : on one hand , the fpga is reprogrammable , therefore any logic can be implemented in hardware description languages such as verilog or vhdl and changed if needed .
on the other hand , the compiled program is translated to a set of physical connections between the logical arrays , therefore it is truly a hardware realization of the designed logic , with real - time processing capability analogous to that offered by dedicated asic processors . finally , fpga chips are well suited for parallelization and are very cost - effective . the fpga devices are the core computing nodes of the j - pet fee and data acquisition system ( daq ) . the j - pet fee was designed in view of sampling in the voltage domain of very fast signals at many levels , with a rise time of about 1 ns . a novel technique for precise measurement of time and charge is based solely on fpga devices and a few satellite discrete electronic components . one computing board ( called trigger readout board , trb ) consists of five lattice ecp3 - 150 fpgas . four fpgas are used as time - to - digital converters and one as a central fpga node that steers the whole board . multiple computing boards are interconnected via network concentrators . the global time synchronization is provided through a reference channel . the j - pet daq system allows for continuous data recording over the whole measurement period . in total , more than 500 channels with 1 gb / s data rates can be read . the overall constant read - out rate is equal to 50 khz , while reducing the dead time to the level of tens of ns . the described triggerless mode of operation allows every event to be stored without information loss due to preliminary selection . on the other hand , a significant amount of disk storage is needed ( about tb per measurement ) to save the data , whereas most of the currently registered events contain useless noise information only . in order to reduce the data flow and to eliminate background events , a new central controller module ( ccm ) is introduced as an intermediate computing node between the trb boards and the disk storage . the ccm is being developed based on the xilinx zynq chip , which contains an fpga integrated with an arm processor . it is capable of hardware processing of up to 16 gbit ethernet streams in parallel as well as online filtering of the data . moreover , it is even possible to implement some online reconstruction algorithms . finally , online monitoring with a dedicated data substream will be added . the raw data stored on the disks is processed in the j - pet framework , which serves as a programming environment providing useful tools for various reconstruction algorithms and calibration procedures , and which standardizes common operations , e.g. the input / output process . it also provides the necessary information about run conditions , geometry and electronic setups by communicating with the parameter database . the architecture of the analysis framework was already described in . in this paragraph we describe the parts important for understanding the framework parallelization . in the j - pet framework , the analysis chains are decomposed into a series of standardized modular blocks . each module corresponds to a particular computing task , e.g.
a reconstruction algorithm or a calibration procedure , with defined input and output methods . the processing chain is built by registering chosen modules in the jpetmanager , which is responsible for the synchronization of the data flow between the modules . the framework parallelization is implemented by using the proof ( parallel root facility ) extension for the root library . proof enables parallel file processing on a cluster of computers or on many - core machines . in the case of the j - pet framework , multi - core processing was tested . two options are being developed . the first solution is a realization of data - parallel computing . first , a set of chosen computing tasks , in the form of a processing chain , is registered in the jpetmanager as described before . the same processing chain is multiplied and executed in parallel for every input file provided . this approach assumes that the input files can be analyzed independently . in the second mode , a single processing chain can contain modules ( subtasks ) that can operate in parallel . this solution is currently being implemented . the final output of the low - level reconstruction phase is a reconstructed set of lors that is provided as the input data for the image reconstruction procedures . the most popular approach , based on iterative algorithms derived from the maximum likelihood estimation method ( mlem ) , has been adopted . the available time - of - flight information is incorporated to improve the accuracy and the quality of the reconstruction . in order to reduce the processing time , parallelization techniques are applied . currently two implementations are used . the first solution exploits the processing capability of graphical processing units ( gpu ) . an efficient image reconstruction using the list - mode mlem algorithm with approximation kernels was implemented for the gpu . here , the cuda platform was adopted . the second approach is a full 3-d reconstruction based on a multi - core cpu architecture . in this case , the most time - consuming operations such as projections and back - projections are parallelized . the code is based on the openmp library . for the current test implementation , the time of one mlem iteration , processed on 40 cores with 128 gb , is about 70 minutes when using the large field - of - view ( 88 cm x 88 cm x 50 cm ) with a binning of 0.5 cm and 1 degree . typically about 10 iterations are enough to reach the mlem optimal reconstruction point . in order to reduce the processing time of the data flow , we use the parallel computing approach at several stages . we presented the solution implemented at the fee and daq level , based on fpga chips . also , multi - core cpu - based and gpu - based algorithms are used for the low - level and high - level reconstructions . currently , work is ongoing to further reduce the processing time , e.g. by implementing online event filters . apart from the presented computing schemes , in which the data processing is performed locally , several remote processing concepts are considered as a replacement for the traditional on - site computing . the basic idea is to carry out the resource - heavy computations remotely by using cloud or grid computing . we acknowledge technical and administrative support by t. gucwa - ry , a. heczko , m. kajetanowicz , g. konopka - cupia , w.
migdał , and the financial support by the polish national center for development and research through grant no . innotech - k1/in1/64/159174/ncbr/12 , the foundation for polish science through the mpd programme and the eu , mshe grant no . poig.02.03.00 - 161 00 - 013/09 and doctus , the małopolska phd scholarship fund . p. moskal et al . , patent applications : pct / pl2010/00062 , pct / pl2010/00061 ( 2010 ) . p. moskal et al . , bio - algorithms and med - systems 7 ( 2011 ) 73 ; [ arxiv:1305.5187 [ physics.med-ph ] ] . p. moskal et al . , nuclear instruments and methods in physics research section a 764 ( 2014 ) 317 ; [ arxiv:1407.7395 [ physics.ins-det ] ] . l. raczyński et al . , nuclear instruments and methods in physics research section a 764 ( 2014 ) 186 ; [ arxiv:1407.8293 [ physics.ins-det ] ] . p. moskal et al . , nuclear instruments and methods in physics research section a 775 ( 2015 ) 54 ; [ arxiv:1412.6963 [ physics.ins-det ] ] . a. wieczorek et al . , acta phys . pol . a127 1487 - 1490 ( 2015 ) ; arxiv:1502.02901 [ physics.ins-det ] . p. moskal et al . , acta phys . pol . a127 1495 - 1499 ( 2015 ) ; arxiv:1502.07886 [ physics.ins-det ] . p. kowalski et al . , acta phys . pol . a127 1505 - 1512 ( 2015 ) ; arxiv:1502.04532 [ physics.ins-det ] . d. kamińska et al . , nukleonika ( 2015 ) , this issue . m. pałka et al . , bio - algorithms and med - systems 10 ( 2014 ) 41 ; [ arxiv:1311.6127 [ physics.ins-det ] ] . w. krzemień et al . , acta phys . pol . a vol . 127 , no . 5 ( 2015 ) ; arxiv:1503.00465 [ physics.ins-det ] . w. krzemień et al . , bio - algorithms and med - systems vol . 1 ( 2014 ) 27 ; arxiv:1311.6153 [ physics.ins-det ] . r. brun , f. rademakers , nuclear instruments and methods in physics research section a 389 ( 1997 ) .
|
the jagiellonian - pet ( j - pet ) collaboration is developing a prototype tof - pet detector based on long polymer scintillators . this novel approach exploits the excellent time properties of the plastic scintillators , which permit very precise time measurements . the very fast , fpga - based front - end electronics and the data acquisition system , as well as the low- and high - level reconstruction algorithms , were specially developed to be used with the j - pet scanner . the tof - pet data processing and reconstruction are time- and resource - demanding operations , especially in the case of a large acceptance detector , which works in the triggerless data acquisition mode . in this article , we discuss the parallel computing methods applied to optimize the data processing for the j - pet detector . we begin with general concepts of parallel computing and then we discuss several applications of those techniques in the j - pet data processing . * keywords : * daq ; computing ; tof - pet
|
infotaxis is an olfactory search strategy proposed in 2007 by vergassola , villermaux and shraiman to address the problem of finding the source of a volatile substance transported in the environment under turbulent or noisy conditions . in the absence of such complications , chemotaxis , i.e. moving up the concentration gradient , performs well as a search strategy , and many living organisms are known to use this strategy to perform their natural tasks . however , when detections are scarce or the concentration profile is not smooth , it is no longer possible to estimate the concentration and its gradient at a given point . in this regime , chemotaxis becomes unfeasible and infotaxis reveals its true significance . some insects are known to navigate and find their targets under these scenarios . learning from their strategies has inspired robotic devices designed to perform complicated search tasks with technological applications ( finding dangerous substances such as drugs or explosives , or exploring inhospitable environments ) , for which robustness and performance of the search are of main concern . turbulent or noisy environments are usually modeled by stochastic processes . in the simplest model , spatio - temporal correlations in the concentration profile are neglected , and the number of detections is modeled by considering a poisson process at each point of space . the rate of detections at each point depends on the position of the source and the parameters of the transport process , and is usually obtained from the solution of an advection - diffusion equation . the searcher agent has a built - in model of the environment , and it is able to calculate the estimated number of detections at its current location , given the position of the source . instead of knowing the true position of the source , the agent uses a probability distribution that expresses its belief about the position of the source . this belief function is constantly updated following bayesian inference , using the built - in model and the number of detections actually registered by the sensors at a given point . the most innovative feature of infotaxis is the criterion for the motion of the agent : instead of moving towards the most probable position of the source , the agent moves in the direction where it expects to gain more information about its position . in a sense , it is a greedy search in _ information _ , as opposed to _ physical _ space . infotactic searches involving fleets of cooperative agents have been considered in . extensions of the algorithm to continuous space and time and to three dimensions have been treated in . recently , masson has proposed an information - based search strategy similar to infotaxis where the searching agent does not have a global space perception .
in a previous work , we analyzed the performance of infotaxis as the initial position of the agent relative to the source and the boundary of the search domain was changed .the surprising result was that the mean search time was not always an increasing function of the distance to the source : in some cases , starting further away from the source led to shorter and more efficient search processes .this a priori counterintuitive result was explained by the fact that the first step in an infotactic search is not stochastic but deterministic , and depends only on the boundaries of the search domain and the parameters of the transport process ( rate of emission and correlation length ) , not on the position of the source .this is natural , since at the beginning of the search the agent has no information about the position of the source , the initial belief function is uniform and entropy is maximum .the search domain was shown to be partitioned into regions of constant first step , and these regions are limited by smooth curves . in this workwe extend our study of the performance of infotaxis to consider two different situations : 1 .variation in performance as a function of the environment , assuming perfect knowledge of the environment parameters . 2 .variation in performance due to an imperfect modeling of the environment . in the first casewe shall assume that the environment model used by the infotactic agent to do bayesian inference is exact , but we shall probe infotaxis under different ranges of values of the parameters of the environment . in the second casewe will explore the drop in performance caused by an imperfect modeling of the environment , i.e. when there is a mismatch between the true environment parameters and those in use by the agent .both of these problems are of great practical relevance : it is essential to know the range of parameter values in which infotaxis remains an efficient search strategy , and likewise it is important to know how much uncertainty in the estimation or measurement of the parameters of the transport process can be allowed . while some of these questions have been briefly addressed in the recent literature , a thorough and systematic analysis as the one performed in this work was absent .it should be stressed at this point that our implementation of the infotaxis algorithm includes one differential feature from the ones considered in the literature . in previous studies a _ first passage _criterion was typically used , i.e. the search terminates when the position of the agent coincides for the first time with the position of the source .instead , we have used a _ first hit _ criterion : the search terminates when the entropy falls below a given threshold , i.e. when the agent has sufficient certainty about the position of the source .the reason to use this criterion is twofold .first , the agent needs no external information about the source : it decides to halt based on its own computations and measurements .second , it allows detection at a distance , i.e. successful searches when the agent knows where the source is , even if it is a distance away from it .this criterion emphasizes vicinity in the information rather than the spatial sense .note that with our criterion it could happen that the agent passes on top of the source without actually knowing it , and the search would continue . 
in practice , however , it usually happens that when the agent first passes by the source , it decides not to move and the entropy rapidly decreases below the threshold , signaling the source detection . in some extreme cases , such as those studied in this work , deviations from this standard behavior could happen . in order to assess the performance of infotaxis as an efficient search strategy , several measures can be used . the most obvious one is the _ rate of success _ , which of course involves a proper definition of successful / failed searches . we shall consider a search to have failed if the search time exceeds an upper bound , or if the maximum of the probability distribution when the entropy falls below the detection threshold does not coincide with the real position of the source . the next measure of performance is the mean search time , together with its fluctuations . the motivation of this work is geared towards applications in the development of future sniffers and their use for resolving practical problems . the paper is organized as follows : after a brief review of the infotaxis algorithm in section [ sec : infotaxis ] , we discuss its performance as a function of the parameters of the environment in section [ sec : param_analisis ] . in section [ sec : misspecifications ] we perform a quantitative analysis of the drop in performance due to an imperfect modelling of the transport process in the environment . finally , a discussion of the results is presented in section [ sec : summary ] . in this section we briefly describe the infotaxis search algorithm , and refer the interested reader to ref . for more details and insights ( see also section ii of ) . infotaxis was designed as an olfactory search strategy that is able to find the location of a target that is emitting chemical molecules into an environment which is assumed to be turbulent . by decoding the trace of detections and non - detections of such chemicals , the infotactic searcher solves a bayesian inference problem to reconstruct at each time a probabilistic map for the position of the target . this map , commonly named _ belief function _ in the context of information theory , is refined in time by the searcher by choosing its movements as those that maximize the local gain of information . a suitable indicator of a successful search is the shannon entropy associated with the belief function , which approaches zero when the belief function becomes a delta function located at the position of the target . the infotaxis search strategy has two key elements : on one hand the average rate of detections , which is a function of the searcher's position and the assumed target's position , and on the other hand the belief function itself . the rate function models how the chemicals emitted at a position are transported by the environment , and it is usually taken to be the solution of an advection - diffusion equation in free space .
in two dimensions , the rate function becomes where is the rate of emission of chemicals , is their isotropic effective diffusivity , is the characteristic size of the searcher , is the modified bessel function of order 0 , and the _ correlation length _ , given by where is the lifetime of the emitted molecules , and the mean current or wind ( which blows , without loss of generality , in the negative -direction ) . the correlation length can be interpreted as the mean distance traveled by a volatile particle before it decays . the rate function is used by the bayesian inference analysis , weighting the actual number of detections with the expected one , to reconstruct the belief function representing the searcher's knowledge about the target's location . this function is a time - varying quantity that is updated , given the trace of detections at time , using the bayes formula . if one assumes statistical independence of successive detections ( i.e. a poisson process ) , the probability function at time posterior to experiencing a trace is given by : where and is the total number of detections registered by the searcher at successive times . the searcher uses the belief function , choosing its movements not towards the most probable value of but to the position at which the expected gain of information about the target's position is maximized . assuming that the search domain is a square lattice and quantifying the uncertainty of the searcher about the target's position with the shannon entropy associated with , the maximization process means that the searcher moves from its current position at time to a neighboring position at time , for which the decrease in entropy is largest . the expected variation of entropy upon moving from to is given by where is the probability of having detections during the time , with the mean number of detections at position , and is the expected reduction in entropy assuming that there will be detections during the next movement . the first and second terms in eq . ( [ deltas ] ) evaluate , respectively , the reduction in entropy if the target is found or not at in the next step . therefore , eq . ( [ deltas ] ) naturally represents a balance between exploitation and exploration . the numerical experiments reported in the rest of this paper are set as follows : at time the search starts with a uniformly distributed belief function , _ i.e. _ , the searcher is totally ignorant about the target's position . the initial state is therefore of maximal entropy . the search ends when the shannon entropy takes a value below a certain threshold , which we set to ( first hitting time criterion ) .
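as a rough illustration of the decision rule of eq . ( [ deltas ] ) , the python sketch below evaluates the expected entropy change for staying put or moving to one of the four neighbouring lattice sites and returns the move with the largest expected entropy decrease . the detection - rate model is passed in as a callable standing for the rate function above , the poisson sum is truncated at a few detections , and the simplifications and names are ours , not the authors' implementation .

```python
import numpy as np
from math import factorial

def entropy(p):
    q = p[p > 0]
    return -np.sum(q * np.log(q))

def posterior(prior, rates, k, dt):
    # bayesian update of the belief after observing k detections during a time dt
    likelihood = np.exp(-rates * dt) * (rates * dt) ** k
    post = prior * likelihood
    return post / post.sum()

def choose_move(prior, pos, rate, dt, k_max=3):
    # prior : 2 - d array of source probabilities ; rate(xs, ys, r) : expected
    # detection rate at position r if the source sat at cell ( xs , ys )
    shape = prior.shape
    xs, ys = np.indices(shape)
    s0 = entropy(prior)
    best_move, best_ds = pos, np.inf
    for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        r = (pos[0] + dx, pos[1] + dy)
        if not (0 <= r[0] < shape[0] and 0 <= r[1] < shape[1]):
            continue                              # reflecting boundary : skip outward moves
        rates = rate(xs, ys, r)
        p_found = prior[r]                        # probability the source is at r itself
        mean = float(np.sum(prior * rates)) * dt  # expected number of detections at r
        exp_ds = p_found * (0.0 - s0)             # entropy drops to zero if the source is found
        for k in range(k_max + 1):
            p_k = np.exp(-mean) * mean ** k / factorial(k)
            exp_ds += (1 - p_found) * p_k * (entropy(posterior(prior, rates, k, dt)) - s0)
        if exp_ds < best_ds:
            best_move, best_ds = r, exp_ds
    return best_move
```

the two terms accumulated in exp_ds correspond to the exploitative and explorative contributions discussed above : the first rewards moving to likely source locations , the second rewards moves whose expected detections sharpen the belief function .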
during the search the associated entropy approaches zero , not necessarily monotonously , as the belief function gets narrower and under very general circumstances it becomes a delta peak centered at the target s location .we will show however that this may not always be the case .this motivates us to distinguish two different situations for an unsuccessful search : when the entropy threshold is reached but the maximum of the belief function does not coincide with the position of the source ( type i ) , and when the search exceeds the maximum time limit without reaching the entropy threshold ( type ii ) .we first study the dependence of the search time on the different parameters involved in the environment model , namely the diffusion coefficient determining the typical size of the area the searcher agent explores between successive updates of the belief function , the emission rate related to the amount of information the searcher can receive through the detections and the wind speed that breaks the symmetry of the search by distinguishing the regions of the search domain where the target is most likely located .we recall that changes in and modify the correlation length eq .[ eq : lambda ] , that roughly speaking , determines the way in which the searcher approaches the target .naturally , the modification of any of these parameters is reflected on the balance between the explorative and exploitative tendencies of infotaxis . to be precise , we consider a search domain consisting on a two - dimensional lattice of size with reflecting boundary conditions , meaning that if at any instant the agent is located on the boundary of the search domain the movement pointing outward is supressed . in the numerical experiments reported in this section , the targetis located at coordinates and the searcher is placed initially at .all positions are given with respect to the central lattice site of the search domain .furthermore , we impose that the search starts at time with the searcher having registered one detection .the size of the searcher is set to and the molecule s life time to . in this sectionwe study the variation of the search time with in the absence of wind .note that with this choice any change in corresponds to a quadratic change in the correlation length ( see eq .[ eq : lambda ] ) .these results are shown in fig .[ fig : d ] , where we can distinguish two different regimes : at small diffusivities the search time decreases two orders of magnitude as , reaching a minimum value at . at larger diffusivities increases and saturates at .both regimes can be understood simply in terms of the variation of the correlation length . at small diffusivitiesthe correlation length is small , meaning that the effective area inside which the bayesian inference has an effect is small compared to the whole domain .as increases increases and the search becomes more effective as this implies an increase of the effective region where the searcher explores to find the source s position , enhancing the searcher s `` field of vision '' . for correlation length becomes of the same order of the length of the domain ( ) , and the minimum search time is attained . at this pointthe exploitative terms in infotaxis become important . 
in the second regime , where the correlation length becomes larger than the search domain , the infotactic search loses resolution , as larger values of imply further uncertainty about the source position , and the search time increases again and saturates . in the presence of wind , the same qualitative behaviour is expected ., obtained as an average over trajectories . the different symbols identify the direction of the initial step the searcher takes at time : circle ( down ) , square ( left / right ) and diamond ( up ) . the error bars correspond to the data s standard deviation . the rest of the parameters were set to , , and .,scaledwidth=45.0% ] it is interesting to note that the fluctuations around the search time also behave differently in these two regimes . the behaviour of the fluctuations was recently studied in , and associated with the direction of the initial step taken by the searcher . there it was found that the initial step in infotaxis is fully determined by the geometry of the boundary and by the searcher s proximity to it , forming a partition with elements of similar initial behaviour . more importantly , the area and shape of the elements of the partition were mainly affected by the value of the correlation length . therefore , for a fixed initial position of the searcher , a variation in might change its initial step , and the different symbols in fig . [ fig : d ] distinguish this initial behaviour . the increase of the fluctuations around the search time in the regime of large diffusivity is in agreement with our previous findings in . we now turn our attention to the dependence of the search time on the wind speed . we show this in fig . [ fig : v ] for two different starting positions of the searcher : ( solid symbols ) corresponding to a searcher starting inside the region of frequent detections , and ( empty symbols ) at which the searcher is in a region of low detections . the presence of wind breaks the radial symmetry of the search and , more importantly , changes the correlation length . this will affect not only the mean search time but also its fluctuations , as discussed in . however , the search time does not seem to change much with the variation of the wind speed . moreover , we observe that the dependence of on the wind speed is qualitatively the same irrespective of the starting position of the searcher . for two different starting positions of the searcher : ( solid symbols ) ( empty symbols ) . the rest of parameters were set to , , and ., scaledwidth=45.0% ] larger emission rates mean that the source emits more information about its presence to the environment , which in turn implies that the searcher will have more information about the source . this is what we observe in fig . [ fig : gamma ] , where the search time decreases with increasing emission rate , independently of the magnitude of the wind . interestingly , we find that at large emission rates the search lasts less at zero wind than in its presence . at first sight this appears counterintuitive , since the presence of wind acts as an additional source of information about the direction in which the source is located . however , we have found that these longer search times in the regime of large are due to the additional time the searcher spends during the initial explorative zigzagging motion when it is far from the source and the detections are scarce .
in the absence of wind the searcher tends to move directly to the center of the domain , thus closer to the source and to the region in which the detections are more frequent . for ( solid symbols ) and ( empty symbols ) . the rest of the parameters were set to , and .,scaledwidth=45.0% ] we finish this section by discussing the evaluation of the entropy variation involved in each of the possible searcher movements ( eq . [ deltas ] ) . the numerical computation of eq . [ deltas ] requires truncating the infinite sum corresponding to the weighted probability of having any possible number of detections during the searcher motion from to . we do this by summing all terms until the cumulative probability of detections reaches a value close to ( in our computations ) . however , at high emission rates the mean number of detections increases drastically , demanding many more terms in the infinite sum and entailing a significant increase in the computational cost . to keep infotaxis computationally efficient we have approximated the entropy variation of eq . [ deltas ] by truncating the infinite sum at a maximum number of detections , irrespective of the value of the cumulative probability , and found some interesting aspects of the infotactic search that we discuss now ( a short numerical sketch of both truncation rules is given below ) . in fig . [ fig : gamma_ii ] we show the dependence of on the emission rate in the absence of wind , truncating the sum in eq . [ deltas ] to and the rest of the parameters as in fig . [ fig : gamma ] . comparing these two figures we observe that at low emission rates both numerical procedures lead to the same results , since the cumulative probability of detections is one for . at high emission rates this is no longer true . nevertheless , we find that the infotactic searches remain successful , albeit with a much larger search time . in the absence of wind and for a truncated sum in eq . [ deltas ] with . the rest of the parameters were set to , and .,scaledwidth=45.0% ] ( upper row ) and with a wind of ( lower row ) . the rest of the parameters were set to , , and .,scaledwidth=45.0% ] ( upper row ) and with a wind of ( lower row ) . the rest of the parameters were set to , , and .,scaledwidth=45.0% ] to understand the consequences of this approximate truncation of the infinite sum we have studied the global topology of the search trajectories . in fig . [ fig : maps_gamma ] we show the density of visited sites of the trajectories that lead to a successful search in the absence ( upper row ) and presence ( lower row ) of wind . surprisingly , under this approximation we observe that the belief function peaks exactly at the source even though the searcher never reaches the source position but gets stuck away from it . this is evidenced in the density of visited sites in the right panels of fig . [ fig : maps_gamma ] . as a matter of fact , we have found that the searcher remains for long times over the density curve corresponding to . in this region the agent registers a number of detections that would correspond to being very close to the source , thus changing from an explorative search to an exploitative one , emphasizing a major contribution of the first term of eq . [ deltas ] in the decision - making process . this is evidenced by comparing the highest density of visited sites on the right column of fig . [ fig : maps_gamma ] with the shape of the corresponding mean concentration field shown in the left column of the same figure .
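as announced above , the following minimal sketch contrasts the two truncation rules for the poisson - weighted sum in eq . ( [ deltas ] ) : the adaptive rule that stops once the cumulative probability is essentially one , and the cheap fixed cut at a maximum number of detections . the threshold values are illustrative choices of mine , not the ones used in the simulations .

```python
from math import exp, factorial

def poisson_weights_by_mass(mu, mass=0.999):
    """accumulate rho_k = e^{-mu} mu^k / k! until the cumulative probability
    reaches `mass` (the adaptive rule, cheap at low emission rates)."""
    weights, k, total = [], 0, 0.0
    while total < mass:
        rho = exp(-mu) * mu**k / factorial(k)
        weights.append(rho)
        total += rho
        k += 1
    return weights

def poisson_weights_fixed(mu, k_max=10):
    """truncate at a fixed number of detections, irrespective of the
    cumulative probability (the approximation used at high emission rates)."""
    return [exp(-mu) * mu**k / factorial(k) for k in range(k_max + 1)]

# at small mean counts the two rules coincide; at large mean counts the fixed
# cut discards most of the probability mass, which is the regime discussed above
for mu in (0.5, 5.0, 50.0):
    print(mu, len(poisson_weights_by_mass(mu)), sum(poisson_weights_fixed(mu)))
```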
notwithstanding this stalling of the searcher away from the source , the bayesian inference continues to refine the belief function by performing a probabilistic triangulation from a distance , until it becomes a peaked distribution over the source position . this surprising effect stresses one of the most important sources of the robustness of the infotaxis search strategy : locating the source is possible even if the searcher never reaches its position . in this section we focus on the performance of infotaxis , as measured by the success rate and mean search time , when the searcher does not have exact knowledge of the parameters of the transport process . it is natural to expect a drop in performance in this regime , but we are interested in a quantitative analysis . it is hard to overemphasize how important this matter is for practical purposes , as measuring devices introduce some uncertainty in the best case , and other parameters that are harder to measure can only be estimated . we begin our performance analysis with the misspecification in the correlation length parameter , fig . [ fig : fpt_d_est ] . we recall that is defined in , so we will keep the rest of the parameters constant and let the diffusion coefficient change . we shall denote by the diffusion coefficient used by the searcher for its bayesian inference and the true diffusivity of the transport process ( and likewise for the rest of the parameters ) . our results ( see figure [ fig : fpt_d_est ] ) show that for the performance of infotaxis is largely unaffected by the mismatch : the success rate is close to and the mean search time is close to the case of perfect knowledge . as in the previous section , when , the searcher assumes that the information collected during the search process comes from a region larger than it really is , causing slower learning of the source position and thereby a larger search time . however , such an increase in the search time is hardly observed in this case , due to the dilute conditions of the search and the particular starting position of the searcher chosen for the numerical simulations ( its first step at low detection rate is persistently directed towards the source ) . in contrast , an underestimation of causes a drastic drop in performance . this is especially evident when is less than or of the order of the initial distance of the agent to the boundary of the search domain . in these cases , the initial step of the search changes and the search time increases because the searcher explores the space and learns about the source position in steps smaller than it should . we should remark that all the unsuccessful searches occur when is underestimated , and they correspond to type i failures : the maximum of the belief function when the entropy threshold is reached does not coincide with the true position of the source . .left panel : success rate .right panel : search time .rest of parameters : , , and .,title="fig:",scaledwidth=25.0% ] .left panel : success rate .right panel : search time .rest of parameters : , , and .,title="fig:",scaledwidth=25.0% ] perhaps the most interesting parameter to analyze is the rate of odor emission . it is worth stressing that while the other parameters of the transport process , such as the diffusivity and the wind velocity , can be measured with appropriate equipment , the rate of emission of volatile particles that are transported by the medium is harder to measure and subject to greater variability , e.g.
if infotaxis is used by a robotic agent to find the source of a plague in a crop field , the emission rate of volatiles will depend on the biological state of the infected plant .figures [ fig : success_gamma_est ] and [ fig : fpt_gamma_est ] show the success rate and the variation of the search time as a function of the mismatch in for two different emission regimes ( i.e. two different values of ) .the first clear observation when looking at fig .[ fig : success_gamma_est ] is that , as opposed to the results exhibited in section [ sec : param_analisis ] , the search is not always successful .indeed , there is a window of values of centered around the perfect knowledge ( ) where infotaxis is still feasible .this window corresponds to the interval $ ] and seems to be independent of the value of . on both sides of the window of admissible estimated values of search fails for two different reasons .an underestimation of ( ) leads to type i failures , while an overestimation of leads to type ii failures .the mean search time of the successful searches reaches a minimum in the case of perfect knowledge and grows on both sides , as shown in figure [ fig : fpt_gamma_est ] for two different values of .the reason of this increase in mean search time is a deficient bayesian inference , as the agent believes the source to be farther away or much closer than it really is for a given rate of detection . in other words, the bayesian inference converts a given rate of detection to a given distance of the agent to the source , and this conversion is not accurate when the estimated value of in use by the agent differs from the real one . .white : .rest of parameters : , and .,scaledwidth=45.0% ] we have explored in greater detail the patterns of motion of the searcher under the two situations , underestimation and overestimation of . in figure[ fig : maps_gamma_est ] we plot the density of sites visited by the searcher in 100 trajectories starting from the same initial position with the source in the same position . in the left panel , corresponding to the agent spends most of the time exploring its vicinity well away from the source .it shows a random motion similar to the final steps of an infotactic search in the vicinity of the source .underestimation of causes the agent to believe that the source is much closer than it really is . on the other hand ,when is overestimated , the agent believes the source to be farther than it is , and often the belief function concentrates on the domain boundary , specially in the top corners .this explains the density of sites visited by the searcher in the right panel of fig .9 , where most of the trajectories involve more deterministic and persistent motions , as in the initial steps of a normal search when the agent is far away from the source . due to the symmetry of the problem, the belief function concentrates for some time in one corner , but then it shifts to the other corner as the searcher approaches it and discovers that the source is not there .the searcher enters into a loop that ends up in a frozen position , due to an effect similar to the one described in section iii.3 , which is caused by an underestimation of ( the agent registers much fewer detections than the number it expects from its belief function ) . as a result, the search terminates in a type ii failure , as the maximum time is reached before the entropy falls below the detection threshold . 
.right panel : .rest of parameters : rest of parameters : , and .,title="fig:",scaledwidth=25.0% ] .right panel : .rest of parameters : rest of parameters : , and .,title="fig:",scaledwidth=25.0% ] .right panel : overestimation of the emission rate : rest of parameters : rest of parameters : , and .,title="fig:",scaledwidth=25.0% ] .right panel : overestimation of the emission rate : rest of parameters : rest of parameters : , and .,title="fig:",scaledwidth=25.0% ]we have studied the performance of the infotaxis search strategy as a function of the parameters of the transport process as well as its performance with respect to an inaccurate modeling of the environment .we have assessed these questions by means of intensive numerical simulations , and we have shown the variation of the search time and the success rate of infotaxis in all the different cases . in our implementation of infotaxiswe use the first hit as opposed to the first passage criterion , i.e. vicinity in information rather than physical space .we have shown , in accordance with the previous literature , that the search time shows strong dependence not only of the initial step of the search , but also in the way in which the searcher explores the environment ( mainly determined by the correlation length ) and exploits the information collected during the search process . in the case of a perfect knowledge of the environment , we find that the searches are always successful but the mean search time changes with the parameters of the transport process . as a function of the correlation length , the mean search time reaches a minimum value when has the size of the search domain .the dependence of the mean search time of the wind velocity is very mild as well as its dependence on the initial position of the agent relative to the source and wind direction . the mean search time decreases with emission rate , as information is released to the agent at a higher rate. however , at very high the computational complexity of the algorithm increases , and we have found that simplifying the computation still leads to succesful searches even when the agent never reaches the source .we have studied the drop in performance of infotaxis caused by an imperfect modelling of the environment expressed through an inaccurate estimation of the parameters of the transport process .our results show that in practical cases it is safer to overestimate the correlation length than to underestimate it , as in the former case no significant drop in performance occurs while in the latter the sucess rate quickly drops .the situation is different when the mismatch between real and estimated value occurs for the emission rate . in this casethere is a window around the real value where infotaxis remains robust , but overestimation or underestimation by a factor of two leads to a rapid decay in performance , with lower success rates and higher mean search times .our results places some limits on the performance of infotaxis , and have practical consequences for the design of future infotaxis based machines to track and detect an emitting source of chemicals or volatile substances . this work has been supported by grant no .245986 of the eu project robots fleets for highly agriculture and forestry management .was also supported by a picata predoctoral fellowship of the moncloa campus of international excellence ( ucm - upm ) .the research of d.g.u . 
has been supported in part by spanish mineco - feder grants no .mtm2012 - 31714 and no .fis2012 - 38949-c03 - 01 .we acknowledge the use of the upc applied math cluster system for research computing ( see http://www.ma1.upc.edu/eixam/index.html ) .cmm has been supported by the spanish micinn grant mtm2012 - 39101-c02 - 01 .r. m. c. jansen , j. wildt , i. f. kappers , h. j. bouwmeester , j. w. hofstee , e. j. van henten , detection of diseased plants by analysis of volatile organic compound emission _ annu . rev .phytopathol . _ * 49 * , 157 ( 2011 ) .j. duque rodrguez , j. gutirrez lpez , v. mndez fuentes , p. barreiro elorza , d. gmez - ullate , c. meja - monasterio , search strategies and the automated control of plant diseases , in proceedings of first international conference on robotics and associated high - technologies and equipment for agriculture , pisa 2012 , pp. 163168 .
|
we study the performance of the infotaxis search strategy , measured by the rate of success and the mean search time , under changes in the environment parameters such as the diffusivity , rate of emission or wind velocity . we also investigate the drop in performance caused by an inaccurate modelling of the environment . our findings show that infotaxis remains robust as long as the estimated parameters fall within a certain range around their true values , but the success rate quickly drops , making infotaxis no longer feasible , if the searcher agent severely underestimates or overestimates the real environment parameters . this study places some limits on the performance of infotaxis , and thus has practical consequences for the design of infotaxis - based machines to track and detect an emitting source of chemicals or volatile substances .
|
complex networks arisen in natural and manmade systems play an essential role in modern society .many real complex networks were found to be heterogeneous with power - law degree distributions : , such as the internet , metabolic networks , scientific citation networks , and so on .because of the ubiquity of scale - free networks in natural and manmade systems , the security of these networks , i.e. , how well these networks work under failures or attacks , has been of great concern .recently , a great deal of attention has been devoted to the analysis of error and attack resilience of both artificially generated topologies and real world networks .also some researchers use the optimization approaches to improve the network s robustness with percolation theory or information theory .there are various ways in which nodes and links can be removed , and different networks exhibit diverse levels of resilience to such disturbances .it has been pointed out by a number of authors that scale - free networks are resilient to random failures , while fragile to intentional attacks .that is , intentional attack on the largest degree ( or betweeness ) node will increase the average shortest path length greatly . while random networks show similar performance to random failures and intentional attacks .the network robustness is usually measured by the average node - node distance , the size of the largest connected subgraph , or the average inverse geodestic length named _efficiency _ as a function of the percentage of nodes removed .efficiency has been introduced in the studies of small world networks and used to evaluate how well a system works before and after the removal of a set of nodes .the network structure and function strongly rely on the existence of paths between pairs of nodes .different connectivity pattern between pairs of nodes makes the network different performance to attacks .rewiring edges between different nodes to change the topological structure may improve the network s function . as an example , consider the simple five nodes network shown in fig .1 . the efficiency of fig .1(a ) is equal to 8/25 , while it is improved to 7/20 in fig .1(b ) by rewiring . andwe know that fig .1(b ) is more robust than fig .1(a ) to random failures .a natural question is addressed : how to optimize the robustness of a network when the cost of the network is given .that is , the number of links remains constant while the nodes connect in a different way .should the network have any particular statistical characters ?this question motivates us to use a heuristic approach to optimize the network s function by changing the network structure .the paper is organized as follows : we firstly present mts method in section 2 and the numerical results are shown in section 3 .then we construct a simple model to describe the optimal network and discuss one of the important dynamic processes happening on the network , synchronization , in section 4 .finally , we give some insightful indications in section 5 .generally , a network can be described as an unweighted , undirected graph . such a graph can be presented by an adjacency binary matrix . 
if and only if there is an edge between node and .another concerned matrix , named distance matrix , consists of the elements denoting the shortest path length between any two different nodes .then the efficiency between nodes and can be defined to be inversely proportional to the shortest distance : .the global efficiency of the network is defined as the average of the efficiency over all couples of nodes . with the above robustness criterion in mind , we can define the optimization problem as follows : the above problem is a standard combinatorial optimization problem , for which we can derive good ( though usually not perfect ) solutions using one of the heuristic algorithms , tabu search , which is based on memory ( mts ) .mts is described as follows : : generate an initial random graph with nodes , edges .set ; . compute the efficiency of denoted by . : if a prescribed terminal condition is satisfied , stop , otherwise random rewiring : specifically , a link connecting node and is randomly chosen and substituted with a link from to node , not already connected to , extracted with uniform probability among the nodes of the network , note the present network and the efficiency . : if , , , else if , , else if does not satisfy the tabu conditions , then , else .go to .the following condition is used to determine if a move is tabu : , which is the percentage improvement or destruction that will be accepted if the new move is accepted .thus , the new graph at is assumed tabu if the total change in the objective function is higher than a percentage . in this paper , is a random number generated between 0.50 and 0.75 .the terminal condition is that the present step is getting to the predefined maximal iteration steps .many real networks in nature and society share two generic properties : scale - free degree distribution and small - world effect ( high clustering and short path length ) .another important property of a network is the degree correlation of node and its neighbors .it is called _ assortative _ mixing if high - degree nodes are preferentially connected with other high - degree nodes , and _ disassortative _ mixing if high - degree nodes attach to low - degree nodes .newman proposed a simple measure to describe the mixing pattern of nodes , which is a correlation function of the degrees .the empirically studied results show that almost all the social networks show assortive mixing pattern while other technological and biological networks are disassortative .the statistical properties are clearly described in refs .we start from a random graph with size and .the terminal condition is the maximal iteration step reaching 1000 .it should be noted that for each step of the objective function being improved , we record the statistical properties of the present network. a typical run of statistical results are shown in figs . 2 and 3 .2 shows that with the increase of efficiency , the average shortest path length becomes short and the maximal degree becomes large , indicating that hub nodes develop to be present with the evolving process . with the increase of the efficiency ,the hub node develops to be the most important one to connect with almost other nodes in the network . 
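for reference , the mts recipe listed above can be condensed into a short python sketch . the graph handling relies on networkx , whose global_efficiency routine implements the average inverse shortest - path length used here as the objective ; the acceptance rule for non - improving rewirings is my reading of the tabu condition with the random threshold drawn between 0.50 and 0.75 , and the problem sizes are illustrative rather than the ones used in the simulations .

```python
import random
import networkx as nx

def random_rewire(G):
    """pick an existing edge (i, j) and reconnect j's end to a node k that is
    not already a neighbour of i, keeping n and the number of edges fixed."""
    H = G.copy()
    i, j = random.choice(list(H.edges()))
    candidates = [k for k in H.nodes() if k not in (i, j) and not H.has_edge(i, k)]
    if not candidates:
        return H
    H.remove_edge(i, j)
    H.add_edge(i, random.choice(candidates))
    return H

def memory_tabu_search(n=100, m=200, steps=1000, seed=0):
    """keep the best graph found so far; accept improving rewirings, and accept
    a worsening one only if the relative loss in efficiency stays below a
    random tabu threshold drawn uniformly in [0.50, 0.75]."""
    random.seed(seed)
    G_best = nx.gnm_random_graph(n, m, seed=seed)
    E_best = nx.global_efficiency(G_best)
    G_cur, E_cur = G_best, E_best
    for _ in range(steps):
        G_new = random_rewire(G_cur)
        E_new = nx.global_efficiency(G_new)
        if E_new > E_best:
            G_best, E_best, G_cur, E_cur = G_new, E_new, G_new, E_new
        elif E_new >= E_cur:
            G_cur, E_cur = G_new, E_new
        else:
            alpha = random.uniform(0.50, 0.75)       # tabu threshold from the text
            if (E_cur - E_new) / E_cur < alpha:      # small enough loss: not tabu
                G_cur, E_cur = G_new, E_new
    return G_best, E_best
```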
for degree correlation coefficient ( fig2 .( e ) ) , it decreases in the whole process from zero to negative , which indicates that the nodes with high degree preferentially connect with the ones with low degree .still , for the clustering coefficient , it increases to a high value 0.6 and the network gets to be a highly clustering network . for the degree distribution , the cumulative degree distribution is shown in fig .3 . to check the optimal network s tolerance to random failures, we show the efficiencies of both initial random network and the optimal network versus the fraction of removed nodes in fig .it can be clearly observed that compared with the initial random network , the optimal network s robustness to errors is greatly improved .to provide a simple way to describe the properties of the optimal network , we consider to construct the network model directly . with a nongrowing network modelit can be constructed in the following way .\(a ) start from a random network with nodes , which can be implemented by rewiring edges of a regular graph with probability .\(b ) choose nodes as hub nodes randomly from the whole network with equal probability .\(c ) add edges randomly .one end point of the edge is selected randomly from the hub nodes and the other is chosen randomly from the network .in such a way , the network evolves to possess the statistical properties of the optimal network .this can be seen from fig .the primary goal of our simulation is to understand how the statistical properties of the network change with the process of adding edges .the construction of the model is similar to the two - layer model introduced by nishikawa _ et .al _ in ref .the main difference is that the initial network in our model is a random network different from that of a regular network . to show the effect of the parameter , we also present simulation results versus in fig . 6 . with the increase of the parameter , the network becomes less heterogeneous and more homogeneous , it s natural to observe that both the efficiency of the network and the maximal degree reduce .so both the average path length and the correlation coefficient increase in the homogeneous phase compared with the values in the heterogeneous phase .since the optimal network shows a strong heterogeneity , a small parameter of is reasonable . we know that most of the real world networks share the character of small - world effect and some degree of heterogeneity , so these networks are robust to random failures and they are also efficient in exchanging information .then we consider the synchronization of the network model , how does the network s synchronizability change with the adding of edges ?synchronization has been observed in diverse natural , social and biological systems .consider a network consisting of identical oscillators coupled through the edges of the network .the dynamics of each individual oscillator is controlled by and is the output function .thus , the equations of motion are as follows : where governs the dynamics of individual oscillator , is the coupling strength , and is the laplacian matrix of the network . it has been shown that the eigenvalue ratio is an essential measure of the network synchronizability , the smaller the eigenvalue ratio , the easier the network to synchronize .the progress in the studies of the relationship between topological structure and synchronizability can be found in refs . 
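a compact sketch of the construction ( a ) - ( c ) , together with the laplacian eigenvalue ratio used as the synchronizability measure , reads as follows ; the rewiring probability , the number of hub nodes and the number of added edges are illustrative choices of mine , not the values used in the simulations .

```python
import random
import numpy as np
import networkx as nx

def hub_model(n=200, k=4, p=0.1, n_hubs=5, n_extra=100, seed=0):
    """(a) randomize a regular ring by rewiring each edge with probability p,
    (b) pick n_hubs hub nodes uniformly at random, (c) add n_extra edges, each
    joining a random hub to a random node of the network."""
    rng = random.Random(seed)
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=seed)
    hubs = rng.sample(list(G.nodes()), n_hubs)
    added = 0
    while added < n_extra:
        u, v = rng.choice(hubs), rng.randrange(n)
        if u != v and not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
    return G

def eigenratio(G):
    """synchronizability measure lambda_N / lambda_2 of the graph laplacian
    (the smaller the ratio, the easier the network is to synchronize)."""
    lam = np.sort(nx.laplacian_spectrum(G))
    return lam[-1] / lam[1]

G = hub_model()
print(nx.global_efficiency(G), eigenratio(G))
```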
to discuss the synchronizability of the network model, we show the eigenvalue ratio versus the number of adding edges and the number of adding hub nodes in fig . 5 ( f ) and fig . 6 ( f ) respectively .note that in fig .5 ( f ) , the case for corresponds to the highest heterogeneity .the eigenvalue ratio increases with greatly , which means that the network becomes more difficult to synchronize with strong heterogeneity , even for short path length . in fig .6 ( f ) , the network synchronizability is improved with the increase of .these can all be explained as strong heterogeneity reduces network s synchronizability , which is consistent with the conclusion of nishikawa _ et .al _ who has pointed out that networks with homogeneous distributions are more synchronizable than heterogeneous ones . we can conclude that with the introduction of heterogeneity , though the network robustness to random failures and the efficiency of information exchange on the network are greatly improved , the network s synchronizability is really reduced .one ultimate goal of studies on complex networks is to understand the relationship between the network structure and its functions . to get the optimal strategies of a given function, we should evolve the structure with its function dynamically , which can be realized with optimization approaches . what special characters should the network have with a given function ?this problem motivates us to explore the relationship between the structure properties and functions and then get some insightful conclusions . by optimizing the network structure to improve the performance of the network resilience , we obtain the optimal network and do some statistics of the optimal network .we find that during the optimizing process , the average shortest path length becomes short .the increase of the maximal degree of the network indicates the hub nodes appearance .the degree correlation coefficient decreases and is always less than zero , which indicates that nodes with high degree preferentially connect with the low degree ones .the clustering coefficient increases in the whole process and arrives to a high level , then the network shows a high degree of clustering . as we all knowthat most of the real - world networks in social networks show high clustering , short path length and heterogeneity of degree distributions , which may indicate their good performance to random failures and high efficiency of information exchange. then we present a nongrowing network model to try to describe the statistical properties of the optimal network and also analyze the synchronizability of the network .and we find that although the network s robustness to random failures and the efficiency of information exchange are greatly improved ( for the average distance of the network is small ) , the network s synchronizability is really reduced for the network s strong heterogeneity . in summary, we try an alternative point of view to analyze the robustness of the network from its efficiency . by optimizing the network efficiencywe find that a network with a small quantity of hub nodes , high degree of clustering may be much more resilient to perturbations . andthe results strongly suggest that the network with higher efficiency are more robust to random failures , though its synchronizability is being reduced greatly .00 r. albert , a .-barabsi , rev .74 ( 2002 ) 47 ; m. e. j. newman , siam review 45(2003)167 ; s. n. dorogovtsev , j. f. f. mendes , adv .strogatz , nature 410 ( 2001 ) 268 .r. 
albert , h. jeong , a .-barabsi , nature 406 ( 2000 ) 378 .r. cohen , k. erez , d. ben - avraham , s. havlin , phys .85 ( 2000 ) 4626 .r. cohen , k. erez , d. ben - avraham , s. havlin , phys .86 ( 2001 ) 3682 .p. crucitti , v. latora , m. marchiori , a. rapisarda , physica a 340 ( 2004 ) 388 .p. crucitti , v. latora , m. marchiori , a. rapisarda , physica a 320 ( 2004 ) 622 .p. holme , b. j. kim , phys .e 65 ( 2002 ) 056109 . l. k. gallos , p. argyrakis , a. bunde , r. cohen , s. havlin , physica a 344 ( 2004 ) 504 .d. magoni , ieee j. selected .area commun , 21 ( 2003 ) 949 .t. zhou , g. yan , b .- h .wang , phys .e 71 ( 2005 ) 046141 .a. x. c. n. valente , a. sarkar , h. a. stone , phys92 ( 2004 ) 118702. t. tanizawa , g. paul , r. cohen , s. havlin , h. e. stanley , phys .e. 71 ( 2005 ) 047101 .g. paul , t. tanizawa , s. havlin , h. e. stanley , eur .j. b 38 ( 2004 ) 187 .d. s. callaway , m. e. j. newman , s.h .strogatz , d. j. watts , phys .85 ( 2000 ) 5468 .b. wang , h .- w .tang , c .- h .xiu , ( accepted by _ physica a _ ) doi : 10.1016/j.physa.2005.08.025 .liu , z .- t .wang , y .- z .dang . mod .b 19 ( 2005 ) 785 .v. latora , m. marchiori , phys .87 ( 2001 ) 198701 .ji , h. -w .tang . applied mathematics and computation .159 ( 2004 ) 449 .p. p. zhang , et al , physica a , 359 ( 2006 ) 835 .l. a. n. amaral , a. scala , m. barthlmy , h.e .stanley , proc .97 ( 2000 ) 11149 .m. e. j. newman , phys .( 2002 ) 208701 .m. e. j. newman , eur .j. b 38 ( 2004 ) 321 .t. nishikawa , a. e. motter , y .- c .lai , f.c .91 ( 2003 ) 014101 .m. barahona , l. m. pecora .( 2002 ) 054101 .b. wang , h .- w .tang , t. zhao , z .-xiu , cond - mat/0512079 .l. donetti , p. i. hurtado , m. a. muoz , phys .95 ( 2005 ) 188701 .t. zhou , m. zhao , b .- h .wang , cond - mat/0508368 .t. nishikawa , a. e. motter , y. -c .lai , f. c. hoppensteadt , phys .91 ( 2003 ) 014101 .m. zhao , t. zhou , b. -h .wang , w. -x .wang , phys .e 72 ( 2005 ) 057102 .h. hong , b. j. kim , m. y. choi , h. park , phys .e 69 ( 2004 ) 067105 .m. di bernardo , f. garofalo , f. sorrentino , cond - mat/0504335 .p. n. mcgraw , m. menzinger , phys .e. 72 ( 2002 ) 015101 .
|
a network s resilience to the malfunction of its components has been of great concern . the goal of this work is to determine network design guidelines that maximize the network efficiency while keeping the cost of the network ( that is , the average connectivity ) constant . with a global optimization method , memory tabu search ( mts ) , we obtain the optimal network structure with approximately the best efficiency . we analyze the statistical characteristics of this network and find that a network with a small number of hub nodes and a high degree of clustering may be much more resilient to perturbations than a random network , and that the optimal network is a kind of highly heterogeneous network . the results strongly suggest that networks with higher efficiency are more robust to random failures . in addition , we propose a simple model to describe the statistical properties of the optimal network and investigate the synchronizability of this model . complex network ; network efficiency ; network resilience ; synchronization ; _ pacs _ : 89.75.-k ; 89.75.fb .
|
a version of the epr paradox prevents simultaneously doing work on a quantum system and knowing how much work has been done . a system can do work on its environment only if the two have a nonzero interaction energy . during interaction ,two become entangled , leading to a superposition of different possible values for the work . according to quantum mechanics , measuring the work projects into a state with exactly zero interaction energy .therefore the system - environment interaction is always either zero or unknown .one hundred years ago , einstein presented a first - order rate hypothesis concerning the rate of energy exchange between a molecular system and a reservoir of photons. under this hypothesis , the transition between states with known molecular energy levels by emission and absorption of discrete photons can be shown to bring about thermal equilibrium for all parties : the photons , the molecular energy levels , and the particle velocities .this semiclassical picture provided a clear , consistent , and straightforward picture for the time - evolution of coupled quantum systems .nevertheless , the argument must have appeared unsatisfactory at the time because it only provided a statistical , rather than an exact , mechanical description of the dynamics .many years later , einstein , podolsky , and rosen published the famous epr paradox. the paradox states that , before any measurement is made , neither position nor velocity exist as real physical quantities for a pair of entangled particles .either of the two choices can be ` made real ' only by performing a measurement .the consequence for energy exchange processes follows directly . for a particle entangled with a field ,neither a definite ( molecular energy level / photon number ) pair nor a definite ( stark state / field phase ) pair exist before any measurement is made .recent works on quantum fluctuation theorems confront this difficulty in a variety of ways .one of the most prominent is the stochastic schrdinger equation that replaces a dissipative quantum master equation with an ensemble of trajectories containing periodic jumps due to measurement. in that setup , the jump process represents dissipation , so heat is defined as any energy change in the system due to the jumps .other changes in energy , caused by varying the hamiltonian in time , are counted as work .fluctuation theorems for this process are based on the detailed balance condition for jumps due to the reservoir , avoiding most issues with defining a work measurement .the work of venkatesh shows that regular , projective measurement of work - like quantities based on the system alone ( such as time - derivative of the hamiltonian expectation ) generally leads to `` qualitatively different statistics from the [ two energy measurement ] definition of work and generally fail to satisfy the fluctuation relations of crooks and jarzynski . ''another major approach is to model the environment s action as a series of generic quantum maps .a physical interpretation as a two - measurement process accomplishing feedback control was given by funo. 
there , an initial partial projection provides classical information that is used to choose a hamiltonian to evolve the system for a final measurement .that work showed that the transition probabilities in the process obey an integral fluctuation theorem .although the interpretation relied on a final measurement of the system s energy , it provided one of the first examples for the entropic consequences of measurement back - action. recent work on the statistics of the transition process for general quantum maps showed that the canonical fluctuation theorems hold if the maps can be decomposed into transitions between stationary states of the dynamics. this agrees with other works showing the importance of stationary states in computing entropy changes from quantum master equations. the back - action due to measurement is not present in this case .in contrast , the present work starts from a physically motivated process and shows that work and heat can be defined without recourse to stationary states of the central system . by doing so, it arrives at a clear picture of the back - action , and a minimum temperature argument .it also builds a quantum parallel to the measurement - based definition of work and heat for classical nonequilibrium systems laid out in ref . . there, the transition probability ratio is shown to be equivalent to a physical separation of random and deterministic forces .although no fluctuation theorem can be shown in general , in the van hove limit , the interaction commutes with the stationary state, and a fluctuation theorem such as the one in ref . applies .our model uses a combination of system and reservoir with joint hamiltonian , the coupling hamiltonian should not be able to simply shift an energy level of either system , which requires } = 0 ] , for arbitrary scalar functions , .a simple generalization discussed later is to waive the first constraint , but this is not investigated here .there have been many definitions proposed for heat and work in quantum systems . these fall roughly into three categories : the near - equilibrium limit , experimental work - based definitions , and mathematical definitions based on information theory .the near - equilibrium limit is one of the earliest models , and is based on the weak - coupling limit of a system interacting with a quantum energy reservoir at a set temperature over long time intervals .that model is probably the only general one derivable from first principles where it can be proven that every system will eventually relax to a canonical equilibrium distribution with the same temperature as the reservoir. the essential step is taking the van hove limit , where the system - reservoir interaction energy scale , , goes to zero ( weak coupling ) with constant probability for energy - conserving transitions ( which scale as ) . 
in this limit ,the only allowed transitions are those that conserve the uncoupled energy , .the dynamics then becomes a process obeying detailed - balance for hopping between energy levels of the system s hamiltonian , .states with energy superpositions can mix , but eventually decay to zero probability as long as the environment can couple to every system energy level .adding an effective time - dependent hamiltonian , , onto this picture and assuming very long time - scales provides the following definitions of heat and work, } \notag \\ \dot w & = { \operatorname{tr}\left[{{\frac{\partial \hat h^\text{eff}_a(t)}{\partial t } } \rho}\right ] } , \label{e : qw } \\\intertext{where denotes the time - derivative of according to the dynamics , and must be the stationary state of the time - evolution used .note that to match the dynamics of a coupled system , must be a predefined function of satisfying , ( see eq.~\ref{e : dynab } ) } { \operatorname{tr}\left[{\hat h^\text{eff}_a(t ) { \operatorname{tr}_{b}\left[{\rho_{ab}}\right]}}\right ] } & = { \operatorname{tr}\left[{(\hat h_a + \gamma \hat h_{ab } ) \rho_{ab}}\right ] } \label{e : ematch}\end{aligned}\ ] ] work and heat defined by equation [ e : qw ] have been used extensively to study quantum heat engines. for this definition , it is possible to prove convexity, and positivity of . statistical fluctuations of heat and work have also been investigated. these first applications have demonstrated some of the novel properties of quantum systems , but encounter conceptual difficulties when applied to dynamics that does not follow the instantaneous eigenstates of . the paradox described in this work shows why moving away from eigenstates is so difficult . the small - coupling , slow - process limit under which eq .[ e : qw ] applies also amounts to an assumption that the system - environment pair is continually being projected into states with known .it is not suitable for use in deriving modern fluctuation theorems because its validity relies on the this limit .entropy can also be defined thermodynamically by analyzing physical processes taking an initial state to a final state .one of the simplest results using the thermodynamic approach is that even quantum processes obey a fluctuation theorem for exchanges of ( heat ) energy between system and environment when each transition conserves energy and there is no external driving force. on averaging , this agrees with the common experimental definition of heat production as the free energy change of two reservoirs set up to dissipate energy by a quantum contact that allows monitoring the energy exchange process. semiclassical trajectories have also been investigated as a means to show that postulated expressions for quantum work go over to the classical definition in the high - temperature or small-. other works in this category consider a process where the system s energy is measured at the start and end of a time - dependent driving process .it is then easy to show that the statistics of the energy change give a quantum version of the jarzynski equality for the free energy difference. more general results are difficult owing to the fact that , for coupled systems , quantum transitions that do not conserve energy are possible , giving rise to the paradox motivating this work .there have also been many mathematically - based definitions of entropy production for open quantum systems .the primary goal of a mathematical definition is to quantify the information contained in a quantum state. 
it is well - known that preparation of a more ordered system state from a less ordered one requires heat release proportional to the information entropy difference. from this perspective , information is more fundamental than measured heats , because it represents a lower bound on any physical process that could accomplish this transformation .a maximum work could be found from such a definition using energy conservation .however , the disadvantage of a mathematical definition is that it can not be used to construct a physical transformation process obeying these bounds .most of the bounds on mathematical entropy production are proven with the help of the klein inequality stating that relative entropy between two density matrices must be positive. there are , in addition , many connections with communication and measure theory that provide approximations to the relative entropy. one particular class of mathematical definitions that has received special attention is the relative entropy , } \notag \\ & = \beta ( f(t ) - f^\text{(eq ) } ) \label{e : srel } \\ \intertext { between an arbitrary density matrix and an ` instantaneous equilibrium ' state , } \rho^\text{inst } & = \exp{\left[-\beta \hat h^\text{eff}(t)\right]}/z^\text{eff}(\beta , t ) .\label{e : pos}\end{aligned}\ ] ] this definition is closely related to the physical process of measuring the system s energy at the start and end of a process .several notable results have been proven in those works , including work relations and integrated fluctuation theorems as well as useful upper and lower bounds. the present work is distinguished from these mathematical definitions because it completely removes the requirement for defining or using an ` instantaneous equilibrium ' distribution of the central system or directly measuring the central system at all .one of the primary motivations for this work has been to derive a firm theoretical foundation for analyzing time - sequences of measurements in hopes of better understanding the role of the environment in decoherence. the present paper provides a new way of understanding the gap between the lindblad operators describing the quantum master equation and the physical processes responsible for decoherence . rather than unravelling the lindblad equation , we choose a physical process and show how a lindblad equation emerges .this path shows the importance of the source of environmental noise in determining the low - temperature steady - state .the result also provides an alternative continuous time , monte carlo method for wavefunction evolution without using the dissipation operator associated with the lindblad master equation . another outcome has been finding a likely explanation for the anomalous temperature of utsumi et .al. those works attempted to test the classical fluctuation theorems for electron transport through a quantum dot , and found that the effective temperature of 1.37 k ( derived from the slope of the transport odds ratio , ) was much higher than the electron temperature of 130 - 300 mk . 
trying to lower the temperature further below that point showed minimal changes in the slope , indicating a minimum temperature had been reached .sections [ s : process ] and [ s : therm ] present a repeated measurement process , and show that it allows for a physical definition of heat and work that occurs between successive measurements .measurements are only performed on the interacting reservoir , and ( because of entanglement ) cause instantaneous projection of the central system according to the standard rules of quantum mechanics . in this way , it is not required to define a temperature for the central system . because the central system is generally out of equilibrium ,the concept of equilibrium is applied only to the environmental interactions .section [ s : clausius ] proves the clausius form of the second law for the new definitions , and section [ s : jcm ] immediately applies these to the quantum theory of radiation .the limits of slow and fast measurement rates are investigated in sections [ s : weak ] and [ s : strong ] .the slow rate limit recovers einstein s picture of first - order rate processes and complies with eq .[ e : qw ] when the system - reservoir coupling , , is infinitesimally small .the fast measurement limit does not exhibit a quantum zeno paradox, but effectively injects white noise into the energy of the joint system consistent with the energy - time uncertainty principle . at intermediate stages ,continuous finite interaction with the reservoir causes an effective increase in the ` temperature ' of the system s steady - state .although surprising , the measurement rate is unavoidable in the theory as it is the exact parameter controlling broadening of spectral lines. i end with a proof in section [ s : mint ] that effects from the minimum achievable temperature will be seen when the reservoir temperature is less than the system s first excitation energy and the measurement rate is on the order of this excitation energy .to study the action of continual environmental measurement on part of a quantum system , i propose the following process ( fig . [f : ref ] ) : 1 .let represent a general wavefunction of the central system , and represent the state of the measurement device at energy level .the central system is coupled to the measurement device whose state is chosen at random from a starting distribution , , ( panel d - a ) the starting distribution must have a well - defined energy , and so should be diagonal in the energy basis of system .3 . the joint system is evolved forward using the coupled hamiltonian , until the next measurement time , chosen from a poisson process with rate ( panel b - c ) . 4 .the state of the measurement device is ` measured ' _ via _ projection into one of its uncoupled energy eigenstates , ( panel c ) . with probability .the measurement process itself is described exactly by the ` purification ' operator of spohn and lebowitz, whose effect on the joint density matrix is given by , every time this operation is performed , the memory of the environmental system is destroyed , and all system - environment superposition is removed . 
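the four steps above translate directly into a density - matrix sketch . the routine below attaches a fresh copy of the reservoir , evolves the pair for an exponentially distributed interval ( the poisson measurement process with the given rate ) , and then performs an unselective energy measurement of the reservoir before discarding it . this is a minimal sketch under assumptions of mine : the hamiltonians are placeholders ( a concrete jaynes - cummings choice is sketched further below , where the jcm is introduced ) , and the canonical input state anticipates the thermal distribution introduced in the next paragraph .

```python
import numpy as np
from scipy.linalg import expm

def thermal_state(H, beta):
    """canonical input distribution for the reservoir degree of freedom."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def one_cycle(rho_a, H_a, H_b, H_ab, beta, rate, rng):
    """one interval of the process in fig. [f:ref]: couple a fresh thermal
    copy of B, evolve jointly for an exponentially distributed time, then
    projectively (and unselectively) measure the energy of B and discard it."""
    da, db = H_a.shape[0], H_b.shape[0]
    H = np.kron(H_a, np.eye(db)) + np.kron(np.eye(da), H_b) + H_ab
    rho = np.kron(rho_a, thermal_state(H_b, beta))        # steps 1-2
    t = rng.exponential(1.0 / rate)                       # poisson waiting time
    U = expm(-1j * H * t)
    rho = U @ rho @ U.conj().T                            # step 3
    rho4 = rho.reshape(da, db, da, db)
    # step 4: the outcome probabilities of the energy measurement of B are the
    # populations of rho_B in the eigenbasis of H_B (the raw ingredient of the
    # heat/work bookkeeping of fig. [f:therm]); averaged over unrecorded
    # outcomes, the post-measurement state of A is simply the partial trace
    rho_b = np.trace(rho4, axis1=0, axis2=2)
    _, Vb = np.linalg.eigh(H_b)
    populations = np.diag(Vb.conj().T @ rho_b @ Vb).real
    rho_a_new = np.trace(rho4, axis1=1, axis2=3)
    return rho_a_new, populations

rng = np.random.default_rng(1)   # e.g. iterate one_cycle(...) to approach a steady state
```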
for studying thermalization process , it suffices to use a thermal equilibrium distribution for , in many experimental cases , represents a specially prepared input to drive the system toward a desired state .the operation of measurement disconnects the two systems , and , more importantly , makes the energy of the reservoir system correspond to a physical observable .a complete accounting for heat in quantum mechanics can be made using only these measurements on ancillary systems , rather than the central , , system .the thermodynamics based on this accounting allows the central system to retain most of its quantum character , while at the same time deriving the traditional , operational relationships between heat and work .although the analysis below is phrased in terms of density matrices , that view is equivalent to carrying out this process many times with individual wave - functions . specifically , if , is composed of any number of pure states, the final density matrix at time is a linear function of and hence of each . carrying out the process on individual wave - functionsthus allows an extra degree of choice in how to compose , the use of which does not alter any of the results .this process is a repeatable version of the measurement and feedback control process studied in ref . , and fits into the general quantum map scheme of ref .nevertheless , our analysis finds different results because the thermodynamic interpretation of the environment and measuring device allows the reservoir to preform work in addition to exchanging heat .in order for heat and work to have an unambiguous physical meaning , they must be represented by the outcome of some measurement .[ f : therm ] presents the energies for each operation applied to a system and its reservoir over the course of each measurement interval in fig .[ f : ref ] .initially ( in step 2 ) , the density matrix begins as a tensor product , uncoupled from the reservoir , which has a known starting distribution , .however , for a coupled system and measurement device , time evolution leads to entanglement . at the time of the next measurement ,the entanglement is projected out , so it is again permissible to refer to the properties of the and systems separately .after a measurement , the total energy of the system / reservoir pair will have changed from to .the amount of energy that must be added to ` measure ' the system / reservoir pair at any point in time is therefore , .this step is responsible for the measurement ` back - action ' , and the violation of the ft for general quantum dynamics . strictly speaking, this measurement energy does not correspond to an element of physical reality .nevertheless , the starting and ending , are conserved quantities under the uncoupled time - evolution , and so the energy of the measurement step can be objectively defined in an indirect way .this instantaneous measurement of the reservoir simulates the physical situation where an excitation in the reservoir leaks out into the environment .after this happens , the information it carried is available to the environment , causing traditional collapse of the system / reservoir pair . to complete the cycle , the reservoir degree of freedommust be replaced with a new sample from its input ensemble . for the micromaser , thisreplacement is accomplished spatially by passing separate atoms ( ) through a cavity , one at a time .on average , the system should output a ` hot ' , which the environment will need to cool back down to . 
using the methods of ordinary thermodynamics, we can calculate the minimum heat and maximum work for the transformation of \rho_b(t) back to \rho_b(0) via an isothermal, quasistatic process at the set temperature of the reservoir,

\begin{align}
\beta q &= -{\operatorname{tr}\left[{\rho_b(0) \log \rho_b(0)}\right]} + {\operatorname{tr}\left[{\rho_b(t) \log \rho_b(t)}\right]} \notag \\ &= -\delta s_b \label{e:q} \\
w_\text{therm} &= {\operatorname{tr}\left[{(\rho_b(0) - \rho_b(t)) \hat h_b}\right]} + \delta s_b/\beta \notag \\ &= -\delta f_b \label{e:wtherm} \\
w &= w_\text{therm} + \delta h_a + \delta h_b \notag \\ &= \delta h_a - q \label{e:w}
\end{align}

the signs of these quantities are chosen so that each counts energy added to the system, while the \delta h terms represent the total changes in the corresponding average energies during evolution from one measurement time to the next. in this work, the temperature entering through \beta always refers to the externally set temperature of the reservoir system. the temperature of the reservoir used in the definitions above is entirely determined by the conditions under which the reservoir states are prepared; it can be different for each measurement interval. note that when a thermal equilibrium distribution is used for the reservoir (eq. [e:eqb]), the reservoir dissipates energy from the system. since it always begins in a state of minimum free energy, the reservoir always recovers work from the system as well, because the associated free energy change is always strictly positive by eq. [e:pos]. this makes sense when the central system is relaxing from an initial excited state. when the central system is at equilibrium, the second law is saved (sec. [s:clausius]) by including the work done during the measurement step.

this presentation does not follow the traditional route of assuming a time-dependent hamiltonian for the central system. that assumption leads to an ambiguity on the scale of the measurement back-action, and it is awkward to work with in this context because it side-steps the measurement paradox: instead of confronting measurement, it simply posits a joint system in which the dynamics of sub-system a are generated exactly by a prescribed time-dependent hamiltonian, \hat h_a(t).

two special cases are worth noting. first, over each time interval the total heat added obeys a clausius-type inequality (sec. [s:clausius]); assuming the minimum required heat release (i.e. saturating this bound) leads to a prediction for the quasistatic heat evolution, which is exactly the result of equilibrium quantum thermodynamics and is valid for arbitrary processes. second, if the system always begins in thermal equilibrium and the change in occupation probability for each energy level over a measurement interval is small, then we can directly use a first-order expansion of the entropy change (eq. [e:dsb]). this is helpful because in fig. [f:therm] the entropy of the system is always calculated in the energy basis. substituting the canonical equilibrium distribution gives the corresponding heat (eq. [e:dqb]). equations [e:dsb] and [e:dqb] apply whenever the initial distribution is canonical and the change in occupation probabilities is small over an interval. in the van hove limit (sec. [s:weak]), energy is conserved between the a and b systems; because of this energy conservation, the heat evolution of eq. [e:dqb] is exactly the well-known result of eq. [e:qw] in this case.

for the definitions of work and heat given above to be correct, they must meet two requirements; before checking them, the bookkeeping of eqs. [e:q]-[e:w] is illustrated with a short numerical sketch.
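as a practical aside, eqs. [e:q]-[e:w] can be evaluated directly from the measured reservoir states. a minimal numpy sketch, assuming dense hermitian matrices (for the qutip objects above, obtained with .full()) and natural-log entropies:

import numpy as np

def vn_entropy(rho):
    """von neumann entropy -tr[rho log rho] from the eigenvalues of rho (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def heat_work(rho_b0, rho_bt, h_b, dh_a, beta):
    """q, w_therm and w for one measurement interval, following eqs. [e:q]-[e:w];
    dh_a is the change in tr[rho_a h_a] over the same interval."""
    ds_b = vn_entropy(rho_bt) - vn_entropy(rho_b0)
    q = -ds_b / beta                                    # minimum heat needed to restore rho_b(0)
    dh_b = np.trace((rho_bt - rho_b0) @ h_b).real
    w_therm = -dh_b + ds_b / beta                       # = -delta f_b, maximum recoverable work
    w = w_therm + dh_a + dh_b                           # = delta h_a - q
    return q, w_therm, w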
in order to satisfy the first law ,the total energy gain at each step must equal the heat plus work from the environment .this is true by construction because the total energy change over each cycle is just .next , in satisfaction of the second law , the present section will show that there can only be a net heat release over any cyclic process .since has been defined as heat input to the system , this means there is a fundamental open question as to whether the energy change caused by the measurement process should be classified as heat or work .counting it as heat asserts that it is spread throughout the environment in an unrecoverable way .conversely , counting it as work asserts that measurement can only be brought about by choosing to apply a stored force over a distance . in the cycle of fig .[ f : therm ] , it is classified as work , because this is the only assignment consistent with thermodynamics .counting as heat leads to a systematic violation of the second law , as i now show . integrating the quantity , over an entire cyclic process cancels , leaving the sub - system starts each interval in thermal equilibrium ( eq .[ e : eqb ] ) , this is the free energy difference used in eq .[ e : srel ] .the klein inequality then proves the _ positivity _ of each contribution to eq .[ e : rcontrib ] .therefore , over a cyclic process , .a thermodynamically sound definition is found when counting as part of only the entropy change of the reservoir .heat comes into this model because the environment is responsible for transforming back into .using a hypothetical quasistatic , isothermal process to achieve this will require adding a heat , .i now show that by considering entropy changes for the - system jointly . at the starting point ,the two systems are decorrelated, = s_a(0 ) + s_b(0 ) .\ ] ] the time - evolution of this state is unitary , so has the same value for the entropy. however , projection always increases the entropy, so & \ge s[\rho_{ab}(t ) ] .\intertext{the and systems in the final state are also decorrelated , proving the statement , } \delta s_a + \delta s_b & \ge 0.\end{aligned}\ ] ] this is quite general , and applies to any measurement time , starting state , and hamiltonian , . again , for a cyclic process must return to its starting point , so , and .it should be stressed that the results of this section hold regardless of the lengths of the measurement intervals , .the choice of poisson - distributed measurement times is not justified in every case .this is especially true for the physical micromaser , where the measurement times should instead be gaussian , based on the cavity transit time for each atom .instead , choosing measurement times from a poisson distribution mimics the situation where a measurement is brought about from an ideal , random collision - type process .exact numerical results are known for the micromaser in the rotating wave approximation a single - qbit system in state or coupled to a single mode of an optical cavity ( ) in a fock state , . the hamiltonian is known as the jaynes - cummings model ( jcm ) , the rotating wave approximation neglects a term , in the hamiltonian causing simultaneous excitation of the qbit and cavity .it is usually justified when the two frequencies , and , are near resonance.. 
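(as a brief aside, the inequality \delta s_a + \delta s_b \ge 0 derived above is easy to confirm numerically for arbitrary dimensions, interactions and measurement times; the python check below uses random states and a random coupled hamiltonian applied for unit time, with dimensions chosen purely for illustration.)

import qutip as qt

da, db = 2, 4                                    # illustrative dimensions for systems a and b
rho_a0 = qt.rand_dm(da)
rho_b0 = qt.rand_dm(db)
rho0 = qt.tensor(rho_a0, rho_b0)                 # decorrelated starting state
# random coupled hamiltonian, applied for unit time: unitary evolution preserves s[rho_ab]
h_rand = (qt.tensor(qt.rand_herm(da), qt.qeye(db))
          + qt.tensor(qt.qeye(da), qt.rand_herm(db))
          + qt.tensor(qt.rand_herm(da), qt.rand_herm(db)))
U = (-1j * h_rand).expm()
rho_t = U * rho0 * U.dag()
projs = [qt.tensor(qt.qeye(da), qt.ket2dm(qt.basis(db, n))) for n in range(db)]
rho_proj = sum(P * rho_t * P for P in projs)     # projective measurement of b, averaged over outcomes
ds_a = qt.entropy_vn(rho_proj.ptrace(0)) - qt.entropy_vn(rho_a0)
ds_b = qt.entropy_vn(rho_proj.ptrace(1)) - qt.entropy_vn(rho_b0)
assert ds_a + ds_b >= -1e-9                      # the combined entropy change is never negative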
] after a time , , the initial state will be in superposition with a state where the photon has been emitted .the ideal 1-photon micromaser can be solved analytically because the total number of excitations is conserved , and unitary evolution only mixes the states and .thus the only allowed transitions are between these two states .attempting to define the work done by the excited atom on the field requires measuring the energy of the atom .this is physically realized in the micromaser when the atom exits the cavity .this will project the environment into a state with known excitation , or .the work and ending states ( and from those the heat ) can all be neatly expressed in terms of , the average number of photons absorbed by the atom given that a projective measurement on the atom is performed at time , the expression for the transition probability , , is recounted in the appendix .the analytical solution gives an exact result for the heat and work when a measurement is done at time .averaging over the distribution of measurement times will then give the expected heat and work values over an interval .in the limit of many measurements ( ) , this expectation gives the rate of heat and work per average measurement interval .note that for the physical micromaser setup , the interaction time is set by the velocity of the atom and the cavity size resulting in a narrow gaussian distribution rather than the poisson process studied here . for a poisson distribution of interaction times ,the averages are easily computed to be , strong and weak - coupling limits of this equation give identical first - order terms , since measurements happen with rate , the effective rate of atomic absorptions in these limits is , this recovers einstein s simple picture of photon emission and absorption processes occurring with equal rates, all the coefficients are equal to the prefactor of eq .[ e : dx ] here because counts only a single cavity mode at frequency . in a blackbody ,the coefficient goes as because more modes contribute. the denominator , is exactly the one that appears in the traditional expression for a lorentzian line shape . here, however , the measurement rate , appears rather than the inverse lifetime of the atomic excited - state .the line broadens as the measurement rate increases , and the atom is able to absorb / emit photons further from its excitation frequency .only the resonant photons will cause equilibration , while others will cause noise . in the van hove limit , andthe contribution of the resonant photons will dominate . )coupled to a 2-level reservoir ( eqns .[ e : jcm]-[e : hab2 ] , , , , ) .panels ( a ) and ( b ) compare the system energy loss , , to the work and heat computed from the measured reservoir states ( eq . [ e : w ] and [ e : q ] ) .panels ( c ) and ( d ) show the information entropy of the system and the combined entropy change , .note that the traditional calculation of heat ( eq . [ e : qw ] ) gives only , .panels ( a ) and ( c ) show results for the time - evolution of the density matrix using the exact process , while panels ( b ) and ( d ) are computed using the weak - coupling approximation of sec .[ s : weak ] ., scaledwidth=45.0% ] this simple picture should be compared to the full ( rabi ) coupling , eq .[ e : hab ] plus eq .[ e : hab2 ] .the remaining figures show numerical results for the simulation of a resonant cavity ( ) and qubit ( ) system starting from a cavity in the singly excited energy state. 
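the poisson averaging used here is simple enough to check independently. the sketch below assumes the textbook one-photon jcm transition probability (conventions may differ from the appendix by constants absorbed into the coupling and the detuning) and compares a direct numerical average over exponential waiting times with the closed form, whose denominator shows the lorentzian-type broadening controlled by the measurement rate:

import numpy as np
from scipy.integrate import quad

def x_of_t(t, g, n, delta):
    """probability that the atom has exchanged a photon after interaction time t (textbook jcm form, hbar = 1)."""
    wr = g * np.sqrt(n + 1)                    # resonant rabi frequency
    wp = np.sqrt(wr**2 + (delta / 2)**2)       # generalized rabi frequency at detuning delta
    return (wr / wp)**2 * np.sin(wp * t)**2

def x_avg(lam, g, n, delta):
    """average of x_of_t over exponential (poisson-process) waiting times with rate lam."""
    val, _ = quad(lambda t: lam * np.exp(-lam * t) * x_of_t(t, g, n, delta), 0.0, 50.0 / lam)
    return val

lam, g, n, delta = 0.3, 0.1, 0, 0.05
wr = g * np.sqrt(n + 1)
closed = 2 * wr**2 / (lam**2 + 4 * wr**2 + delta**2)   # lorentzian in delta, width set by lam
print(x_avg(lam, g, n, delta), closed)                  # the two values agree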
figure [ f : decay]a compares the average work and heat computed using this cycle for at the state - point ( , , ) .the average was taken over 5000 realizations of process [ f : therm ] .rabi oscillations can be seen clearly as the photon exchanges with the reservoir ( atom ) .initially , this increases the entropy of the incoming atom s energy distribution . when there is a strong probability of emission , however , the integrated heat release , , shows system actually decreases the entropy of the reservoir .this happens because the the reservoir atom is left in a consistent , high - energy , low - entropy state . in this way, the reservoir can extract useful work from the cavity .panel ( b ) shows that no laws of thermodynamics are broken , since the system starts in a pure state , but ends in an equilibrium state .the information entropy of the system itself increases appreciably during the first rabi cycle .eventually , the equilibration process ends with the initial excitation energy being transformed into both heat and work . despite the appearance of fig .[ f : decay]a ( which happens for this specific coupling strength ) , the emitted heat is generally non - zero . the work and entropy defined by eq .[ e : qw ] differ from the results of this section .because the earlier definition is based only on the system itself , without considering the reservoir , there is no way to use the energy of the interacting atom for useful work .[ e : qw ] therefore finds zero work , and classifies entirely as heat lost to the environment .panels ( b ) and ( d ) of fig .[ f : decay ] show results from considering the system and reservoir jointly in the weak - coupling limit as will be discussed in sec .[ s : weak ] .the classical van hove limit was investigated in detail by spohn and lebowitz, who showed generally that thermal equilibrium is reached by in this limit irrespective of the type of coupling interaction , .first , the interaction strength , , must tend to zero so that only the leading - order term in the interaction remains .this makes the dynamics of } a ] .when , transitions where energy is conserved between the and systems ( ) dominate in the sum , resulting in a net prefactor of .the transition rate is then exactly the combination that is kept constant in the van hove limit . in this limit , tracing over in eq .[ e : wdiss ] should recover eq .iii.19 of ref . .by applying the interaction part of eq .[ e : wdiss ] to the time evolution with rate , the effective master equation in the weak coupling limit becomes , + \frac{\gamma^2 \lambda}{\hbar^2 } { \operatorname{tr}_{b}\left[{l ' [ \rho_a(t)\otimes \rho_b(0)]}\right ] } .\label{e : lind}\end{aligned}\ ] ] for the jcm , there is just one , which gives the same answer as the exact result , eq .[ e : dx ] . from an excited state ( ) at different values of the measurement rate .panels ( a)-(d ) have rates , , and , respectively .the exact repeated measurement process is compared with the second - order perturbation theory of the weak - coupling limit .the shape of the decay to steady - state behavior is a combination of fast energy exchange due to rabi oscillations and the slower process of memory loss through repeated measurement.,scaledwidth=45.0% ] relaxation process simulated by continuously applying can show qualitative differences from the process in sec . [s : process ] . 
without the trace over the environment , just gives the approximation to from second - order perturbation theory .this decays faster than when repeated projection is actually used because the environment loses its memory after each projection. these two time - scales can be seen in fig . [f : cmp ] .[ f : cmp ] ( and fig .[ f : decay]b , d ) compares simulation of with the exact process [ f : ref ] when repeated projection is used in the same way for both .that is , time evolution under the lindblad equation ( [ e : lind ] ) is carried out in intervals , ( ) .after each interval , the purification operator ( eq . [ e : pure ] ) is applied to the density matrix .this way , the only difference from the exact process is that the time - propagator has been approximated by its average .it is evident that the initial shape and rabi oscillation structure have been lost .instead , the propagator shows a fast initial loss followed by simple exponential decay toward the steady - state .nevertheless , the observed decay rate and eventual steady states match very well between the two methods .the total evolved heat shows a discrepancy because the fast initial loss in the propagator quickly mixes .numerical simulations of the lindblad equation were carried out using qutip. for the atom - field system , it was shown that the transition rate approached the same value in both the weak coupling and infinitely fast measurement case . to find the general result for the poisson measurement process as , note that the taylor series expansion of the time average turns into an expansion in powers of , it is elementary to calculate successive derivatives , , by plugging into .\ ] ] the average measured after a short interaction time on the order of is therefore , \notag \\ & + \frac{\gamma}{\lambda^2\hbar^2 } \left[[\hat h_a + \hat h_b , \hat h_{ab } ] , \rho_{ab}(0)\right ] \notag \\ & + \frac{\gamma^2}{\lambda^2\hbar^2 } \left(2 \hat h_{ab } \rho_{ab}(0 ) \hat h_{ab } - \{\hat h_{ab}^2 , \rho_{ab}(0)\ } \right ) \notag \\ & + o\left(\frac{\gamma^3}{\lambda^3\hbar^3}\right ) .\label{e : strong}\end{aligned}\ ] ] we can immediately see that this limit is valid when the measurement rate is faster than measurements per second .the terms are in the form of a time - propagation over the average measurement interval , .they have only off - diagonal elements , and do not contribute to or .the third term has the familiar lindblad form , which immediately proves a number of important consequences .first , all three terms are trace - free and totally positive .next , this term introduces dissipation towards a stationary state for . 
for a system under infinitely fast repeated measurement, the terms do not contribute to tr , and the density matrix evolves according to , \notag \\ & - \frac{\gamma^2}{\lambda \hbar^2 } { \operatorname{tr}_{b}\left [ { [ \hat h_{ab},[\hat h_{ab } , \rho_{a}\otimes \rho_b(0 ) ] ] } \right ] } .\end{aligned}\ ] ] a more explicit representation is possible by defining the sub - matrices , {ij } = [ \hat h_{ab}]_{in , jm}.\ ] ] these have the symmetry , , so \big]_{m , m } \notag \\ & = \sum_n p^b_n 2 \hat v^{mn } \rho_a \hat v^{\dagger\ , mn } - p^b_m \{\hat v^{mn } \hat v^{\dagger\ , mn } , \rho_a \}\end{aligned}\ ] ] for the jcm , this gives , the stationary state of this system will usually not be in the canonical , boltzmann - gibbs form .in fact , the prefactor does not depend on the cavity - field energy mismatch , , so it gives atomic transitions regardless of the wavelength of the light .this phenomenon is an explicit manifestation of the energy - time uncertainty principle . in the long - time limit of sec .[ s : weak ] , energy - preserving transitions dominated over all possibilities . in the short - time limit of this section ,all the transitions contribute equally , and the energy difference caused by a transition could be infinitely large . in - between , energy conservation ( and convergence to the canonical distribution ) depends directly on the smallness of the measurement rate , .results from simulating the time - evolution of the open quantum system using eq . [ e : wdiss ] reveal that even as the reservoir temperature approaches zero , the probability of the first excited state does not vanish .in fact , the results very nearly resemble a gibbs distribution at elevated temperatures .as the reservoir goes to absolute zero , the effective system temperature levels off to a constant , minimum value .this section gives both intuitive and rigorous arguments showing that this is a general phenomenon originating from work added during the measurement process .first , observe that the total hamiltonian , , is preserved during coupled time - evolution . when allowed by the transitions in ( i.e. when \ne 0 $ ] ), a portion of that total energy will oscillate between and .consider , for example , a dipole - dipole interaction , . at equilibrium ,the individual systems have , but the coupled system polarizes so that , . intuitively , the joint system can be pictured as relaxing to a thermal equilibrium at an elevated temperature . the initial density matrix at each restart , ,would then look like an instantaneous fluctuation of where is too high and is too low . at steady state, must be the same at the beginning and end of every measurement cycle .this allows the equilibrium argument above to determine by self - consistency , if equilibrium at is reached by the average measurement time , then expanding yields , where is the heat capacity of the reservoir system .it is well - known that quantum mechanical degrees of freedom freeze out at temperatures that are fractions of their first excitation energy ( ) .since the heat capacity when goes to zero , while the interaction energy should remain nonzero , this intuitive argument suggests that the temperature of the system can not go much below .to be more quantitative , can be estimated in the weak coupling limit from the second - order perturbation theory of sec [ s : weak ] .this comparison considers the case , since the stationary state where is known to be non - canonical . 
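(the infinitely-fast-measurement generator quoted at the start of this section can also be integrated directly. a sketch with qutip, reusing the qubit-plus-mode operators of the earlier sketches with illustrative values; for an off-resonant pair the stationary populations track the reservoir level spacing rather than the qubit splitting, i.e. the state is not canonical with respect to \hat h_a.)

import numpy as np
import qutip as qt

N, wa, wb, g, lam, beta = 6, 1.5, 1.0, 0.2, 1.0, 2.0     # off-resonant qubit, illustrative values
exc = qt.basis(2, 1)
sm = qt.basis(2, 0) * exc.dag()
h_a = wa * exc * exc.dag()
bN = qt.destroy(N)
h_ab = qt.tensor(sm.dag(), bN) + qt.tensor(sm, bN.dag())    # coupling operator (gamma kept explicit)
p = np.exp(-beta * wb * np.arange(N)); p /= p.sum()
rho_b0 = sum(p[n] * qt.ket2dm(qt.basis(N, n)) for n in range(N))

def drho(rho_a):
    """-i[h_a, rho_a] - (gamma^2/lam) tr_b [h_ab, [h_ab, rho_a (x) rho_b(0)]]   (hbar = 1)."""
    joint = qt.tensor(rho_a, rho_b0)
    dbl = qt.commutator(h_ab, qt.commutator(h_ab, joint))
    return -1j * qt.commutator(h_a, rho_a) - (g**2 / lam) * dbl.ptrace(0)

rho = qt.ket2dm(qt.basis(2, 0))
dt = 0.1
for _ in range(10000):                        # crude forward-euler relaxation to the stationary state
    rho = rho + dt * drho(rho)
p0, p1 = rho.diag().real
print(p1 / p0, np.exp(-beta * wb), np.exp(-beta * wa))   # the ratio follows the reservoir spacing wb, not wa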
also, the jcm with rotating wave approximation is too idealistic , since when no off - resonance interactions can occur so commutes with and the minimum temperature argument does not apply . in other words , in the rotating wave approximation , the number of absorption events , , always increases the energy of the atom and decreases the energy of the cavity by the same amount. however , if the physical interaction hamiltonian , is used , then the weak coupling theory should also include transitions between and .the average number of simultaneous excitations must be tracked separately , since it increases both the energy of the atom and cavity .using eq .[ e : wdiss ] with , this average is in the low - temperature limit , only the probabilities of the four lowest - lying states , labeled , are relevant .the general result whenever allows for both and transitions with with equal weight and respective energy differences of zero and is , this can be solved for steady - state , to find , . the arrows plot the limiting value of from eq .[ e : mint ] . each line represents the steady - states found using a fixed measurement rate , , as the reservoir temperature varies .their y - values were computed from the steady - state probabilities for simulation in the weak - coupling limit ( eq . [ e : lind ] ) ., scaledwidth=45.0% ] this argument brings the energy - time uncertainty principle into sharp focus .if the measurement rate is on the order of the transition frequency , , then can be of order 1 , making absolute zero unreachable regardless of the coupling strength , , or the reservoir temperature determining .on the other hand , as the relative measurement rate , , approaches zero the thermodynamic equilibrium condition , , dominates . in the limit where measurements are performed very slowly , transitions that do not conserve the energy of the isolated systems are effectively eliminated .figure [ f : steady ] illustrates these conclusions . for high reservoir temperatures and low measurement rates , the system s steady - state probabilities follow the canonical distribution with the same temperature as the reservoir .when the reservoir temperature is lowered below a limiting value , the system is unable to respond effectively reaching a minimum temperature determined by eq .[ e : mint ] .effects from the minimum temperature can be minimized by lowering the measurement rate .a measurement process is needed in order to define heat and work a quantum setting . 
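(a minimal numerical illustration of the minimum-temperature plateau discussed above, using the physical coupling with counter-rotating terms and the repeated-measurement cycle of sec. [s:process]; all values are illustrative assumptions.)

import numpy as np
import qutip as qt

rng = np.random.default_rng(2)
N, wa, wb, g, lam, beta = 6, 1.0, 1.0, 0.1, 0.5, 20.0    # beta*wb = 20: reservoir essentially at zero temperature
exc = qt.basis(2, 1)
sx = qt.basis(2, 0) * exc.dag() + exc * qt.basis(2, 0).dag()
b = qt.destroy(N)
H = (wa * qt.tensor(exc * exc.dag(), qt.qeye(N))
     + wb * qt.tensor(qt.qeye(2), b.dag() * b)
     + g * qt.tensor(sx, b + b.dag()))                   # physical coupling, counter-rotating terms kept
p = np.exp(-beta * wb * np.arange(N)); p /= p.sum()
rho_b0 = sum(p[n] * qt.ket2dm(qt.basis(N, n)) for n in range(N))

rho_a, pops = qt.ket2dm(qt.basis(2, 0)), []
for k in range(4000):                                    # iterate the measurement cycle to steady state
    t = rng.exponential(1.0 / lam)
    U = (-1j * H * t).expm()
    rho_a = (U * qt.tensor(rho_a, rho_b0) * U.dag()).ptrace(0)
    if k > 2000:
        pops.append(rho_a.diag().real)
p0, p1 = np.mean(pops, axis=0)
print(p1, wa / np.log(p0 / p1))   # excited population and effective temperature stay finite despite the cold reservoir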
continuously measuring the energy of an interacting quantum system leads either to a random telegraph process or else to the quantum zeno paradox, while waiting forever before measuring the energy leads the epr paradox .the resolution by intermittent measurement leads to the conclusion that quantum systems under measurement do not always reach canonical ( boltzmann - gibbs ) steady - states .instead , the steady - state of a quantum system depends both on its coupling to an external environment and the rate of measurement .the presence of a measurement rate in the theory indicates the importance of the outside observer a familiar concept in quantum information .most experiments on quantum information have been analyzed in the context of a lindblad master equation , whose standard interpretation relies on associating a measurement rate to every dissipative term .this work has shown that every dissipative term can be a source / sink for both heat and work .this work has re - derived the master equation in the limit of weak coupling for arbitrary ( poisson - distributed ) measurement rates .the result agrees with standard line - shape theory , and shows that measurement rates on the order of the first excitation energy can cause observable deviations from the canonical distribution .the physical consequences of the measurement rate will become increasingly important as quantum experiments push for greater control. however , they also present a new probe of the measurement rule and energy - time uncertainty principle for quantum mechanics . for the micromaser , the rate _seems _ to be the number of atoms sent through the cavity per unit time since every atom that leaves the cavity is measured via its interaction with the outside environment .it is not , however , because even there the atoms can be left isolated and held in a superposition state indefinitely , leading to entanglement between successive particles. most generally , the number of measurements per unit time is determined by the rate at which information can leak into the environment . if information leaks quickly , the amount of energy exchanged can be large and the minimum effective temperature of the system will be raised .if information leaks slowly , the work done by measurement will be nearly zero , and the quantum system will more closely approach the canonical distribution . by the connection to the width of spectroscopic lines , this rate is closely related to the excited - state lifetime .this model presents a novel , experimentally motivated and thermodynamically consistent treatment of heat and work exchange in the quantum setting . by doing so, it also raises new questions about the thermodynamics of measurement .first , the explicit connection to free energy and entropy of reservoir states provides an additional source of potential work that may be extracted from coupling .connecting multiple systems together or adding partial projection using this framework will provide more realistic conditions for reaching this maximum efficiency .second , we have shown special conditions that cause the present definitions to reduce to well - known expressions in the literature .third , although the initial process was defined in terms of wavefunctions , the average heat and work is defined in terms of the density matrices . definitions ( eq . 
[ e : q ] and [ e : w ] ) still apply when the density matrix consists of a single state , but the repeated measurement projecting to a single wavefunction has a subtly different interpretation .the difference ( not investigated here ) is related to landauer s principle, since measuring the exact state from the distribution , , carries a separate ` recording ' cost .stochastic schrdinger equation and power measurement based methods assume that all energy exchange with the reservoir is as heat .there , work is supplied by the time - dependence of the hamiltonian . as we have shown here ,heat is most closely identified with the von neumann entropy of the system .the energy exchange with the reservoir is only indirectly connected to the heat exchange through eq .[ e : dqb ] .the fact that this becomes exact in the van hove limit explains the role of the steady - state for and observations by many authors that the work of measurement is the source of non - applicability of fluctuation theorems. when , the measurement back - action disappears , and the fluctuation theorem for is given by the formalism of ref . .it should also be possible to derive a forward fluctuation theorem ( not restricted to time - reversal ) for predicting the force / flux relationships along the lines of refs . .there have been many other investigations on thermodynamics of driven , open quantum systems .the restriction to time - independent hamiltonians in this work differs from most others , which assume a pre - specified , time - dependent . to make a comparison , either the cycle should be modified as described in sec .[ s : issues ] or work at each time - step in such models must be re - defined to count only energy that is stored in a time - independent hamiltonian for the central system , .the process studied here retains a clear connection to the experimental measurement process , and is flexible enough to compute heat and work for continuous feedback control . in view of the near - identity between eq .[ e : mint ] and eq .10 of ref ., it is very likely that recent experimental deviations from the fluctuation theorem are due to the phenomenon of minimum temperature , as well as to differences between traditional , system - centric , and the present , observational , definitions of heat and work .i thank brian space , sebastian deffner , and bartomiej gardas for helpful discussions .this work was supported by the university of south florida research foundation and nsf mri che-1531590 . 10 a. einstein . on the quantum theory of radiation ., 18:4762 , 1916 .translation by alfred engel in _ the collected papers of albert einstein _ ,, princeton univ . press , 1997 .a. einstein , b. podolsky , and n. rosen .can quantum - mechanical description of physical reality be considered complete ?, 47:777780 , may 1935 .albert einstein .physics and reality .221(3):349382 , 1936 . jordan m. horowitz .quantum - trajectory approach to the stochastic thermodynamics of a forced harmonic oscillator ., 85:031110 , 2012 .b. prasanna venkatesh , gentaro watanabe , and peter talkner .quantum fluctuation theorems and power measurements . , 17:075018 , 2015 .integral quantum fluctuation theorems under measurement and feedback control . , 88:052121 , 2013 .sebastian deffner , juan pablo paz , and wojciech h. zurek . quantum work and the thermodynamic cost of quantum measurments . , 94:010103(r ) , 2016 .gonzalo manzano , jordan m. horowitz , and juan m. r. parrondo .nonequilibrium potential and fluctuation theorems for quantum maps . 
, 92:032129 , 2015 .ronnie kosloff .quantum thermodynamics : a dynamical viewpoint ., 15:21002128 , 2013 .david m rogers and susan b rempe .irreversible thermodynamics . , 402:012014 , 2012 .herbert spohn and joel l. lebowitz .irreversible thermodynamics for quantum systems weakly coupled to thermal reservoirs . in stuarta. rice , editor , _ adv ._ , volume 38 , pages 109142 .wiley , 1978 .r. alicki .the quantum open system as a model of the heat engine ., 12(5):l103 , 1979 .eitan geva and ronnie kosloff . a quantummechanical heat engine operating in finite time . a model consisting of spin-1/2 systems as the working fluid ., 96:3054 , 1992 .tien d. kieu .the second law , maxwell s demon , and work derivable from quantum heat engines ., 93:140403 , sep 2004 .h. t. quan , yu - xi liu , c. p. sun , and franco nori .quantum thermodynamic cycles and quantum heat engines . ,76:031105 , sep 2007 . massimiliano esposito , ryoichi kawai , katja lindenberg , and christian van den broeck . quantum - dot carnot engine at maximum power . , 81:041106 , apr 2010 . sang wook kim , takahiro sagawa , simone de liberato , and masahito ueda .quantum szilard engine ., 106:070401 , 2011 .lajos disi . .springer , 2011 .2 ed .( lecture notes in physics volume 827 ) .hai li , jian zou , wen - li yu , lin li , bao - ming xu , and bin shao .negentropy as a source of efficiency : a nonequilibrium quantum otto cycle ., 67:134 , 2013 .h. t. quan , s. yang , and c. p. sun .microscopic work distribution of small systems in quantum isothermal processes and the minimal work principle .78:021116 , aug 2008 . christopher jarzynski and daniel k. wjcik .classical and quantum fluctuation theorems for heat exchange . ,92:230602 , jun 2004 .y. utsumi , d. s. golubev , m. marthaler , k. saito , t. fujisawa , and g. schn .bidirectional single - electron counting and the fluctuation theorem ., 81:125331 , 2010 .b. kng , c. rssler , m. beck , m. marthaler , d. s. golubev , y. utsumi , t. ihn , and k. ensslin .irreversibility on the level of single - electron tunneling ., 2:011001 , jan 2012 .j. v. koski , t. sagawa , o - p .saira , y. yoon , a. kutvonen , p. solinas , m. mttnen , t. ala - nissila , and j. p. pekola .distribution of entropy production in a single - electron box ., 9:644648 , 2013 .jukka p. pekola . towards quantum thermodynamics in electronic circuits ., 11:118123 , 2015 .christopher jarzynski , h. t. quan , and saar rahav .quantum - classical correspondence principle for work distributions ., 5:031038 , sep 2015 .h. tasaki .jarzynski relations for quantum systems and some applications .arxiv : cond - mat/0009244 , 2000 .p. talkner and p. hnggi .the tasaki - crooks quantum fluctuation theorem . ,40:f569 , 2007 . see note in text .v. vedral .the role of relative entropy in quantum information theory ., 74(1):197234 , 2002 .eric lutz and sergio ciliberto .information : from maxwell s demon to landauer s eraser ., 68(9):30 , 2015 .juan m. r. parrondo , jordan m. horowitz , and takahiro sagawa .thermodynamics of information . , 11(2):131139 , 2015 .m. b. ruskai and f. h. stillinger .convexity inequalities for estimating free energy and relative entropy . , 23(12):2421 , 1990 .takahiro sagawa .second law - like inequalities with quantum relative entropy : an introduction . in mikio nakahara and shu tanaka , editors , _ lectures on quantum computing , thermodynamics and statistical physics _ , volume 8 of _ kinki univ .series on quantum comput ._ , page 127 .world sci ., 2013 .m. campisi , p. talkner , and p. 
hnggi .fluctuation theorem for arbitrary open quantum systems ., 102:210401 , 2009 .m. campisi , p. talkner , and p. hnggi .thermodynamics and fluctuation theorems for a strongly coupled open quantum system : an exactly solvable case ., 42:392002 , 2009 .sebastian deffner and eric lutz .generalized clausius inequality for nonequilibrium quantum processes ., 105:170402 , oct 2010 .v. b. braginsky , y. i. vorontsov , and k. s. thorne .quantum nondemolition measurements ., 209:547 , 1980 . c. p. sun , x. x.yi , s. r. zhao , l. zhang , and c. wang .dynamic realization of quantum measurements in a quantized stern - gerlach experiment ., 9(1):119 , 1997 .erich joos . , pages 117 .springer , 1998 .walter t. strunz , lajos disi , and nicolas gisin .non - markovian quantum state diffusion and open system dynamics . in _ decoherence : theoretical , experimental , and conceptual problems _ , volume 538 of _ lecture notes in physics _ , pages 271280 , 2000 .q. a. turchette , c. j. myatt , b. e. king , c. a. sackett , d. kielpinski , w. m. itano , c. monroe , and d. j. wineland .decoherence and decay of motional quantum states of a trapped atom coupled to engineered reservoirs . ,62:053807 , 2000 .n. hermanspahn , h. hffner , h .- j .kluge , w. quint , s. stahl , j. verd , and g. werth .observation of the continuous stern - gerlach effect on an electron bound in an atomic ion ., 84(3):427430 , 2000 .r. e. s. polkinghorne and g. j. milburn .single - electron measurements with a micromechanical resonator . , 64:042318 , 2001 .w. zurek .decoherence and the transition from quantum to classical revisited . , 27:225 , 2002 .m. ballesteros , m. fraas , j. frhlich , and b. schubnel .indirect acquisition of information in quantum mechanics ., 162(4):924958 , 2016 .b. danjou , l. kuret , l. childress , and w. a. coish .maximal adaptive - decision speedups in quantum - state readout ., 6:011017 , feb 2016 .jean dalibard , yvan castin , and klaus mlmer . wave - function approach to dissipative processes in quantum optics . , 68(5):580583 , 1992 .h. j. carmichael .quantum trajectory theory for cascaded open systems ., 70(15):22732276 , 1993 .edwin a. power .the natural line shape . in w.t. grandy , jr . andp. w. milonni , editors , _ physics and probability : essays in honor of edwin t. jaynes _ , pages 101112 .cambridge univ . press , 1993 .e. t. jaynes .information theory and statistical mechanics .ii . , 108(2):171190 , oct 1957 .l. d. landau and e. m. lifshitz . .pergamon press , 1977 .. 6 44 .a. shabani and d. a. lidar .completely positive post - markovian master equation via a measurement approach ., 71:020101 , feb 2005 .sabrina maniscalco and francesco petruccione .non - markovian dynamics of a qubit . , 73:012111 , jan 2006 .serge haroche and jean - michel raimond . .oxford university press , 2006 .herbert walther , benjamin t h varcoe , berthold - georg englert , and thomas becker .cavity quantum electrodynamics . , 69(5):1325 , 2006 .serge haroche .nobel lecture : controlling photons in a box and exploring the quantum to classical boundary*. , 85(3):10831102 , jul 2013 .e. t. jaynes . some aspects of maser theory .microwave laboratory report number 502 , stanford univ . , 1958 .note that the atom - field interaction should also contain a diamagnetic term that is ignored here but may sometimes be grouped with an effective change in . .j. r. johansson , p. d. nation , and franco nori .qutip 2 : a python framework for the dynamics of open quantum systems . , 184(4):12341240 , 2013 .m. d. crisp . 
steak dinner problem ii . in w.t. grandy , jr . andp. w. milonni , editors , _ physics and probability : essays in honor of edwin t. jaynes _ , pages 8190 .cambridge univ . press , 1993 .the solution to the jaynes - cummings model under the rotating wave approximation is well - known. i summarize it in the notation of this work for completeness . for states with total excitations, the time - evolution operator decomposes into a block - diagonal, with the definitions, because of the simplicity of this system , measuring the atom also projects the cavity into a fock state .this simplifies the analysis , since we only need to track the pure probabilities , . assuming the incoming atomic states are chosen to be pure or at random ( with probabilities or , resp . ), eq . [ e : pt ] uses the fact that .this master equation has a non - trivial steady - state at .the existence of this steady - state , and the fact that the cavity does not have a canonical distribution , even when the atom does ( ) were noted by jaynes. experimentally , relaxation to the canonical distribution occurs because of imperfect isolation of the cavity , which allows thermalization interactions with external resonant photons and results in a near - canonical ( but not perfect ) steady state. such interactions could easily be added to the present model , but for clarity this analysis focuses on interaction with the single reservoir system , .
we carefully examine the thermodynamic consequences of the repeated partial projection model for coupling a quantum system to an arbitrary series of environments under feedback control. this paper provides observational definitions of heat and work that can be realized in current laboratory setups. in contrast to other definitions, ours uses only properties of the environment and the measurement outcomes, avoiding any reference to `measurement' of the central system's state in any basis. these definitions are consistent with the usual laws of thermodynamics at all temperatures, while never requiring complete projective measurement of the entire system. it is shown that the back-action of measurement must be counted as work rather than heat in order to satisfy the second law. comparisons are made to stochastic schrödinger unravelling and transition-probability-based methods, many of which appear as particular limits of the present model. these limits show that our total entropy production is a lower bound on traditional definitions of heat that trace out the measurement device. examining the master-equation approximation to the process at finite measurement rates, we show that most interactions with the environment make the system unable to reach absolute zero. we give an explicit formula for the minimum temperature achievable in repeatedly measured quantum systems. the phenomenon of minimum temperature offers a novel explanation of recent experiments aimed at testing fluctuation theorems in the quantum realm and places a fundamental purity limit on quantum computers.
+ gene expression in bacterial cells is modulated to enhance the cell s performance in changing environmental conditions . to this end ,transcription regulatory networks continuously sense a set of signals and perform computations to adjust the gene expression profile of the cell .a subset of such signals contains molecules that the cell can metabolize .these molecules range from nutrients to toxic compounds .a commonly occurring motif in the networks sensing such signal molecules is a negative feedback loop . in this motifan enzyme used to metabolize the signal molecule is controlled by a regulator whose action , in turn , is regulated by the same signal molecule .this motif allows for genes that are not transcription factors to negatively regulate their own synthesis . because these negative feedback loops are situated at the interface of genetic and metabolic networks , understanding their behavior is crucial for building integrated network models , as well as synthetic gene circuits .in fact , if one ignores the interface , the network topology gives the impression that feed back mechanisms are less frequent than feed forward loops .in addition , by ignoring feedback associated to signal molecules one would also tend to overemphasize the modular features of the overall system and underemphasize the average number of incoming links to proteins . even within the framework of a negative feedback loopthere are several different mechanisms possible both for transcriptional regulation and for the action of the signal molecule .we list below four mechanisms which are present in living cells , with examples taken from _[ schematic ] ) : + \(i ) the regulator , r , represses the transcription of the enzyme , e , which metabolizes the signal molecule , s. the signal molecule binds to the repressor resulting in the dissociation of the r - operator complex and an increase in the production of e. this mechanism is exemplified by a negative feedback loop in the _ lac _ system , where the roles of r , e and s are played by laci , -galactosidase , and lactose , respectively .+ ( ii ) r represses the transcription of e which metabolizes s. but here the signal molecule can bind to r even when it is at the operator site . when this happens the effect of r on the promoter activity is cancelled , or even reversed .two examples of this kind are the _ bet _ and _ mer _ systems , which are involved in the response of cells to the harmful conditions of osmotic stress and presence of mercury ions , respectively .+ ( iii ) here the regulator , r , is an activator of the transcription of e when s is bound to it . without the signal molecule ,r can not bind to the dna site and activate transcription .for instance , malt in complex with maltose is a transcriptional activator of genes which metabolize maltose .this mechanism differs from ( ii ) in that in the absence of s , r is a repressor in ( ii ) while here it does not affect the promoter activity . + ( iv ) here too, r alone can not bind to the operator site .however , in contrast to ( iii ) , r bound to s represses the transcription of e. further , in this case e increases the production of the signal molecule , rather than metabolizing it , thereby again making the overall feedback negative .one such example is the regulation of de novo purine nucleotide biosynthesis by purr .+ a major difference between these four loops is the manner in which the signal molecule acts . in ( i ) the binding of s to r drastically reduces its affinity to the dna site . 
on the other hand , in ( ii ) , ( iii ) and ( iv ) , the signal molecule increases , or does not significantly alter , the binding affinity of r and can also affect the action of the regulator when it is bound to the dna. henceforth we will refer to these two methods of action as ` mechanism ( 1 ) ' and ` mechanism ( 2 ) ' . in this paperwe have investigated how this difference in the mechanism of action of the signal molecule translates to differences in the steady state and dynamical behaviour of the simplest kind of negative feedback loops containing proteins and signal molecules .these loops have only one step , e , between the regulator and the signal molecule .further , the regulator is assumed to have only one binding site on the dna .we concentrate on the cases where r is a repressor and s lifts the repression ( i and ii ) . in particular , we show that the two mechanisms differ substantially in their dynamic behaviour when r is large enough to fully repress the promoter of e in the absence of s. we illustrate how the difference is used in cells by the examples of the _ bet _ , _ mer _ and _ lac _ systems .+ + first we consider how the steady state activity of the promoter of e responds to changes in the concentration of s for each of the mechanisms .consider a feedback loop , like fig .[ schematic ] ( i ) , where the operator can be found in one of two states : free , , and bound to the regulator , , with the total concentration of operator sites being a constant : .we assume that the promoter is active only when the operator is free , and completely repressed when it is bound by r. this loop uses mechanism ( 1 ) and is an idealization of the _ lac _ system in _ e. coli_. the promoter activity is given by : in steady state , and can be expressed as functions of the total concentration of regulators , , and the concentration of signal molecules , .the expression also contains the parameters and ( the equilibrium binding constant for r - operator binding and the corresponding hill coefficient ) , and ( for r - s binding . )equation [ eq : mech1_full ] in the methods section contains all the details .the main effect of s is to decrease the amount of free r because , where is the concentration of the r - s complex . for a feedback loop using mechanism ( 2 ), the operator can be found in one of three states : free , , bound to the regulator , , and bound to the regulator along with the signal molecule , . the total concentration of operator sites , , is constant .the promoter activity has a basal value ( normalized to 1 ) when the operator is free .when the regulator alone is bound it represses the activity .we assume the activity in this state is zero . when the operator is bound by r along with s the activity returns to the basal level .this is an idealization of the _ bet _ system . here, the promoter activity is given by : the main effect of s comes from the second term in the numerator of equation 2 , which is the concentration of the r - s - operator complex . as in the case of , in steady statethe activity can be expressed in terms of and . because of the third state of the operator , , the expression for includes one more parameter , , the equilibirum binding constant for r - operator binding when s is bound to r ( see equation [ eq : mech2_full ] in the methods section for details . ) for mechanism ( 2 ) , we mainly consider the case where , i.e. 
, the binding of the signal molecule does not change the binding affinity of the regulator to the operator .this is the simplest situation and illustrates the basic differences between the two mechanisms . in real systems these binding constants are often different .however , as we show , for _ bet _ and _ mer _ the inequality of and does not obscure the differences caused by the two mechanisms of action of the signal molecule .this is because the main effect of changing is simply to shift the position of the response curve .only when becomes very large ( which results in dissociation of r from the operator when s binds to it , as in _ lac _ ) does mechanism ( 2 ) effectively reduce to mechanism ( 1 ) .figure [ 3dplot ] shows the activities and for a range of values of and .the following observations can be made from the figure : * for sufficiently small values of there is no difference between and . * from and higher , requires larger and larger s to rise to its maximum value , i.e. , its effective binding constant increases with ( where we define to be the value of at which the activity is half - maximum . )* , on the other hand , has a which is remarkably robust to changes in , remaining close to for . *zooming in to the low region shows that rises more steeply than for small values .all these features can be explained by taking a closer look at the equations for ( eq . [ eq : mech1_partial ] and [ eq : mech1_full ] ) and ( eq . [ eq : mech2_partial ] and [ eq : mech2_full ] . ) taking the observations in reverse order , first we see that for small values of , the promoter activities rise as a power of : . from equations [ eq : mech1_full ] and [ eq :mech2_full ] we find that this power for mechanism ( 1 ) and for mechanism ( 2 ) .thus , as long as , mechanism ( 2 ) will have a steeper response at small values of .next , let us consider the amount of inducer needed to half - activate the promoter under the two mechanisms .the fact that is close to for is because of the term , which occurs in both the numerator and the denominator of equation [ eq : mech2_partial ] . when is large enough ( i.e. , ) , the operator is rarely free , and the constant term ( = 1 ) in eq .[ eq : mech2_full ] can be disregarded from both numerator and denominator . in that case only depends on the ratio between the binding affinities .accordingly becomes independent of the value of for .on the other hand , the activity is always highly dependent on . from equation [ eq : mech1_full ]we see that reaches half - maximum when .this happens when .therefore is an increasing function of for mechanism ( 1 ) .for both mechanisms , when drops below we enter a regime where the inducer is not needed for derepression . for our standard parameters , repressor concentration implies that and dominate in equation [ eq : mech2_partial ] .thereby the functional form of the activity approaches that of , as indeed seen from the regime in fig .in addition to these mathematical arguments , the above observations can be understood physically from the nature of the processes allowed in mechanisms ( 1 ) and ( 2 ) .consider the case of a fully repressed promoter ( when ) .mechanism ( 1 ) then requires dissociation of from the operator for the activity to rise and this is associated with a free energy cost proportional to . in mechanism( 2 ) there is no such cost and therefore a smaller amount of is required to achieve the same level of inhibition of r. 
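these scaling arguments can also be checked numerically against the full expressions of the methods section. a python sketch follows; the basal activity is normalized to 1, and the binding constants (in units of k_{ro}), hill coefficients and concentration ranges are illustrative choices rather than fitted values:

import numpy as np

def weights(S, R_tot, K_RO=1.0, K_RSO=1.0, K_RS=1.0, h=2, h_s=1):
    """statistical weights of the operator bound by r alone (w_ro) and by r together with s (w_rso)."""
    f_bound = (S / K_RS)**h_s / (1.0 + (S / K_RS)**h_s)      # fraction of regulator carrying s
    w_ro = ((R_tot / K_RO) * (1.0 - f_bound))**h
    w_rso = ((R_tot / K_RSO) * f_bound)**h
    return w_ro, w_rso

def activity_mech1(S, R_tot, **kw):
    """mechanism (1): only the free operator is active; the r-s complex does not bind the dna."""
    w_ro, _ = weights(S, R_tot, **kw)
    return 1.0 / (1.0 + w_ro)

def activity_mech2(S, R_tot, **kw):
    """mechanism (2): the free operator and the r-s-operator complex are both fully active."""
    w_ro, w_rso = weights(S, R_tot, **kw)
    return (1.0 + w_rso) / (1.0 + w_ro + w_rso)

def activity_general(S, R_tot, beta_r=0.0, gamma_rs=1.0, **kw):
    """equation 3: relative activities 1, beta_r and gamma_rs for the free, r-bound and r-s-bound operator."""
    w_ro, w_rso = weights(S, R_tot, **kw)
    return (1.0 + beta_r * w_ro + gamma_rs * w_rso) / (1.0 + w_ro + w_rso)

S = np.logspace(-3, 4, 400)
for R_tot in (2.0, 10.0, 100.0, 1000.0):
    s_half_1 = S[np.argmin(np.abs(activity_mech1(S, R_tot) - 0.5))]
    s_half_2 = S[np.argmin(np.abs(activity_mech2(S, R_tot) - 0.5))]
    print(R_tot, s_half_1, s_half_2)   # s_1/2 grows with r_tot for mechanism (1) but stays near k_rs for (2)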
thus , for genes which are typically completely repressed , and transcription of which , on the other hand , may be needed suddenly , mechanism ( 1 ) is inferior to mechanism ( 2 ) because it needs a larger amount of .after first discussing three real systems , we will elaborate on this response advantage by comparing the explicit time dependence of the two mechanisms in the next section .the most general framework within which the promoter activities of the enzymes in the _ bet _ , _ mer _ and _ lac _ systems can be represented is the following generalization of equations 1 and 2 : equation [ eq : general_full ] in the methods section shows the dependence of on and . are constants dependent on which system we are trying to describe . is the promoter activity in the absence of r and is used as a reference ( 1.00 ) . and are the relative promoter activities in the presence of r alone , and r together with s , respectively .table 1 shows the values of as well as how the binding affinity of r to the operator is changed by the binding of s ( the ratio ) for the three systems .we have used the hill coefficients ( assuming that two protein subunits are involved in dna binding ) and ( for simplicity and to compare with fig .[ 3dplot ] . ) mechanism ( 1 ) and ( 2 ) are special cases of this equation .equations [ eq : mech1_partial ] and [ eq : mech1_full ] , for mechanism ( 1 ) , are obtained by setting and taking the limit ( the value of is irrelevant in this limit ) .equations [ eq : mech2_partial ] and [ eq : mech2_full ] , for mechanism ( 2 ) , are obtained by setting . from table 1 , it is clear then that _ lac _ uses mechanism ( 1 ) and _ bet _ uses mechanism ( 2 ) ._ mer _ is an even more extreme case of mechanism ( 2 ) where the term has a much larger weight ( ) than the idealized mechanism ( 2 ) ..values of parameters in equation 3 for three systems found in _e. coli_. in the case of _ lac _ we used a simplified case , where the _ lac _ promoter is repressed by laci binding to a single operator , _o1_. [ cols="^,^,^,^,^,^",options="header " , ] fig . [ 4systems ] shows the response curves for the _ bet _ , _ mer _ and _ lac _ systems ._ bet _ and _ mer _ , representatives of mechanism ( 2 ) , and _, a representative of mechanism ( 1 ) , indeed behave similar to the idealized versions of the two mechanisms investigated in fig .[ 3dplot ] .the difference between _ bet _ and _ mer _ is the result of changes in the binding affinity of r in the absence and presence of s. a further complication that could occur in real systems is that the probabilities of rna polymerase recruitment could be different for different states of the operator .we find that taking the changing probabilities of rna polymerase recruitment into account does not change the mathematical form of the equations for the promoter activities ( see methods . )thus , this additional complication does not affect our results .+ we now turn to an analysis of differences in the temporal behaviour of the feedback mechanisms .we model the dynamics by two coupled differential equations : and and represent the concentrations of the enzyme and signal molecule , respectively .the first term in the equation is the rate of production of e which is equal to the promoter activity , ( equations [ eq : general_partial ] and [ eq : general_full ] ) .the second term represents degradation of e. 
the second equation describes the evolution of the concentration of s ; it increases if there is a source , , of s ( for instance from outside the cell ) and decreases due to the action of the enzyme e. in the first equation both terms could be multiplied by rate constants , representing the rates of transcription , translation and degradation .however , we have eliminated these constants by measuring time , , in units of the degradation time of e , and by rescaling appropriately ( see the methods section for details . ) thus , in these equations , and are dimensionless , with lying between 0 and 1 . can then be interpreted as the maximum rate of degradation of s in units of the degradation rate of e. * a. + * + * b. + * fig .[ dynamics]a ( left panel ) shows what happens if the cell is subject to a sudden pulse of s. that is , the source always , but at time concentration of s abruptly jumps from zero to .this triggers an increase in the production of e which then starts to decrease the concentration of s. there is no further addition of s to the system , so eventually all of it is removed and the system returns to its condition before the pulse . from the figure we see that , for the same parameter values , mechanism ( 2 ) results in a much faster removal of s because the response of e to the pulse is larger .the right panel adds further evidence to this conclusion .it shows , for both mechanisms , perturbed by varying sized pulses of s , the time taken for the concentration of s to fall to .this measure shows that mechanism ( 2 ) generally responds faster than mechanism ( 1 ) .the two mechanisms converge for small perturbations because there is no signal to respond to ( levels of are very low ) , and for very large perturbations because then the promoter becomes fully activated by the huge concentration of the inducing molecule s. fig .[ dynamics]b shows what happens when the cell is subject to the appearance of a constant source of s. at time the value of abruptly jumps from zero to per degradation time of e. in response , the production of e is increased and eventually reaches a new steady state value to deal with the constant influx of s ( left panel ) . from the right panel of the figure it is evident that mechanism ( 2 ) is able to suppress the amount of s much more than mechanism ( 1 ) for most values of the rate of influx . again , for similar reasons , the two mechanisms converge at small and large values of .these observations apply for the case when for mechanism ( 2 ) .the only effect on the dynamical equations caused by changing the ratio lies in the expression for in the first term of the equation . as mentioned in the previous section , changing this ratio mainly results in shifting of the response curve and as is increased , approaches .for the dynamics this results in an increase in ( for a pulse ) and in the steady state value of ( for a source ) as is increased .these values approach those for mechanism ( 1 ) in the limit .the amount by which has to be boosted to effectively reduce mechanism ( 2 ) to mechanism ( 1 ) increases with increasing , as in the steady state case .+ in the present paper we have discussed various strategies for negative feedback mechanisms involving the action of one signal molecule on a transcription factor . 
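a sketch of the corresponding pulse experiment (fig. [dynamics]a) is given below, reusing the activity functions from the previous sketch. the enzyme equation follows the rescaled model directly; the removal term for s is an assumed michaelis-menten form (maximum rate omega per unit enzyme, half-saturation k_m) introduced only for illustration, and the 10% clearance threshold is likewise an arbitrary choice.

import numpy as np
from scipy.integrate import solve_ivp
# uses activity_mech1 / activity_mech2 defined in the steady-state sketch above

def make_rhs(act, omega=10.0, k_m=1.0, sigma=0.0):
    """d e/d tau = act(s) - e ;  d s/d tau = sigma - omega * e * s / (s + k_m)   (assumed removal form)."""
    def rhs(tau, y):
        e, s = y
        return [act(s) - e, sigma - omega * e * s / (s + k_m)]
    return rhs

def time_to_clear(activity, s_pulse=100.0, frac=0.1, r_tot=50.0):
    """time for a pulse of s to fall to frac * s_pulse, starting from the unperturbed steady state."""
    act = lambda s: float(activity(s, r_tot))
    e0 = act(0.0)                                   # pre-pulse steady-state enzyme level
    hit = lambda tau, y: y[1] - frac * s_pulse
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(make_rhs(act), (0.0, 500.0), [e0, s_pulse], events=hit, max_step=0.1)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

print(time_to_clear(activity_mech1), time_to_clear(activity_mech2))   # mechanism (2) clears the pulse faster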
in particular , we have investigated two broadly different ways in which the signal molecule may change the action of the transcription factor : first , it could inhibit its action by sequestering it , and second , it could bind to the transcription factor while it is on the dna site and there alter its action .the first mechanism occurs when the binding of the signal molecule reduces the affinity of the transcription factor to such an extent that it can not subsequently remain bound to the dna .this kind of inhibition of the transcription factor occurs in the _ lac _ system , where ( allo)lactose reduces the binding affinity of laci to the operator _o1 _ by a factor of 1000 .this mechanism has also been exploited in synthetic gene networks . in the second mechanism the binding affinityis not altered that much ; _ bet _ and _ mer _ belong to this category . in the case of _ mer_ the presence of the signal molecule reverses the action of the transcription factor , changing it from a repressor to an activator . in steady state ,the two mechanisms differ most when the levels of the transcription factor are large enough to ensure substantial repression in the absence of the signal molecule .the underlying reason for these differences is that , in this regime of full repression , for each transcription factor that binds to the signal molecule there is , for mechanism ( 1 ) , an extra energy cost for the dissociation of the transcription factor from the dna .the dynamical behaviour of feedback loops based on mechanisms ( 1 ) and ( 2 ) also differ substantially when promoters are in the fully repressed regime .we have shown that when the systems are perturbed by the sudden appearance of either a pulse or a source of signal molecules , mechanism ( 2 ) is generally faster and more efficient than mechanism ( 1 ) in suppressing the levels of the molecule .this prediction could be tested using synthetic gene circuits which implement these two mechanisms , for instance by extending the circuits built in ref .in addition , this observation fits neatly with the fact that the _ bet _ and _ mer _ systems use versions of mechanism ( 2 ) , because they respond to harmful conditions ( osmotic stress and the presence of mercury ions , respectively ) and therefore need to respond quickly , while mechanism ( 1 ) is associated with _lac _ , a system involved in metabolism of food molecules which therefore does not need to be as sensitive to the concentration of the signal molecules . in the case of _ lac _ it is probably energetically disadvantageous for the cell to respond to low levels of lactose sources .the differences between mechanisms are clear when they are compared keeping all parameters constant . in cells ,however , parameter values vary widely from one system to another which can obscure the differences caused by the two mechanisms . for instance, it is possible to increase the speed of response of mechanism ( 1 ) by reducing the value ( i.e. , increasing the binding strength between the regulator and the signal molecule . ) keeping all other parameters constant needs to be decreased by a factor 10 for mechanism ( 1 ) to behave the same as mechanism ( 2 ) when .this factor increases as increases , i.e. 
, as the repression is more complete .this can again be understood in terms of the extra energy cost for mechanism ( 1 ) : increasing sufficiently makes the extra energy cost insignificant compared to the r - s binding energy .thus , a negative feedback loop in a real cell which needs to respond to signals on a given fast timescale could do so either by using mechanism ( 2 ) , or by using mechanism ( 1 ) with a substantially larger r - s binding affinity . for signal molecules where it is not possible for the r - s binding to be arbitrarily strengthened , mechanism ( 2 )would be the better choice .on the other hand , mechanism ( 2 ) also has its disadvantages .for instance , at promoters with complex regulation the dna bound transcription factor using mechanism ( 2 ) may interfere with the action of other transcription factors . in figure 1we showed 4 examples , and have extensively discussed example ( i ) and ( ii ) .another implementation of mechanism ( 2 ) is example ( iii ) , with an activity $ ] . in general this regulatory moduleis at least as efficient as mechanism ( 2 ) , with a dynamical response which is even more efficient in the intermediate range of ( around ) . the loop in fig .[ schematic](iv ) is , on the other hand , a different kind of negative feedback from the other three examples .it involves synthesis of the signal molecule , and thus is aimed at maintaining a certain concentration of the molecule , rather then minimising or consuming it . in practice , it is the kind of feedback that is common in biosynthesis pathways , where it helps maintain a certain level of amino acids , nucleotides , etc ., inside the cell . the simple one - step, single - operator negative feedback loops investigated here clearly indicate that the mechanism of action of the signal molecule is a major determinant of the steady state and dynamical behaviour of the loop .additional complexity in the mechanism of regulation ( e.g. , cooperative binding of a transcription factor to multiple binding sites ) or of the regulatory region ( competing transcription factors or multiple regulators responding to different signals ) would open up more avenues for the differences between the two mechanisms to manifest themselves .these feedback loops form the link connecting the genetic and metabolic networks in cells .in fact , such loops involving signal molecules are likely to be a dominant mechanism of feedback regulation of transcription .feedback using only regulatory proteins , without signal molecules , is probably too slow because it relies on transcription to change the levels of the proteins .negative auto - regulation can speed up the response of transcription regulation .nevertheless , feedback loops based on translation regulation , active protein degradation or metabolism of signal molecules will certainly be able to operate on much faster timescales .this is probably why feedback loops are rare in purely transciptional networks , which has contributed to the view that feed forward loops are dominant motifs in transcription regulation .taking feedback loops involving signal molecules into account alters this viewpoint substantially . in _e. coli _ the number of feedforward loops in the transcription regulatory network has been reported to be 40 . based on data in the ecocyc database , we know that there are more than 40 negative feedback loops involving signal molecules where the regulation is by a transcription factor . 
adding this many feedback loops to the genetic network would also change the network topology substantially . in particular , it would diminish the distinction between portions of the network that are downstream and upstream of a given protein .the effect of this would be to make the network more interconnected and reduce the modularity of the network by increasing the number of links between apparently separate modules .+ + the operator can exist in one of two states : ( i ) free , , and ( ii ) bound to the regulator , .if the concentration of free regulators is then similarly , the concentration of regulators bound to signal molecules is and the total concentration of regulators , a constant , is given by we assume that the number of signal molecules is much larger than the number of regulators which , in turn , is much larger than the number of operator sites , i.e. , . then we can take to be approximately constant and we can take , giving : and ^h o_{\rm free}\ ] ] using these and we get : ^h}.\ ] ] + the operator can exist in one of three states : ( i ) free , , ( ii ) bound to the regulator , , and ( iii ) bound to the regulator along with the signal molecule , . again , with similar assumptions , we get equation [ eq : r ] and [ eq : ro ] for and plus an additional expression for : ^h o_{\rm free}.\ ] ] using and equation [ eq : mech2_partial ] for , we get : ^h}{1+\left[\frac{(r^{tot}/k_{ro})}{1+(s / k_{rs})^{h_s}}\right]^h+\left[\frac{(r^{tot}/k_{rso})(s / k_{rs})^{h_s}}{1+(s / k_{rs})^{h_s}}\right]^h}.\ ] ] + the most general expression for the activity , shown in equation 3 , can also be rewritten using the expressions for and calculated above : ^h+\gamma\left[\frac{(r^{tot}/k_{rso})(s / k_{rs})^{h_s}}{1+(s / k_{rs})^{h_s}}\right]^h}{1+\left[\frac{(r^{tot}/k_{ro})}{1+(s / k_{rs})^{h_s}}\right]^h+\left[\frac{(r^{tot}/k_{rso})(s / k_{rs})^{h_s}}{1+(s / k_{rs})^{h_s}}\right]^h}.\ ] ] + a more correct , but more cumbersome , way to calculate the promoter activities is to explicitly take rna polymerase into account .then , in the most general case , the system can be in one of 6 states : * r not bound to operator , rnap not recruited : weight=1 . *r not bound to operator , rnap recruited : wt= .* r bound to operator , rnap not recruited : wt= . * r bound to operator , rnap recruited : wt= . *r - s bound to operator , rnap not recruited : wt= . *r - s bound to operator , rnap recruited : wt= . here are the probabilities ( per concentration ) for recruitment of rna polymerase in the three different states of the operator , and is the concentration of rna polymerase . taking the promoter activity to be 0when the polymerase is not recruited and in states ( 2 ) , ( 4 ) and ( 6 ) , respectively , the activity can be written as follows : by absorbing the constants into , and , we recover equation [ eq : general_full ] .+ with all rate constants included , the dynamical equations for the time evolution of the concentrations of e and s can be written as follows : now measuring time in units of the degradation time of e : , and transforming e using , we get which , with and , are the equations used in the main text .+ this work was supported by the danish national research foundation .salgado , h. , santos - zavaleta , a. , gama - castro , s. , millan - zarate , d. , diaz - peredo , e. , sanchez - solano , f. , perez - rueda , e. , c. bonavides - martinez , c. , and collado - vides , j. 
( 2001 ) regulondb ( version 3.2 ) : transcriptional regulation and operon organization in escherichia coli k-12 . _ nucleic acids res . , _ * 29 * , 72 - 74 . keseler , i. , collado - vides , j. , gama - castro , s. , ingraham , j. , paley , s. , paulsen , i. , peralta - gil , m. , and karp , p. ( 2005 ) ecocyc : a comprehensive database resource for escherichia coli . _ nucleic acids res . , _ * 33 * , d334 - d337 . kobayashi , h. , kaern , m. , araki , m. , chung , k. , gardner , t. s. , cantor , c. r. , and collins , j. j. ( 2004 ) programmable cells : interfacing natural and engineered gene networks . _ proc . natl . acad . sci . ( usa ) , _ * 101 * , 8414 - 8419 . rokenes , t. p. , lamark , t. , et al . ( 1996 ) dna - binding properties of the beti repressor protein of escherichia coli : the inducer choline stimulates beti - dna complex formation . _ j. bacteriol . , _ * 178 * , 1663 - 1670 . meng , l. m. , kilstrup , m. , et al . ( 1990 ) autoregulation of purr repressor synthesis and involvement of purr in the regulation of purb , purc , purl , purmn and guaba expression in escherichia coli . _ eur . j. biochem . , _ * 187 * , 373 - 379 .
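as a small numerical addendum to the methods above , the sketch below codes the promoter activities of mechanism ( 1 ) and mechanism ( 2 ) and integrates a dimensionless version of the feedback dynamics for a constant source of s. the activity expressions are reconstructed from the partial formulas printed in the methods section and should be read as my interpretation of them ; the form of the dynamical equations ( production of e at the promoter activity , linear degradation of e , removal of s by e through an assumed michaelis - menten term ) , the parameter values and all function names are illustrative assumptions rather than the exact equations of the main text .

```python
import numpy as np

# Mechanism (1): the signal molecule S sequesters the regulator R off the DNA,
# so only the free regulator represses the promoter.
def activity_mech1(s, r_tot=10.0, k_ro=1.0, k_rs=1.0, h=2, h_s=1):
    r_free = r_tot / (1.0 + (s / k_rs) ** h_s)          # R left free after R-S binding
    return 1.0 / (1.0 + (r_free / k_ro) ** h)

# Mechanism (2): the R-S complex stays bound to the operator with affinity k_rso;
# gamma is the residual activity of the R-S-bound promoter (gamma = 1: not repressing).
def activity_mech2(s, r_tot=10.0, k_ro=1.0, k_rso=1.0, k_rs=1.0, h=2, h_s=1, gamma=1.0):
    bound_frac = (s / k_rs) ** h_s / (1.0 + (s / k_rs) ** h_s)   # fraction of R in R-S form
    w_r = (r_tot * (1.0 - bound_frac) / k_ro) ** h               # operator occupied by R
    w_rs = (r_tot * bound_frac / k_rso) ** h                     # operator occupied by R-S
    return (1.0 + gamma * w_rs) / (1.0 + w_r + w_rs)

# Assumed dimensionless dynamics:  dE/dt = activity(S) - E,
#                                  dS/dt = sigma_s - v_max * E * S / (K_M + S)
def simulate(activity, sigma_s=1.0, v_max=20.0, k_m=1.0, t_end=20.0, dt=1e-3):
    e, s = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        de = activity(s) - e
        ds = sigma_s - v_max * e * s / (k_m + s)
        e, s = e + dt * de, max(s + dt * ds, 0.0)
    return e, s

if __name__ == "__main__":
    print("mechanism (1), constant source:", simulate(activity_mech1))
    print("mechanism (2), constant source:", simulate(activity_mech2))
```

with these ( arbitrary ) parameters the run reproduces the qualitative behaviour described above : the steady - state level of s is substantially lower for mechanism ( 2 ) than for mechanism ( 1 ) , and a plain euler step suffices because the rescaled equations are smooth and well damped .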
|
+ * the molecular network in an organism consists of transcription / translation regulation , protein - protein interactions / modifications and a metabolic network , together forming a system that allows the cell to respond sensibly to the multiple signal molecules that exist in its environment . a key part of this overall system of molecular regulation is therefore the interface between the genetic and the metabolic network . a motif that occurs very often at this interface is a negative feedback loop used to regulate the level of the signal molecules . in this work we use mathematical models to investigate the steady state and dynamical behaviour of different negative feedback loops . we show , in particular , that feedback loops where the signal molecule does not cause the dissociation of the transcription factor from the dna respond faster than loops where the molecule acts by sequestering transcription factors off the dna . we use three examples , the _ bet _ , _ mer _ and _ lac _ systems in _ e. coli _ , to illustrate the behaviour of such feedback loops .
|
a combination of a 2d finite - volume plasma transport code with a kinetic monte - carlo model for neutral particles is typically applied for numerical modelling of the tokamak edge and divertor plasmas .a well known example of such modelling tool is the code package b2-eirene ( solps ) widely used in the field .the monte - carlo method allows physically accurate description of atomic and molecular kinetics in complex geometries , but has a disadvantage of random error - statistical noise in the calculated quantity .there were always concerns that this statistical noise can have detrimental impact on the coupled solution . in the present paper one specific noise related issue which can lead to pathological solutions is addressed - violation of the global particle balance .it is shown that the error in the steady - state particle balance can be presented as a sum of three terms .those are operator splitting error , residual of the fluid solver , and the time - derivative .whereas the first term can be effectively reduced by the source re - scaling the reduction of residuals may require iterative solution of the discretized fluid equations after each call of the monte - carlo model .this can , in turn , pose severe restrictions on the time - step and lead to a very long overall run - time .e.g. in the iter modelling studies one model run could take several months of wall - clock time .special diagnostics for monitoring of the particle balance allow to clearly identify the cases when reduction of residuals is absolutely necessary , and the corresponding measures must be taken .this paper presents in condensed form the most important findings from a dedicated studiy of the solps code .prototypes of the numerical diagnostics were implemented and tested in the code solps4.3 which is the legacy version of b2-eirene used in the past for the iter design modelling .the approach itself is thought to be applicable to any finite - volume edge code .the numerical convergence is analyzed here only in terms of the global balances and criteria of the ( quasi-)steady - state .it is not attempted to use the stricter methods of analysis proposed for the combination of fluid and monte - carlo models in .only steady - state solutions are considered .the rest of the paper is organized as follows . in the next section a finite - volume fluid code with source terms calculated by monte - carlois described in general terms . in section [ diagnosticss ]the diagnostics for monitoring of the particle balance are introduced .an example of calculations with different error ( residual ) reduction techniques is discussed in section [ examples ] .further methods which can be used to reduce the residuals and the associated error in the particle balance are outlined in section [ reductions ] .last section summaries the conclusions .here only minimal information about numerical procedure of the code b2-eirene is given which is necessary for the subsequent discussion .the plasma transport code b2 solves a set of 2d ( axi - symmetric ) equations for particle conservation , parallel momentum balance , electron and ion energy .the full set of equations can be found in ref . 
,chapter 2 .the computational domain comprises the scrape - off - layer ( sol ) region outside of the 1st magnetic separatrix , and the edge of the core plasma inside the separatrix .finite - volume discretization of the differential equations leads to a set of algebraic equations which can be symbolically written as : here is the solution vector : is the number density and is the parallel velocity of the ion fluid , and are the electron and ion temperatures .the discrete variables are defined in the cell centers or on the cell faces of the grid . the non - linear vector function , are the source terms calculated by the test particle monte - carlo method . to find the solution of equation ( [ generaleq ] ) false time - stepping is used .a discrete time - derivative is added to the equations , and iterations over `` time '' are performed . on each time - iteration solution of the following set of equation has to be found : the `` time derivative '' is defined such that .e.g. for the particle continuity , where is the time - step .the notation with tilde underlines that this source term is calculated by monte - carlo and contains random error , as opposite to the `` exact '' value which would be obtained with the infinite number of test particles . in the code b2 the set of non - linear algebraic equations ( [ timestepeq ] )is solved by simple iterations and block gauss - seidel algorithm ( splitting by equations ) .the so called `` internal iterations '' of b2 are described in detail in ref . ,chapter 3 , one may also refer to ref . , chapter 1.2 .approximate solution obtained at the end of internal iteration can be inserted back into equation ( [ timestepeq ] ) to find the residual : that is , the found fulfills the equation : by comparing with equation ( [ generaleq ] ) one can see that the difference between and the right hand side of equation ( [ internal_iterationeq ] ) can be seen as generalization of the common residual . in the simplest procedurethe source terms are calculated at the beginning of internal iterations and are fixed afterward .that is , they stay as .however , certain modifications of the sources can be made in the iterative solver to adjust them with the changed plasma solution .this modification is reflected in the notation as .critical importance of the very high accuracy in the global particle balances for the reactor - scale edge modelling was recognized back at the early stages of the iter analysis . to reach this high accuracythe monte - carlo neutral transport code must ensure perfect particle conservation in its solution .the internal balance in the neutral solver is usually achieved by re - scaling of the volumetric ion sources estimated by the statistical procedure to make them entirely consistent with the primary sources of neutral particles . to increase accuracy the particles originating from the different primary sources sampled independently from each other - the source is split into independent `` strata '' .the primary sources of neutrals are : i ) recombination of ions on the solid surfaces - `` recycling '' ; ii ) volumetric recombination in the plasma ; iii ) gas puff ; iv ) erosion .the strength of recycling sources is proportional to the ion fluxes .if the volumetric ion sources stay fixed , but the fluxes of neutralized ( recycled ) ions change in the course of internal iterations , than an imbalance in the sinks and sources occurs . 
to compensate for this inconsistency the sources of ions coming from recycling strata : ,must be re - scaled as follows : here is the index of internal iteration , , is the total flux of neutralized ions to which the source is proportional .e.g. if is he then is the sum of the fluxes of he and he .numerical diagnostic for monitoring of the steady - state global particle balance can be derived from equation ( [ internal_iterationeq ] ) by transforming it into the form : \label{generalized_residualeq}\end{aligned}\ ] ] error ( inconsistency ) of the global particle balance is defined separately for each ion species .`` ion species '' here is the chemical element as opposite to `` ion fluids '' which are charged states of an element .e.g. species carbon includes 6 ion fluids from c to c .equation ( [ generalized_residualeq ] ) is applied to discretized continuity equation for each ion fluid in each cell .then the sum is calculated : = \sum_i \sum_{\alpha ' } \left [ - r^{\alpha'}_i + \tilde s^{\alpha'}_i\left(\phi^m_k \right ) -\tilde s^{\alpha'}_i\left(\phi^m_k | \phi_{k-1 } \right ) - d^{\alpha'}_i\left ( \phi^m_k , \phi_{k-1 } \right ) \right ] \label{particle_balanceeq}\ ] ] here is the sum over all grid cells , is the sum over all ion fluids which belong to ion species .it is readily seen that zero left hand side of equation ( [ particle_balanceeq ] ) means perfect balance between volumetric sources and fluxes , and the right hand side is the error in the global particle balance of species . alternative way of writing the particle balance uses formulation via fluxes : here is the strength of external particle source - gas puff , is the ion flux through the core grid boundary , is the flux sputtered from the solid surfaces , is the flux ( of both ions and neutrals ) absorbed on solid surfaces - pumped flux , is the flux of atoms which leak to the core .the final steady - state solution has to self - adjust in such way that the rate with which the particles are removed from the system becomes equal to the particle input : that is , serves as a scale to which the particle balance error has to be compared .the numerical solution can be considered as physically meaningful only if this error .coming back to equation ( [ particle_balanceeq ] ) , its right hand side yields the following expression for the relative error : }{\gamma^\beta_{in}}\ ] ] first term contains residuals calculated with equation ( [ residualeq ] ) after the end of internal iterations .this is the error in the solution of the set of nonlinear finite - volume equations on each time - iteration .the term is due to inconsistency of the neutral - related sources calculated on the `` old '' and `` new '' plasma .it can be called an operator splitting error .this term can become large if , e.g. 
, the re - scaling procedure , equation ( [ residualeq ] ) , is not implemented .last term is the time derivative which is considered as error when a stationary solution is looked for .if the plasma fluxes in equation ( [ balance_fluxeseq ] ) are taken from the solution , and the neutral fluxes are calculated on the same plasma , then it is easy to show that equations ( [ balance_fluxeseq ] ) and ( [ balance_residualeq ] ) must yield exactly same result when one extra condition is fulfilled .this condition is the discrete analogue of the divergence theorem : here is the total flux of ions of species to the grid boundaries .the total ion source is calculated as the total source of neutral particles minus their pumped and leaked fluxes : volume recombination does not appear in equation ( [ total_sourceeq ] ) because atoms originating from recombination which re - ionize back in plasma do not contribute to the net source , and particles which are removed from the system are already included in and . subtracting equation ( [ total_fluxeq ] ) from equation ( [ total_sourceeq ] ) yields the nominator of equation ( [ balance_fluxeseq ] ) . in practiceit makes sense to use both diagnostics in parallel .incorrect particle balance in the solution for neutrals or a mistake in the transfer of ion fluxes to the monte - carlo code manifests itself as non - physical particle sinks or sources .the diagnostic of equation ( [ balance_residualeq ] ) may not be able to detect them because it does not distinguish between `` legitimate '' and `` illegitimate '' sources and sinks of neutrals .this distinction is made in equation ( [ balance_fluxeseq ] ) .the two diagnostics are complimentary to each other and enable an additional consistency check .an example discussed here is based on a solps4.3 run from the data - base of iter simulations ( case # 1568vk4 , see ref . , chapter 4.2 ) .the model plasma consists of all charged states of d , he and c. power entering the computational domain from the core is equal to =80 mw , 47 % of is radiated , mainly by c ions .the d particle content is controlled by the gas puff =1.17e22 d - at and ion flux from the core =0.91e22 s .influx of he ions from the core is set to =2.1e20 s .all plasma facing components in the model are assumed to be covered by carbon .the pump is modelled by an absorbing surface in divertor beneath the dome .the solution represents a relatively hot attached plasma in front of divertor targets , with insignificant parallel momentum losses and volume recombination .+ + + in the iter modelling studies the b2-eirene code was always applied with internal iterations in the fluid solver . in the model run in question =20 internal iterationsare used , the time - step is set to =3e-7 sec .significant increase of the time - step is not possible : with -6 sec a numerical instability develops and no stationary solution can be found .it turns out that can be increased by orders of magnitude if no - internal iterations are applied , that is . in this caseno visible instability develops even with =1e-4 sec .however , solutions obtained with and without internal iterations - they are shown in figure [ comparing_solutionsf ] - strongly deviate from each other .strictly - speaking , in the presence of monte - carlo noise in the source terms the solutions never reach true steady - state .one can only speak about quasi - steady - state solution which randomly oscillates around some average . 
as applied tothe b2-eirene runs , the `` quasi - steady - state '' is defined through characteristic decay times of selected parameters derived from their time - traces , see appendix . in practicethe run is regarded as converged if the condition of quasi - steady - state is fulfilled , and if errors in the global power and particle balances are small .errors in the balances for the two test runs are given in table [ particle_energy_balancet ] .the power balance error is defined by equation ( [ energy_balance_diageq ] ) , are calculated using equation ( [ balance_fluxeseq ] ) .one can see that is small in both cases .the situation is completely different for the particle balance .whereas in the run made with both , % , in the `` fast '' run the error approaches 100 % .that is , the pumped fluxes are negligible compared to . individual terms of the error are shown in table [ particle_balancet ] for both recycling species d and he .there is a good agreement between and calculated independently by two diagnostics . from table[ particle_balancet ] the reason of the large error in case immediately becomes clear .while and always remain relatively small , becomes very large if the code is operated without internal iterations after each monte - carlo call .particle balance is much more difficult to converge than the power balance because of different relation between the controlling flux and internal sources and sinks in the system . for the powerthe sources and sinks in plasma are smaller than .in contrast , the particles are `` recycled '' between the plasma and solid surfaces and the total volumetric ion sources by far exceed . in the present example =4.3e24 s and =8.1e22 s .those numbers are more than two orders of magnitude larger than of those species .this problem does not appear for c because in the present model all incident c particles are absorbed on the surfaces - this species does not recycle .c|cccc case & & & & + =20 , =3e-7 & 0.74 & 1.39 & 6.77 & 0.023 + =1 , =1e-4 & 0.32 & 91.5 & 99.3 & 4.6 + =1 + 99 , =1e-4 & 1.8 & 16.1 & 14.0 & 0.53 + [ particle_energy_balancet ] c|ccc|ccc case & & & & & & + =20 , =3e-7 & 0.02 & 0.14 & -1.54 & 3.10 & 0.67 & -10.63 + =1 , =1e-4 & 91.6 & 1.06 & -1.98 & 100.7 & -0.11 & 0.15 + =1 + 99 , =1e-4 & 0.17 & 16.3 & 0.19 & -0.10 & 4.52 & 8.92 + [ particle_balancet ] in the b2-eirene model run above extra measures for reduction of residuals on each time - iteration were absolutely necessary . only in this casea solution can be obtained which is correct in terms of the global balances .techniques , such as internal iterations in b2 , can impose severe limitation on the time - step , and it is not attractive from the run - time point of view to operate the code in this mode . experience has shown that the use of b2-eirene with =1 does not always lead to deviations as large as that shown in figure [ comparing_solutionsf ] .e.g. in iter cases with single fluid ( d only ) plasma was found to be sufficiently small both with and without internal iterations , and the obtained solutions are very close to each other , see example in ref . 
,chapter 4.1 .the multi - fluid simulation analyzed here clearly demonstrates that this must not always be the case .the example emphasizes that in each simulation the particle balance has to be carefully monitored with the special diagnostics .too large error detected by the diagnostic is an unequivocal indication that the residual reduction techniques must be applied irrespective of the run - time penalty which they impose .a series of studies have been undertaken with the b2-eirene code to find algorithms which would deliver sufficiently good accuracy without penalizing the run - time .their outcome may be of general interest for developers and users of other edge modelling codes as well .main results are briefly summarized in this section . as a simplest remedy to the particle balance problem a `` 0d correction ''was first tried , see ref . , chapter 5.5 . the ion density in the whole computational domainis multiplied by a constant factor calculated in such way that with the corrected ion fluxes automatically becomes zero .it was found that this method can not be applied because it always produces solutions oscillating in time , and no stationary solutions .much more success was achieved with a correction based on iterative relaxation of the finite - volume continuity equations .technical details of the implementation in b2 can be found in ref . ,chapter 5.2 - 5.4 .this algorithm works as follows .the whole set of equations for particle , momentum and energy balances is relaxed only on the first internal iteration . on subsequent iterations only equations for particle continuity are relaxed . to be precise ,in the code b2 those are pressure correction equations where both the density and velocity fields are modified .( b2 uses compressible version of the patankar s simple algorithm , see ref . , chapter 6.7 and ref . , chapter 3 . ) nevertheless , correction of the particle balance via relaxation of the pressure correction equations was found to be very reliable .tests performed for the same iter model as in section [ examples ] showed that such iterations robustly converge with time - steps up to =1e-4 sec .results obtained with this algorithm can be found in the last row in tables [ particle_energy_balancet ] and [ particle_balancet ] .the run was performed with 99 iterations for continuity equations after one full internal iteration , which is reflected in the designation =1 + 99 . despiteincreased the method leads to significant reduction of the total error due to reduction of .as expected , the main disadvantage of this procedure is that it increases residuals of other equations .closer investigations ( ref . , chapter 6.1 ) have shown that especially the parallel momentum balance suffers .however , comparison of the solutions obtained with full internal iterations and with the reduced scheme demonstrates that they are close to each other : the =1 + 99 case is shown by dashed line in figure [ comparing_solutionsf ] .moreover , tests have shown that this result holds even for the iter model with high density detached divertor , see ref . , chapter 6.4 .hence , the method can be suggested for use in a two stage approach for fast finding of the initial approximation to the solution which is then refined on the second `` slow '' stage by more accurate techniques . 
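the values quoted in the error tables above , and referred to throughout this section , come from the two complementary diagnostics introduced earlier ; written out as code they amount to a few lines of bookkeeping . the sketch below is illustrative only : the argument names are placeholders rather than actual b2 or solps variables , the sign convention follows my reading of the residual - based balance equation , and the grouping of the fluxes into particle input and removal follows my reading of the flux - based form .

```python
import numpy as np

def balance_error_from_residuals(residuals, src_new, src_old, time_derivs, gamma_in):
    """Relative particle-balance error of one ion species, assembled from the
    cell-wise terms of the generalized residual: finite-volume residuals R,
    the operator-splitting mismatch between sources evaluated on the new and
    on the old plasma, and the false-time-step derivative D. The arrays are
    summed over all grid cells and all charge states of the species and the
    result is normalised to the particle input gamma_in."""
    total = np.sum(-np.asarray(residuals) + np.asarray(src_new)
                   - np.asarray(src_old) - np.asarray(time_derivs))
    return float(total / gamma_in)

def balance_error_from_fluxes(gamma_puff, gamma_core, gamma_sputtered,
                              gamma_pumped, gamma_leak):
    """Complementary flux-based check: particle input (gas puff + ion influx
    through the core boundary + sputtered flux) minus removal (pumped flux +
    leakage of atoms to the core), normalised to the input. When the discrete
    divergence theorem holds, both checks should give the same number."""
    gamma_in = gamma_puff + gamma_core + gamma_sputtered
    return (gamma_in - (gamma_pumped + gamma_leak)) / gamma_in
```

running both checks in parallel is what allows non - physical sinks or sources of neutrals ( e.g. a mistake in the transfer of ion fluxes to the monte - carlo code ) to be distinguished from ordinary non - convergence of the fluid residuals .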
as a next stepa scheme was proposed where coupled continuity and parallel momentum balance equations are iterated - without equations for temperatures .this kind of `` incomplete internal iterations '' was implemented in b2-eirene and tested as well , but the results were found to be unsatisfactory : ref . , chapter 2.3 .tests have shown that similar to the full internal iterations the `` incomplete iterations '' are prone to numerical instabilities at large time - steps , and therefore bring no advantages .the simple pressure correction which introduces extra non - linearity is a possible reason of this behavior .the scheme could be improved if monolithic coupling of the continuity and momentum equation would be applied instead .that is , when corrections for both the density and the velocity fields are calculated simultaneously in a one set of linear equations .a fairly simple technique which increases accuracy and can be easily implemented in any code is time - averaging of the source terms , ref . ,chapter 3 .although this algorithm can be helpful in many cases , it was found to be not always efficient enought in reducing , in particular with impurities , see example in ref . , chapter 4.2 . in a more advanced piling method " is described which do not reset the whole history as the calculation of the new average starts .finally , the brute force method can always be applied to decrease both the statistical error in the source terms and the residuals - massive increase of the number of test particles .applicability of this solution strongly depends on the available computing hardware .the test particle monte - carlo algorithm is easy to parallelize , and the increased number of particles does not necessarily mean the increased wall - clock run time .experience has indicated that the pure `` brute force compensation '' of the particle balance issue described in section [ examples ] is likely to require processors to be practical .use of the test particle monte - carlo for neutrals in the tokamak edge modelling codes has an unpleasant side effect of random error in the source terms .if no special measures are taken , then this persistent statistical noise leads to residuals of the discretized fluid equations which do not converge , but saturate at a certain level . in the present paper one particular well identified issue caused by the saturated residuals has been described .it has been shown that too large finite - volume residuals can cause crude violation of the global particle balance . in turn , for the system in question - the tokamak edge and divertor plasma - violation of the particle conservation may have a very strong ( `` zero order '' ) non - local impact on the whole numerical solution .there are computational techniques which can effectively reduce the residuals .e.g. in the code b2 which uses splitting by equations an extra loop of simple iterations on each time - iteration is applied . however ,severe restriction imposed by those internal iterations on the time - step leads to a very long overall model run - time when this option is used . 
with numerical diagnostics proposed in this workit can be unambiguously identified when the too large error in the particle balance is caused by the saturated residuals , and the residual reduction techniques must be applied to obtain the physically meaningful solution .the diagnostics can be implemented in any finite - volume edge code .the problem describe here would become less of an issue if solving the set of non - linear equations on each time - iteration would not require reduced time - step . if such solvers are not feasible , then the accuracy and run - time drawbacks may even outweight the very advantage of using the kinetic test particle monte - carlo in the self - consistent models .the drawbacks can be partly compensated by reducing the statistical error which is , in principle , only a matter of available computing resources .emerging heterogeneous cpu - booster architectures could be particularly well suited for the combination of a fluid and a monte - carlo code .while the serial finite - volume part runs on cpu , the monte - carlo part can make use of massive parallelization on hundreds of processing units on the accelerator .this work was performed under efda work programme 2013 `` assessment studies for solps optimisation '' ( wp13-sol ) .characteristic time - scale of the parameter is calculated from its time - trace by fitting it with a linear function : in the present paper the number of last time - iterations used for the fit was equal to , where is the number of points which cover last 5 of physical time .least - square method is applied to find the parameters and .same data - points were used to calculate average and in table [ particle_energy_balancet ] and in table [ particle_balancet ] .the control parameters for which are calculated are the total amount of ions of species , total diamagnetic energy in electrons and ions : as well as plasma parameters averaged along the magnetic separatrix : , , . herethe integration is performed over the whole computational grid , is geometrical volume , is the sum over all ion fluids , is the electron density , is the atomic mass of ions , is their average macroscopic velocity .the b2-eirene solutions analyzed in this paper were regarded as stationary when sec for all the parameters listed above . for and . beside this condition of steady - statethe errors in the global particle and power balances are checked .error in the particle balance is expressed by equation ( [ balance_fluxeseq ] ) .relative error in the power balance is defined as follows : here is the power influx into the computational domain from the core plasma , is the power deposited by charged particles to the plasma facing components ( pfc ) , is the power deposited to pfc by neutrals , is the power radiated by both charged and neutral particles , is the power transferred by neutrals back to the core .
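for completeness , the convergence and balance criteria of this appendix can be collected into a short routine . the characteristic time below is taken as the fitted mean divided by the fitted slope over the last few percent of the trace , which is only one plausible reading of the criterion ; the thresholds , the exact grouping of the power - balance terms and all names are illustrative and do not reproduce the actual settings of the runs .

```python
import numpy as np

def characteristic_time(t, y, last_fraction=0.05):
    """Characteristic evolution time of a control parameter from its time trace:
    a straight line y ~ a + b*t is least-squares fitted over the points covering
    the last `last_fraction` of the physical time, and the time scale is taken
    as |<y>| / |b|, i.e. how long the parameter would need to change by its own
    magnitude at the fitted rate (an assumed definition)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    sel = t >= t[-1] - last_fraction * (t[-1] - t[0])
    slope, _intercept = np.polyfit(t[sel], y[sel], 1)
    return np.inf if slope == 0.0 else abs(np.mean(y[sel]) / slope)

def is_quasi_steady(t, traces, tau_min):
    """True if every control parameter (total ion contents, stored electron and
    ion energies, separatrix-averaged n_e, T_e, T_i, ...) evolves on a time
    scale not shorter than tau_min."""
    return all(characteristic_time(t, y) >= tau_min for y in traces)

def power_balance_error(q_core, q_pfc_plasma, q_pfc_neutrals, q_rad, q_back_to_core):
    """Relative power-balance error: influx from the core minus the accounted
    sinks (deposition by plasma and by neutrals on the plasma-facing components,
    radiation, and power carried by neutrals back to the core), over the influx."""
    return (q_core - q_pfc_plasma - q_pfc_neutrals - q_rad - q_back_to_core) / q_core
```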
|
test particle monte - carlo models for neutral particles are often used in the tokamak edge modelling codes . the drawback of this approach is that the self - consistent solution suffers from random error introduced by the statistical method . a particular case where the onset of nonphysical solutions can be clearly identified is violation of the global particle balance due to non - converged residuals . there are techniques which can reduce the residuals - such as internal iterations in the code b2-eirene - but they may pose severe restrictions on the time - step and slow down the computations . numerical diagnostics described in the paper can be used to unambiguously identify when the too large error in the global particle balance is due to finite - volume residuals , and their reduction is absolutely necessary . algorithms which reduce the error while allowing large time - step are also discussed .
|
a standard coin flipping is a game in which 2 parties , alice and bob , wish to flip a coin from a distance .the two parties do not trust each other , and would each like to win with probability of at least .a natural problem is to find good protocols - a protocol in which a player could not cheat and force the outcome of the game to his benefit .there are two types of coin flipping - strong and weak . in strong coin flipping, each party might want to bias the outcome to any result . in weak coin flipping each party has a favorite outcome .we denote the winning probability of an honest alice in , and similarly for bob .the maximum winning probability of a cheating alice ( i.e. when she acts according to her optimal strategy , and when bob is honest ) is denoted by , and similarly for bob .+ let be the _ bias _ of the protocol .the bias actually tells us how good the protocol is .the smaller the bias is , the better the protocol is , because the cheating options decrease .it is well known that without computational assumptions , coin flipping ( whether weak or strong ) is impossible to achieve in the classical world .that is , one of the players can always win with probability . in the quantum setting ,the problem is far more interesting .quantum strong coin flipping protocols with large but still non - trivial biases were first discovered .kitaev then proved ( see for example in ) that in strong coin flipping , every protocol must satisfy , hence .this result raised the question of whether weak coin flipping with arbitrarily small bias is possible .protocols were found with smaller and smaller biases , until mochon showed in his paper that there are families of weak coin flipping protocols whose bias converges to zero .it is also known that even in the quantum world , a perfect protocol ( i.e. ) is not possible .if we try to expand the problem of weak coin flipping to more than parties - namely parties with possible outcomes , where each party wins if the outcome is , we get the leader election problem .as mentioned before , it is classically impossible to do a weak coin flipping with a bias without assumptions about the computation power . hence , it is also impossible to solve the leader election problem in the classical setting . presents a classical leader election protocol ( given that a player can flip a coin by herself ) that given honest players , an honest player is chosen with probability in addition there is a proof that every classical protocol has a success probability of , for every .note that there are limitation on the number of cheaters .there are many types of leader elections .the most natural type seems to be the one we have just defined , which is parties wanting to select one of them as a leader .we will refer to this type as the _ leader election problem_. + another possibility is a protocol that chooses a processor randomly among possibilities ( there are no cheaters , and we want a protocol that uses minimal complexity , or one that works without knowing the number of processors .see ) .this type sometimes called _ fair leader election problem_. 
+ until recently there was no quantum result regarding the leader election problem as it was defined here .however there were some results on other types , such as which allow penalty for cheaters that got caught ( which is obviously a weaker version of the problem ) .in mochon showed the existence of a weak coin flipping protocol with an arbitrarily small bias of at most .let us denote this protocol in throughout this paper .this protocol assumes that an honest player has chance of winning .it is also possible to build an unbalanced weak coin flipping with an arbitrarily small bias , in which one honest player will have winning probability , and the other player will have winning probability .we will denote this protocol as .in it was shown that this is possible using repetition of .it is very likely that this can also be achieved by finding an appropriate families of time independent point games ( see ) , but this will not be done in this paper .we will present a leader election protocol with parties ( based on mochon s result ) in which the probability of an honest player to win converges to .our protocol is fairly simple and uses rounds of unbalanced weak protocols as will be defined later .the protocol uses at most unbalanced weak coin flip protocols .this limitation is important , because at the moment we only know how to implement an unbalanced flip using a repetition of balanced coin flip , which influences our running time complexity .if is a power of , then the quantum solution is easy ( given a good weak coin flip protocol ) - we can do a tournament ( the looser quits ) with weak coin flip rounds , and the winner of the tournament will be elected as the leader . a problem arise when is not a power of , then this is not possible , and putting in a dummy involves some difficulties . if the cheaters could control the dummy , they would increase their winning probability .our first solution ( which was also discovered in , independently ) is to let play and then the winner of that will play and so on , as in a tournament , except we use unbalanced weak coin flips .these are also known to be possible using the weak coin flipping protocol ( see in ) but are more expensive in terms of time .+ the leader will be the winner of the final ( ) step .this works , however it uses many ( ) rounds , and thus we later combine the two ideas in order to reduce the number of rounds .we start by some standard definitions . by a _ weak coin flipping protocol _we mean that alice wins if the outcome is , and bob wins if it is .a protocol with , bias will be denoted by . 
a _ leader election protocol _ with parties has an outcome .we will denote by the probability that the outcome is .we assume that each player has its own private space , untouchable by other players , a message space which is common to all ( can include a space for the idification of the sender and receiver ) .we use the existence of a weak coin flipping protocol with bias at most for every .this fact was proved in for and we will denote it as and by the number of its rounds .it seems possible to generalize it to any , but that was not proved yet .there is a proof that there is such a protocol for every in by repetitions of , with rounds .p_x_e_0 ] ,\ k\in\mathbb{n\ } \exists p_{x,\epsilon_{0}}$ ] with rounds , such that and .this means that if we want a protocol , we can use the proposition with , hence .this will lead to where , and it uses rounds .[ p_q , e]for the protocol will use no more than rounds .there is one delicate point , that the cheaters can not increase their winning probability in a specific protocol , by loosing previous protocols ( say by creating entanglements ) .this will be discussed in the end .we will first present the simpler case of 3 parties , to show the basic idea .the general case is a natural generalization of this , and we will analyze it in details .nevertheless this case captures the basic idea of the problem and solution .alice , bob and charlie want to select a leader .alice will be elected if the outcome will be bob if it will be and charlie will win on outcome .let .we will show a leader election protocol , such that if all are honest then .if party is honest then we want that .let be a constant depend on . 1 .alice plays bob .the winner plays charlie .the winner of that flip is declared as the leader .* if all players are honest , then ( same for ) has chance of winning .( the chance of winning , and to then win ) .+ has just one game , so he obviously has chance of winning .* if is honest , we can think of it as if is the only honest player .then the calculation is almost the same from her point of view : in the first game she has winning probability , and in the second ( if she won the first ) she has winning probability .so in the total she has .thus we have extra to spare ( the coalition has less than chance of winning ) . *if is honest then the calculation is the same , just replace with and vice versa . *if is honest - again he has only one flip , so he has at least chance of winning . *number of coin flips = . *first coin flip is , hence involves rounds . +second coin flip is .we have probability to spare .hence by , it can be done with less than rounds . in the general casewe have parties .let .we will show a leader election protocol , such that if all are honest then .if party is honest then we want that .let be a constant that depends on ( the condition on is far from tight , for convenient of the calculation ) . 1 .let .2 . for 1 . plays a weak coin flipping protocol . is the winner . is declared as the leader .note that player enters the game in the stage ( i.e. when on in the protocol ) , when he plays coin flip with winning probability ( the only exception in the protocol is that also plays for the first time when , but then also , so it is the same ) .* if all are honest , then . * if is honest , ( and we have more than to spare ) . *number of coin flips = .+ the coin flip is , so it will have rounds .we mentioned in the beginning that if , we can do a tournament with rounds . 
in the last try we came up with a protocol of rounds , because each time only one couple played a coin flip .the problem with this simple solution is that it is quite inefficient in terms of number of rounds , and also almost all the coin flips are unbalanced .we can improve this protocol by combining it with the tournament idea . 1 .the following couples play : .2 . the winners of play between them .+ the winner of plays a .the two winners of last stage play a .* if all are honest , then have the same steps , and they have winning probability of . + have a winning probability of .+ has only two flips , so obviously . *if is honest he has winning probability of .+ same result for .+ if ( or ) is honest then he has winning probability of . + for it is obviously . * number of coin flips steps .* we have more than to spare , hence we can comfortably take . in the general case we have parties .let .we will show a leader election protocol , such that if all parties are honest then , with coin flipping rounds .if party is honest then we want that .let be a constant that depends on ( the condition on is far from tight , for convenient of the calculation ) .we will define the protocol recursively .let us call it .say it returns the leader selected .let s.t . . 1 .the following are done simultaneously : * plays a tournament ( of rounds ) among themselves with . denote the winner as .* . plays a .+ the winner of this is the leader . * assume that all parties are honest .+ look at . in the tournamenteveryone has a winning chance .then they have another coin flip with winning chance , so in total they each have winning chance .+ from induction we know each of has a winning chance in the recursive leader procedure ( to become ) .then the winner has a winning chance in the last step , so altogether he has .* we have coin flipping steps , and according to we will have up to total rounds . *if is honest , then he has a winning probability of ( ) . *the number of unbalanced coin flips is bounded by .+ in fact : coin flips ( # of s in the binary representation of ) - 1 .+ this can be proved easily by induction on : + for it is clear .+ if then it has such .else , and the first use again unbalanced coin flips between them .the remaining use from the induction hypothesis the of s in its binary from - 1 .when joining the two groups , we again use an unbalanced coin flip which is equivalent to the msb in the binary form of the in the bit ( from the ) .let be a weak coin flipping protocol , with the maximal cheating probability of bob .we want to run two instances of , one after the other ( not even at the same time ) . we will define ( see for full details ) : * let be the hilbert space of the system .* is the initial state of the system .* let there be ( even ) stages , denote the current stage .* on the odd stages , alice will apply a unitary on .* on the even stages , bob will apply a unitary on .* let be the state of the system in the stage .* let be the density matrix of alice in the stage .* alice s initial state ( density matrix ) is .* for even state we have .* let be the state of after alice gets the message .* for odd : , .we know that \label{bob 's bound}\ ] ] regardless of bob s actions ( see for full proof ) . we conclude that bob can not improve his winning chances in a specific round by doing something in previous rounds .if alice plays bob a series of weak coin flipping , then bob s winning probability is bounded from above by in the game . 
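the bookkeeping claims of this section are easy to check numerically . the sketch below assumes an idealized weak coin flip in which an honest player whose target winning probability is p is guaranteed at least p - eps against arbitrary cheating , multiplies these per - flip guarantees along the worst branch of the recursion , and also counts the unbalanced flips and the sequential stages . the epsilon accounting in the text , and the composition argument of the next section , are finer than this , so the functions below illustrate the structure of the protocol rather than prove its bound ; all names are mine .

```python
def largest_power_of_two(n):
    return 1 << (n.bit_length() - 1)

def honest_win_lower_bound(n, eps):
    """Crude lower bound on an honest player's winning probability among n players,
    assuming each weak coin flip with honest target p yields at least p - eps."""
    if n == 1:
        return 1.0
    p2 = largest_power_of_two(n)
    tournament = (0.5 - eps) ** (p2.bit_length() - 1)   # log2(p2) balanced rounds
    if p2 == n:
        return tournament
    # the tournament winner meets the winner of the recursive sub-protocol in an
    # unbalanced flip whose honest targets are p2/n and (n - p2)/n respectively
    via_tournament = tournament * (p2 / n - eps)
    via_recursion = honest_win_lower_bound(n - p2, eps) * ((n - p2) / n - eps)
    return min(via_tournament, via_recursion)

def unbalanced_flips(n):
    """One unbalanced flip per join of two sub-groups: popcount(n) - 1."""
    return bin(n).count("1") - 1

def stages(n):
    """Sequential coin-flipping stages: the tournament and the recursive call run
    in parallel, followed by one joining flip, giving O(log n) stages overall."""
    if n == 1:
        return 0
    p2 = largest_power_of_two(n)
    if p2 == n:
        return p2.bit_length() - 1
    return max(p2.bit_length() - 1, stages(n - p2)) + 1

if __name__ == "__main__":
    eps = 1e-4
    for n in (3, 5, 6, 7, 12, 100):
        print(n, unbalanced_flips(n), stages(n),
              round(honest_win_lower_bound(n, eps), 6), round(1.0 / n, 6))
```

for small eps the printed lower bound stays close to 1/n , the number of unbalanced flips equals the number of ones in the binary form of n minus one , and the number of stages grows logarithmically , in line with the claims above .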
since alice will always start the protocol with a new - empty environment ( i.e. no correlations or leftovers from previous protocols ) , bob can not achieve a higher winning probability than the bound at [ bob s bound ] in any way , including correlations from previous protocols .( if one could have raised his winning probability , by using a previous protocol , he could have also done it without using it ) .when we analyze coin flipping between two players , we assume one of them is honest and analyze the scenario that the other player is cheating and we then bound his winning probability . in multiparty protocol , such as the leader election , another possibility might occur . a cheating player might interfere a weak coin flipping protocol between two other players and ( honest or not ) .we claim that this should not change our computation .if plays a coin flip , then even if interferes , it does not change their winning probabilities .we can look at it from the point of view of honest .he can assume that the rest of the players are cheaters .when he plays against it does nt matter whether interferes ( as long as he does not change private space ) , because a cheater could have done the same on its own , and this scenario is included in the calculation of . hence leaving the winning probabilities unchanged .a related work was published recently .they refer to the leader election problem as weak dice rolling , and they use the same protocol as we did ( independently ) in [ first protocol ] .they also extend the leader election problem to the strong scenario , under the name of strong dice rolling .namely they consider the problem of remote parties , having to decide on a number between and , without any party being aware to any other s preference .they generalize kitaev s bound to apply to this case , and get that , where gives the probability for the outcome when all parties but the are dishonest and acting in unison to force the outcome .this was done by noting that strong leader election can always be used to implement strong imbalanced coin flipping .they also extend the protocol in to parties and outcomes for any .i would like to thanks prof .dorit aharonov for her support and guidance ,
|
a group of individuals who do not trust each other and are located far away from each other , want to select a leader . this is the leader election problem , a natural extension of the coin flipping problem to players . we want a protocol which will guarantee that an honest player will have at least chance of winning ( ) , regardless of what the other players do ( whether they are honest , cheating alone or in groups ) . it is known to be impossible classically . this work gives a simple algorithm that does it , based on the weak coin flipping protocol with arbitrarily small bias recently derived by mochon . the protocol is quite simple to achieve if the number of rounds is linear ; we provide an improvement to logarithmic number of rounds .
|
the last few years have changed our view on the evolution of galaxies : the most distant galaxies ( i.e. systems having stellar population ) have been found at =6.68 ( chen et al . , 1999 ) and radio galaxies at =5.2 ( van breugel et al . , 1999 ) .some of models suggests that galaxies can start their formation at =17 ( chen et al . , 1999a ) .to understand deeper a situation with a stellar population of galaxies and to check various mordern models it is very important to have a capability to detect correctly an age of galaxies .the labour intensity of obtaining statistically significant high - quality data on distant and faint galaxies and radio galaxies forces one to look for simple indirect procedures in the determination of redshifts and other characteristics of these objects . with regard to radio galaxies ,even photometric estimates turned out to be helpful and have so far been used ( mccarthy , 1993 ; benn et al . , 1989 ) . in the late 1980s and early 1990sit was shown that the color characteristics of galaxies can yield also the estimates of redshifts and ages for the stellar systems of the host galaxies .numerous evolutionary models appeared with which observational data were compared to yield results strongly differed from one another ( arimoto and yoshii , 1987 ; chambers and charlot , 1990 ; lilly , 1987 , 1990 ) . over the last few yearsthe three models : pegase ( project deetude des galaxies par synthese evolutive ( fioc and rocca - volmerange , 1997 ) ) , poggianti ( 1997 ) and gissel96 ( bruzual , charlot , 1996 ) , have been extensively used , in which an attempt has been made to eliminate the shortcomings of the previous versions . in the `` big trio '' experiment ( parijskij et al . , 1996 )we also attempted to apply these techniques to distant objects of the rc catalogue with ultra steep spectra ( uss ) .color data for nearly the whole basic sample of uss frii ( fanaroff and riley , 1974 ) rc objects have been obtained with the 6 m telescope of sao ras . 
to accelerate a procedure of age ( and photometric redshift ) estimationwe have begun a project `` evolution of radio galaxies '' , supported by the russian foundation of basic research ( grant no.99 - 07 - 90334 ) , which has to allow a user to obtain age and photometric redshift estimations .this system , being developed at present , will allow a user to operate with simulated curves of spectral energy distributions ( sed ) to estimate ages and redshifts by photometral data .authors use seds of three models : the system will be situated on the special web - server unifying various type resources , including specialized internet protocol daemons ( for the ftp , http , e - mail support ) and the designed software permitting a user to operate with the sed curves .requesting and filling in the standard html - forms a user will be able to select different types of curves or trust to do this to a computer by the method .the input forms contain information about input filters or wavelengths and corresponding magnitudes .the estimation of ages and redshifts iss performed by way of selection of the optimum location on the sed curves of the measured photometric points obtained when observing radio galaxies in different filters .we use the already computed table sed curves for different ages .the algorithm of selection of the optimum location of points on the curve consists briefly ( for details see verkhodanov , 1996 ) in the following : by shifting the points lengthwise and transverse the sed curve such a location was to be found at which the sum of the squares of the discrepancies was a minimum . through moving over wavelengths and flux density along the sed curve we estimate the displacements of the points from the location of the given filter andthen the best fitted positions were used to compute the redshift . from the whole collection of curves ,we select the ones on which the sum of the squares of the discrepancies turned out to be minimal for the given observations of radio galaxies . in order to take account of the absorption ,we apply the maps ( as fits - files ) from the paper `` maps of dust ir emission for use in estimation of reddening and cmbr foregrounds '' ( schlegel et al . ,the conversion of stellar magnitudes to flux densities are performed by the formula ( e.g. von hoerner , 1974 ) : * sorted bibliographical collection of papers for different stages of radio galaxy evolution , * archive of radio galaxies data in various wavelength ranges ( both observed in special astrophysical observatory and taken from internet ) and * search for information about radio galaxies using the largest data bases ned , cats , leda et al . very close interaction with the cats database ( verkhodanov et al . , 1996 ) , designed and situated in the sao , is supposed in the radio sources identification .arimoto n. , yoshii y. astron .astroph . , * 179 * , p.23 , 1987 ., wall j. , vigotti m. , grueff g. , mnras , * 235 * , p.46 , 1989 .van breugel w. , de breuck c. , stanford s. a. , stern d. , rottgering h. , miley g. * astro - ph/9904272 * , 1999 .chambers k. , charlot s. , astrophys .j. lett . , * 348 * , l1 , 1990 .chen h .- w . ,lanzetta , k. m. , pascarelle , s. nature , * 398 * , p.586 , 1999 .chen h .- w . ,lanzetta , k. m. , pascarelle , s. * astro - ph/9907002 * , 1999a .fanaroff b.l . , riley j.m .mnras , * 167 * , p.31 , 1974 .fioc m. , rocca - volmerange b. astron .astroph . , * 326 * , p.950 , 1997 .lilly s. mnras , * 229 * , p.573 , 1987 .lilly s. 
in `` evolution of the universe '' , ed .kron r.g . ,pacific , p.344 , 1990 .mccarthy p.j ., an . review ., * 31 * , p.639 , 1993 .parijskij yu .n. , goss w.m ., kopylov a.i ., soboleva n.s . ,temirova a.v . ,verkhodanov o.v . ,zhelenkova o.p ., naugolnaya m.n ., bulletin sao , * 40 * , p.5 , 1996 .poggianti b.m .astroph . , * 122 * , p.399 , 1997 .schlegel d. , finkbeiner d. , davis m. astrophys .j. * 500 * , p.525 , 1998 .verkhodanov o.v .bulletin sao , * 41 * , p.149 , 1996 .verkhodanov o.v . ,trushkin s.a . , andernach h. , chernenkov v.n . in `` astronomical data analysis software and systems vi '' , eds .g.hunt & h.e.payne .asp conference series , vol . * 125 * , p.322 , 1997 .verkhodanov o.v . ,kopylov a.i ., parijskij yu.n . ,soboleva n.s ., temirova a.v .bulletin sao , * 48 * , pp.41 - 120 , 1999 ( astro - ph/9910559 ) .von hoerner s. , 1974 , in : `` galactic and extragalctic radio astronomy '' , eds .g.l.verschuur & k.i.kellermann , springer - verlag
|
the project of creating an informational system on the problem of the evolution of radio galaxies is described . this system , being developed at present , will allow a user to operate with simulated curves of spectral energy distributions ( sed ) and to estimate ages and redshifts from photometric data . the authors use seds of several models ( gissel96 ( bruzual , charlot , 1996 ) , pegase ( fioc , rocca - volmerange , 1996 ) and poggianti ( 1996 ) ) for different types of galaxies . planned modes of access , formats of the output results and additional functions are described .
|
peer - to - peer systems are self - organizing distributed systems where participating nodes both provide and receive services from each other in a cooperative effort to prevent any one node or set of nodes from being overloaded .peer - to - peer systems have recently gained much attention , primarily because of the great number of features they offer applications that are built on top of them .these features include : scalability , availability , fault tolerance , decentralized administration , and anonymity .along with these features has come an array of technical challenges . in particular , over the past year , there has been much focus on the fundamental indexing and routing problem inherent in all peer - to - peer systems : given the name of an object of interest , how do you locate the object within the peer - to - peer network in a well - defined , structured manner that avoids flooding the network ? as a performance enhancement , the designers of these systems suggest caching index entries with expiration times at intermediate nodes that lie on the path taken by a search query .intermediate caches are desirable because they balance query load for an item across multiple nodes , reduce latency , and alleviate hot spots .however , little attention has been given to how to maintain these intermediate caches .this problem is interesting because the peer - to - peer model assumes the index will change constantly .this constant change stems from several factors : peer nodes continuously join and leave the network , content is continuously added to and deleted from the network , and replicas of existing content are continuously added to alleviate bandwidth congestion at nodes holding the content . in this paper we propose a new comprehensive architecture for controlled update propagation ( cup ) in peer - to - peer networks that asynchronously builds caches of index entries while answering search queries .it then propagates updates of index entries to maintain these caches .the basic idea is that every node in the peer - to - peer network maintains two logical channels per neighbor : a query channel and an update channel .the query channel is used to forward search queries for items of interest to the neighbor that is closest to the authority node for those items .the update channel is used to forward query responses ( first - time updates ) asynchronously to a neighbor and to update index entries that are cached at the neighbor .queries for an item travel `` up '' the query channels of nodes along the path toward the authority node for that item .updates travel `` down '' the update channels along the reverse path taken by a query .figure [ fig : logicalchannels ] shows this process for two neighboring authority nodes connected by four logical channels , a query channel and an update channel in each direction .a query arriving at one of the nodes for an item for which the other node is the authority is pushed onto the query channel towards that node .
if the receiving node has a cached entry for the item , it returns it through its update channel ; otherwise , it forwards the query towards the authority node .any update originating from the authority node flows downstream and may be forwarded on through the next update channel .the analogous process holds for queries arriving at the other node for items for which the first node is the authority . the advantages of the query channel are twofold .first , if a node receives two or more queries for an item for which it does not have a fresh response , the node pushes only one instance of the query for that item up its query channel .this approach can have significant savings in traffic , because bursts of requests for an item are coalesced into a single request .second , using a single query channel solves the `` open connection '' problem suffered by some peer - to - peer systems .each time a query arrives at a node which does not have a cached response , the node opens one or more connections to neighboring nodes and must maintain those connections open until the response returns through them .the asynchronous nature of the query channel relieves nodes from having to maintain many open connections since all responses return through the update channel . through simple bookkeeping ( setting an interest bit ) the node registers the interest of its neighbors so it knows which of its neighbors to push the query response to when the answer arrives .the cascaded propagation of updates from authority nodes down the reverse paths of search queries has many advantages .first , updates extend the lifetime of cached entries allowing intermediate nodes to continue serving queries from their caches without having to push queries up their channels explicitly .it has been shown that up to fifty percent of content hits at caches are instances where the content is valid but stale and therefore can not be used to serve queries without first being re - validated .these occurrences are called _ freshness misses_. second , a node that proactively pushes updates to interested neighbors reduces its load of queries generated by those neighbors .the cost of pushing the update down is recovered by the first query for the same item following the update .third , the further down an update gets pushed , the shorter the distance subsequent queries need to travel to reach a fresh cached answer . as a result , query response latency is reduced .finally , updates can help prevent errors .for example , an update to invalidate an index entry prevents a node from answering queries using the entry before it expires . in cup , nodes decide individually when to receive updates .a node only receives updates for an item if the node has registered interest in that item .furthermore , each node uses its own incentive - based policy to determine when to cut off its incoming supply of updates for an item .
this way the propagation of updates is controlled anddoes not flood the network .similarly , nodes decide individually when to propagate updates to interested neighbors .this is useful because a node may not always be able or willing to forward updates to interested neighbors .in fact , a node s ability or willingness to propagate updates may vary with its workload .cup addresses this by introducing an adaptive mechanism each node uses to regulate the rate of updates it propagates downstream .a salient feature of cup is that even if a node s capacity to push updates becomes zero , nodes dependent on the node for updates fall back with no overhead to the case of standard caching with expiration .when compared with standard caching , under unfavorable conditions , cup reduces the average miss latency by as much as a factor of three . under favorable conditions, cup reduces the average miss latency by more than a factor of ten .cup overhead is more than compensated for by its savings in cache misses .in fact , the `` investment '' return per update pushed in saved misses grows substantially with increasing network size and query rates. the cost of saved misses can be one to two orders of magnitude higher than the cost of updates pushed .we demonstrate that the performance of cup depends highly on the policy a node uses to cut off its incoming updates .we find that the cut - off policy should adapt to the node s query workload and we present probabilistic and log based methods for doing so . finally , we show that cup continues to outperform standard caching even when update propagation is reduced by either node capacity or network conditions .the rest of the paper is organized as follows : section [ architecture ] describes in detail the design of the cup architecture .section [ evaluation ] describes the cost model we use to evaluate cup and presents experimental evidence of the benefits of cup .section [ relatedwork ] discusses related work and section [ conclusions ] concludes the paper .first , we provide some background terminology we use throughout the paper and very briefly describe how peer - to - peer networks for which cup is appropriate perform their indexing and lookup operations . then we describe the components of the cup protocol. the following terms will be useful for the remainder of the paper : _ node _ : this is a node in the peer - to - peer network .each node periodically exchanges `` keep - alive '' messages with its neighbors to confirm their existence and to trigger recovery mechanisms should one of the neighbors fail .every node also maintains two logical channels ( connections ) for each neighbor : the query channel and the update channel .the query channel is used by the node to push queries to its neighbor .the update channel is used by the node to push updates that are of interest to the neighbor ._ global index _: the most important operation in a peer - to - peer network is that of locating content . 
as in assume a hashing scheme that maps keys ( names of content files or keywords ) onto a virtual coordinate space using a uniform hash function that evenly distributes the keys to the space .the coordinate space serves as a global index that stores index entries which are _( key , value ) _ pairs .the value in an index entry is a pointer ( typically an ip address ) to the location of a replica that stores the content file associated with the entry s key .there can be several index entries for the same key , one for each replica of the content ._ authority node _ : each noden in the peer - to - peer system is dynamically allocated a subspace of the coordinate space ( i.e. , a partition of the global index ) and all index entries mapped into its subspace are owned by n. we refer to n as the authority node of these entries ._ replicas _ of content whose key corresponds to an authority node n send birth messages to n to announce they are willing to serve the content . depending onthe application supported , replicas might periodically send refresh messages to indicate they are still serving a piece of content .they might also send deletion messages that explicitly indicate they are no longer serving the content .these deletion messages trigger the authority node to delete the corresponding index entry from its local index directory. _ search query _ : a search query posted at a node n is a request to locate a replica for key k. the response to such a search query is a set of index entries that point to replicas that serve the content associated with k. _ query path for key k _ :this is the path a search query for key _ k _ takes .each hop on the query path is in the direction of the authority node that owns _k_. if an intermediate node on this path has fresh entries cached , the path ends at the intermediate node ; otherwise the path ends at the authority node . _reverse query path for key k _ :this path is the reverse of the query path defined above ._ local index directory _: this is the subset of global index entries owned by a node ._ cached index entries _ :this is the set of index entries cached by a noden in the process of passing up queries and propagating down updates for keys for which n is not the authority .the set of cached index entries and the local index directory are disjoint sets ._ lifetime of index entries _: we assume that each index entry cached at a node has associated with it a lifetime and a timestamp indicating the time at which the lifetime was set . when the difference between the current time and the timestamp is greater than the lifetime field , the entry has expired and can not be used to answer queries .an index entry is considered fresh until it expires .we assume that anytime a node issues a query for key _k _ , the query will be routed along a well - defined structured path with a bounded number of hops from the querying node to the authority node for _ k_. the routing mechanism is designed so that each node on the path hashes _ k _ using the same hash function to deterministically choose which of its neighbors will serve as the next hop .examples of peer - to - peer systems that provide this type of structured location and routing mechanism include content - addressable networks ( cans ) , chord , pastry and tapestry .cup can be used in the context of any of these systems . 
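as a concrete illustration of the terminology above , the sketch below shows a minimal index entry with the lifetime / timestamp freshness test and a deterministic next - hop choice obtained by hashing the key . the field names are assumptions made for illustration , and the modulo - based neighbour selection is only a stand - in for the identifier - space routing actually used by can , chord , pastry and tapestry .

```python
import hashlib
import time

class IndexEntry:
    """A (key, value) pair pointing to a content replica, with lifetime bookkeeping."""
    def __init__(self, key, replica_addr, lifetime):
        self.key = key
        self.value = replica_addr      # e.g. the ip address of the serving replica
        self.lifetime = lifetime       # seconds for which the entry stays fresh
        self.timestamp = time.time()   # the time at which the lifetime was set

    def is_fresh(self, now=None):
        # fresh until (current time - timestamp) exceeds the lifetime field
        now = time.time() if now is None else now
        return (now - self.timestamp) <= self.lifetime

def next_hop(key, neighbors):
    """Deterministically pick the neighbour toward the authority node for `key`.

    Every node on the query path hashes the key with the same function, which is
    what makes the path well defined and bounded; the modulo rule below is only a
    stand-in for the identifier-space routing of CAN, Chord, Pastry or Tapestry."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    return neighbors[int.from_bytes(digest[:4], "big") % len(neighbors)]
```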
at each node, index entries are grouped together by key .for each key k , the node stores a flag that indicates whether the node is waiting to receive an update for k for the first time and an interest bit vector .each bit in the vector corresponds to a neighbor and is set or clear depending on whether that neighbor is or is not interested in receiving updates for k. each node tracks the popularity or request frequency of each non - local key k for which it receives queries .the popularity measure for a key k can be the number of queries for k a node receives between arrivals of consecutive updates for k or a rate of queries of a larger moving window . on an update arrival for k , a node uses its popularity measure to re - evaluate whether it is beneficial to continue caching and receiving updates for k. we elaborate on this cut - off decision in section [ cutoffpolicies ] .node bookkeeping in cup involves no network overhead . with increasing cpu speeds and memory sizes ,this bookkeeping is negligible when we consider the reduction in query latency achieved .we classify updates into four categories : first - time updates , deletes , refreshes , and appends .deletes , refreshes , and appends originate from the replicas of a piece of content and are directed toward the authority node that owns the index entries for that content .first - time updates are query responses that travel down the reverse query path .deletes are directives to remove a cached index entry .deletes can be triggered by two events : 1 ) a replica sends a message indicating it no longer serves a piece of content to the authority node that owns the index entry pointing to that replica .2 ) the authority node notices a replica has stopped sending `` keep - alive '' messages and assumes the replica has failed . in either case ,the authority node deletes the corresponding index entry from its local index directory and propagates the delete to interested neighbors .refreshes are keep - alive messages that extend the lifetimes of index entries .refreshes that arrive at a cache do not result in errors as deletes do , but help prevent freshness misses . finally , appends are directives to add index entries for new replicas of content .these updates help alleviate the demand for content from the existing set of replicas since they add to the number of replicas from which clients can download content . upon receipt of a query for a key _k _ , there are three basic cases to consider . in each of the cases, the node updates its popularity measure for _ k_. the node also sets the appropriate bit in the interest bit vector for _ k _ if the query originates from a neighbor .otherwise , if the query is from a local client , the node maintains the connection until it can return a fresh answer to the client . to simplify the protocol description we use the phrase ``push the query '' to indicate that a node pushes a query upstream toward the authority node .we use the phrase `` push the update '' to indicate that a node pushes an update downstream in the direction of the reverse query path .* case 1 : fresh entries for key k are cached . * the node uses its cached entries for _ k _ to push the response as a first - time update to the querying neighbor or local client .* case 2 : key k is not in cache . *the node adds _ k _ to its cache and marks it with a _ pending - first - update _ flag .the purpose of the _ pending - first - update _ flag is to coalesce bursts of queries for the same key into one query . 
a subsequent query for _ k _ from a neighbor or a local client will save the node from pushing another instance of the query for _* case 3 : all cached entries for key k have expired .* the node must obtain the fresh index entries for _ k_.if the _ pending - first - update _ flag is set , the node does not need to push the query ; otherwise , the node sets the flag and pushes the query .a key feature of cup is that a node does not forward an update for _ k _ to its neighbors unless those neighbors have registered interest in _k_. therefore , with some light bookkeeping , we prevent unwanted updates from wasting network bandwidth . upon receipt of an update for key _k _ there are three cases to consider .* case 1 : pending - first - update flag is set .* this means that the update is a first - time update carrying a set of index entries in response to a query .the node stores the index entries in its cache , clears the _ pending - first - update _ flag , and pushes the update to neighbors whose interest bits are set and to local client connections open at the node . * case 2 : pending - first - update flag is clear . *if all the interest bits for _ k _ are clear , the node decides whether it wants to continue receiving updates for _ k_. the node bases its decision on _ k _ s popularity measure .each node uses its own policy for deciding whether the popularity of a key is high enough to warrant receiving further updates for it .if the node decides _ k _s popularity is too low , it pushes a _ clear - bit _ control message to the neighbor from whom it received the update .the _ clear - bit _ message indicates that the neighbor s interest bit for this node should be cleared .otherwise , if the popularity is high or some interest bits are set , the node applies the update to its cache and pushes the update to the neighbors whose bits are set .note that a greedy or selfish node can choose not to push updates for a key k to interested neighbors .this forces downstream nodes to fall back to standard caching for k. however , by choosing to cut off downstream propagation , a node runs the risk of receiving subsequent queries from its neighbors .handling each of these queries is twice the cost of propagating an update downward because the node has to receive the query from the downstream neighbor and then push the response as an update .therefore , although each node is free to stop pushing updates at any time it is in its best interest to push updates for which there are interested neighbors .* case 3 : incoming update has expired . * this could occur when the network path has long delays and the update does not arrive in time .the node does not apply the update and does not push it downstream .a _ clear - bit _ control message is pushed by a node to indicate to its neighbor that it is no longer interested in updates for a particular key from that neighbor .when a node receives a _ clear - bit _ message for key k , it clears the interest bit for the neighbor from which the message was sent .if the node s popularity measure for k is low and all of its interest bits are clear , the node also pushes a _ clear - bit _ message for k. this propagation of _ clear - bit _ messages toward the authority node for k continues until a node is reached where the popularity of k is high or where at least one interest bit is set . _ clear - bit _ messages can be piggy - backed onto queries or updates intended for the neighbor , or if there are no pending queries or updates , they can be pushed separately . 
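the update handling just described can be condensed into a short sketch . the per - key state ( interest bits , pending - first - update flag , popularity counter ) follows the description above ; the helper methods and the popularity test are assumptions , with the concrete cut - off thresholds deferred to section [ cutoffpolicies ] .

```python
class KeyState:
    """Per-key bookkeeping kept at every node (illustrative field names)."""
    def __init__(self, num_neighbors):
        self.pending_first_update = False
        self.interest = [False] * num_neighbors   # one interest bit per neighbour
        self.queries_since_update = 0             # the popularity measure
        self.entries = []                         # cached index entries for the key

def handle_update(node, key, update, from_neighbor, popular):
    """Apply an incoming update for `key` following the three cases in the text.

    `popular` is a callable implementing the node's own cut-off policy; `node`
    is assumed to expose push_to_interested and send_clear_bit helpers."""
    state = node.state[key]
    if update.expired():                          # case 3: update arrived too late
        return                                    # do not apply, do not push down
    if state.pending_first_update:                # case 1: first-time update
        state.entries = list(update.entries)
        state.pending_first_update = False
        node.push_to_interested(key, update)      # also answers open local clients
    else:                                         # case 2: refresh, delete or append
        if not any(state.interest) and not popular(state.queries_since_update):
            node.send_clear_bit(from_neighbor, key)   # cut off the incoming supply
            return
        state.entries = update.apply_to(state.entries)
        node.push_to_interested(key, update)
    state.queries_since_update = 0                # popularity is counted per update
```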
ideally every node would propagate all updates to interested neighbors to save itself from having to handle future downstream misses .however , from time to time , nodes are likely to be limited in their capacity to push updates downstream .therefore , we introduce an adaptive control mechanism that a node uses to regulate the rate of pushed updates .we assume each node n has a capacity u for pushing updates that varies with n s workload , network bandwidth , and/or network connectivity .n divides u among its outgoing update channels such that each channel gets a share that is proportional to the length of its queue .this allocation maintains the queues roughly equally sized .the queues are guaranteed to be bounded by the expiration times of the entries in the queues .so even if a node has its update channels completely shut down for a long period , all entries will expire and be removed from the queues . under a limited capacity andwhile updates are waiting in the queues , each node can re - order the updates in its outgoing update channels by pushing ahead updates that are likely to have greater impact on query latency reduction , on query accuracy , or on the load balancing of content demand across replicas . during the re - ordering any expired updatesare eliminated .the strategy for re - ordering depends on the application .for example , in an application where query latency and accuracy are of the most importance , one can push updates in the following order : first - time updates , deletes , refreshes , and appends . in an application subject to flash crowds that query for a particular item, appends might be given higher priority over the other updates .this would help distribute the load faster across the entire set of replicas .a node can also re - order refreshes and appends so that entries that are closer to expiring are given higher priority .such entries are more likely to cause freshness misses which in turn trigger a new query search .so it is advantageous to try to catch this in time by pushing these first .the peer - to - peer model assumes that participating nodes will continuously join and leave the network .cup must be able to handle both node arrivals and departures seamlessly .* arrivals . *when a new node n enters the peer - to - peer network , it becomes the authority node for a portion of the index entries owned by an existing node m. n , m , and all surrounding affected nodes ( old neighbors of m ) update the bookkeeping structures they maintain for indexing and routing purposes . to support cup , the issues at hand are updating the interest bit vectors of the affected nodes and decidingwhat to do with the index entries stored at m. depending on the indexing mechanism used , the cardinality of the bit vectors of the affected nodes may change .that is , bit vectors may expand or contract as some nodes may now have more or fewer neighbors than before n s arrival . since all nodes already need to track who their neighbors are as part of the routing mechanism , updating the cardinality of the interest bit vectors to reflect n s arrival is straightforward .for example , nodes that now have both n and m as neighbors have to increase their bit vectors by one element to include n. the affected nodes also need to modify the mappings from bit i d to neighbor ip address in their bit vectors . for example , if a node that previously had m as its neighbor now has n as its neighbor , the node must make the bit i d that pointed to m now point to n. 
to deal with its stored index entries , m could simply not hand over any of its entries to n. this would cause entries at some of m s previous neighbors to expire and subsequent queries from those nodes will restart update propagations from n. alternatively , m could give a copy of its stored index entries to n. both n and m would then go through each entry and patch its bit vector . this waynodes that previously depended on m for updates of particular keys could continue to receive updates from either m or n but not both .* departures .* node departures can be either graceful ( planned ) or ungraceful ( due to sudden failure of a node ) . in either casethe index mechanism in place dictates that a neighboring node m take over the departing node n s portion of the global index . to support cup ,the interest bit vectors of all affected nodes must be patched to reflect n s departure .if n leaves gracefully , n can choose not to hand over to m its index entries .any entries at surrounding nodes that were dependent on n to be updated will simply expire and subsequent queries will restart update propagations . again , alternatively n may give m its set of entries .m must then merge its own set of index entries with n s , by eliminating duplicate entries and patching the interest bit vectors as necessary .if n s departure is not planned , there can be no hand over of entries and all entries in the affected neighboring nodes will expire as in standard caching . note that the transfer of entries can be coincided with the transfer of information that is already occurring as part of the routing mechanism in the peer - to - peer network , and therefore does not add extra network overhead .also the bit vector patching is a local operation that affects only each individual node .therefore even in cases where a node s neighborhood changes often , the effect on the overall performance of cup is limited to that node s neighborhood ( see section 3.7 ) .figure [ fig : cuptrees ] shows a snapshot of cup in progress in a network of seven nodes .the left hand side of each node shows the set of keys for which the node is the authority .the right hand side shows the set of keys for which the node has cached index entries as a result of handling queries .for example , node a owns k3 and has cached entries for k1 and k5 . for each key, the authority node that owns the key is the root of a cup tree .the branches of the cup tree are formed by the paths traveled by queries from other nodes in the network .for example , one path in the tree rooted at a is \{f , d , c , a}. 
updates originate at the root ( authority node ) of a cup tree and travel downstream to interested nodes .queries travel upstream toward the root .the goal of cup is to extend the benefits of standard caching based on expiration times .there are two key performance questions to address .first , by how much does cup reduce the average cost per query ?second , how much overhead does cup incur in providing this reduction ?we first present the cost model based on economic incentive used by each node to determine when to cut off the propagation of updates for a particular key .we give a simple analysis of how the cost per query is reduced ( or eliminated ) through cup .we then describe our experimental results comparing the performance of cup with that of standard caching .consider an authority node a that owns key k and consider the tree generated by issuing a query for k from every node in the peer - to - peer network .the resulting tree , rooted at a , is the _ virtual query spanning tree _ for k , v(a , k ) , and contains all possible query paths for k. the _ real query tree _ for k , r(a , k ) is a subtree of v(a , k ) also rooted at a and contains all paths generated by real queries . the exact structure of r(a , k ) depends on the actual workload of queries for k. the entire workload of queries for all keys results in a collection of criss - crossing real query trees with overlapping branches .we first consider the case of standard caching at the intermediate nodes along the query path for key k. consider a node n that is at distance d from a in v(a , k ) .we define the cost per query for k at n as the number of hops in the peer - to - peer network that must be traversed to return an answer to n. when a query for k is posted at n for the first time , it travels toward a looking for the response . if none of the nodes between n and a have a fresh response cached , the cost of the query at n is : d hops to reach a and d hops for the response to travel down the reverse query path as a first - time update .if there is a node on the query path with a fresh answer cached , the cost is less than .subsequent queries for k at n that occur within the lifetime of the entries now cached atn have a cost of zero . as a result , caching at intermediate nodes has the benefits of balancing query load for k across multiple nodes and lowering average latency per query .we can gauge the performance of cup by calculating the percentage of updates cup propagates that are `` justified '' .we precisely define what a justified update is below , but simply put , an update is justified if it recovers the overhead it incurs , i.e. , if its cost is recovered by a subsequent query .an unjustified update is therefore overhead that is not recovered ( wasted ) .updates for popular keys are likely to be justified more often than updates for less popular keys .a refresh update is justified if a query arrives sometime between the previous expiration of the cached entry and the new expiration time supplied by the refresh update .an append update is justified if at least one query arrives between the time the append is performed and the time of its expiration .a first - time update is always justified because it carries a query s response toward the node that originally issues the query .a deletion update is considered justified if at least one query arrives between the time the deletion is performed and the expiration time of the entry to be deleted . 
for each update , let t denote the critical time interval described above during which a query must arrive in order for the update to be justified ( first - time updates are always justified , so no such interval is needed for them ) .consider a node n at distance d from a in r(a , k ) .an update propagated down to n is justified if at least one query q is posted within t time units at any of the nodes of the virtual subtree v(n , k ) .note that an update is justified if q arrives at the virtual tree v(n , k ) , _ not _ the real query tree r(n , k ) because q can be posted anywhere in v(n , k ) . given the distribution of query arrivals at each node in the tree v(n , k ) , we can find the probability that the update at n is justified by calculating the probability that a query will arrive at some node in v(n , k ) .assume that queries for k arrive at each node i in v(n , k ) according to a poisson process with rate lambda_i .then it follows that queries for k arrive at v(n , k ) according to a poisson process whose rate lambda is the sum of all the lambda_i . therefore , the probability that a query for k will arrive within t time units is 1 - e^( - lambda t ) , and this equals the probability that the update pushed to n is justified .the closer to the authority n is , the higher the aggregate rate lambda and thus the higher the probability for an update pushed to n to be justified . for example , for an aggregate rate of one query arrival per second and a critical interval of a few seconds , the probability that an update arriving at n is justified exceeds 99 percent .the benefit of a justified update goes beyond recovery of its cost . for each hop an update is pushed down , exactly one hop is saved since without the propagation , a subsequent query arriving within t time units would have to travel one hop up and one hop down .this halves the number of hops traveled which reduces query response latency , and at the same time provides enough benefit margin for more aggressive cup strategies .for example , a more aggressive strategy would be to push some updates even if they are not justified .as long as the number of justified updates is at least fifty percent of the total number of updates pushed , the overall update overhead is completely recovered .if the percentage of justified updates is less than fifty percent , then the overhead will not be fully recovered but query latency will be further reduced . therefore , if network load is not the prime concern , an `` all - out '' push strategy achieves minimum latency .one of the challenges in evaluating this work is the unavailability of real data traces of completely decentralized peer - to - peer networks such as those assumed by cup .the reason for this is that such systems are not yet in widespread use , which makes collecting traces infeasible .therefore , in the evaluation of cup we choose simulation parameters that range from unfavorable to favorable conditions for cup in order to show the spectrum of performance and how it compares to standard caching under the same conditions .for example , low query rates do not favor cup because updates are less likely to be justified since there may not be enough subsequent queries to recover the cost of the updates . on the other hand , queries for keys that become suddenly hot not only justify the propagation overhead , but also enjoy a significant reduction in latency .
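going back to the cost model above , under the poisson assumptions the probability that an update pushed down to a node n is justified has a simple closed form ; the sketch below evaluates it for a subtree whose per - node arrival rates are given . the numerical values in the comment are illustrative only .

```python
import math

def prob_justified(node_rates, critical_interval):
    """Probability that at least one query arrives somewhere in the virtual
    subtree V(n, k) within the critical time interval of a pushed update.

    node_rates        : per-node Poisson query rates (queries per second)
    critical_interval : length of the critical interval in seconds"""
    aggregate_rate = sum(node_rates)   # superposition of the per-node Poisson processes
    return 1.0 - math.exp(-aggregate_rate * critical_interval)

# e.g. a subtree of 20 nodes each seeing 0.05 queries/s with a 5 s interval:
# prob_justified([0.05] * 20, 5.0) ~ 0.993, so such an update is almost surely justified.
```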
for our experiments , we simulated a two - dimensional `` bare - bones '' content - addressable network ( can ) using the stanford narses simulator .the simulation takes as input the number of nodes in the overlay peer - to - peer network , the number of keys owned per node , the distribution of queries for keys , the distribution of query inter - arrival times , the number of replicas per key , and the lifetime of replicas in the system .we ran experiments for n = 2^k nodes where k ranged from 3 to 12 .simulation time was 22000 seconds , with 3000 seconds of querying time .we present results for experiments with replica lifetime of 300 seconds to reflect the dynamic nature of peer - to - peer networks where nodes might only serve content for short periods of time .for all experiments , refreshes of index entries occur at expiration .query arrivals were generated according to a poisson process .nodes were randomly selected to post the queries .we present five experiments .first we compare the performance and overhead of cup against standard caching where cup propagates updates without any concern for whether the updates are justified . in this experiment , we vary the level in the cup tree to which updates are propagated .we use this experiment to establish the level that provides the maximum benefit and then use the performance results at this level as a benchmark for comparison in later experiments .second , we compare the effect on cup performance of different incentive - based cut - off policies and compare the performance of these policies to that of the benchmark .third , using the best cut - off policy of the previous experiment , we study how cup performs as we vary the size of the network .fourth , we study the effect on performance of increasing the number of replicas corresponding to a key .finally , we study the effect of limiting the outgoing update capacities of nodes . in this set of experiments we compare standard caching with a version of cup that propagates updates down the real query tree of a key regardless of whether or not the updates are justified .we use this information to determine a maximal performance baseline .we determine the reduction in misses achieved by cup and the overhead cup incurs to achieve this reduction .we define _ miss cost _ as the total number of hops incurred by all misses , i.e. freshness and first - time misses .we define the cup overhead as the total number of hops traveled by all updates sent downstream plus the total number of hops traveled by all clear - bit messages upstream .( we assume clear - bit messages are not piggybacked onto updates .this somewhat inflates the overhead measure . )we define _ total cost _ as the sum of the _ miss cost _ and any overhead hops incurred .note that in standard caching , the _ total cost _ is equal to the _ miss cost_. figures [ fig : pushlevq1,10 ] and [ fig : pushlevq100,1000 ] plot cup s total cost and miss cost versus the push level for a fixed - size network .a push level of l means that updates are propagated to all nodes that have queried for the key and that are at most l hops from the authority node .a push level of zero corresponds to the case of standard caching , since all updates from the authority node ( the root of the cup tree ) are immediately squelched and not forwarded on .
for this set of experiments , query arrivals were generated according to a poisson process with average rate of 1 , 10 , 100 , and 1000 queries per second at the network .the figures show that as the push level increases , cup significantly reduces the miss cost when compared with standard caching and does so with little overhead as shown by the displacement of each pair of curves . in figure [ fig : pushlevq1,10 ] , for one query per second , the total cost incurred by cup decreases and reaches a minimum at around push level 20 , after which it slightly increases .this turning point is the level beyond which the overhead cost of updates is not recoverable . for ten queries per second , a similar turning point occurs at around push level 25 . in figure [ fig : pushlevq100,1000 ] the minimum total cost occurs at push level 25 and tapers off for both 100 and 1000 queries per second . for low query arrival rates , the turning point occurs at lower push levels . for example , at the lowest query rates we examined , the turning point occurs at push level 15 .these results show that there is no specific optimal push level at which cup achieves the minimum total cost across all workloads .if there were , then the simplest strategy for cup would be to have updates be propagated to that optimal push level . in fact , we have found that in addition to the query workload , the optimal push level is affected by the number of nodes in the network and the rate at which updates are generated , both of which change dynamically . in the absence of an optimal push level , each node needs a policy for determining when to stop receiving updates .we next examine some cut - off policies . on receiving an update for a key , each node determines whether or not there is incentive to continue receiving updates or to cut off updates by pushing up a clear - bit message .we base the incentive on the _ popularity _ of the key at the node .the more popular a key is , the more incentive there is to receive updates for that key . for a key k , the popularity is the number of queries a node has received for k since the last update for k arrived at the node .we examine two types of thresholds against which to test a key s popularity when making the cut - off decision : probability - based and log - based .a probability - based threshold uses the distance of a node n from the authority node a to approximate the probability that an update pushed to n is justified . per our cost model of section 3.2 , the further n is from a , the less likely an update at n will be justified .we examine two such thresholds , a linear one and a logarithmic one . with a linear threshold , if an update for key k arrives at a node at distance d from the authority and the node has received at least d queries for k since the last update , then k is considered popular and the node continues to receive updates for k. otherwise , the node cuts off its intake of updates for k by pushing up a clear - bit message .the logarithmic popularity threshold is similar : a key k is popular if the node has received at least log ( d ) queries since the last update .the logarithmic threshold is more lenient than the linear one in that it increases at a slower rate as we move away from the root .a log - based threshold is one that is based on the recent history of the last _ n _ update arrivals at the node . if within _ n _ updates , the node has not received any queries , then the key is not popular and the node pushes up a clear - bit message .a specific example of a log - based policy is the second - chance policy .
in this policy , n is equal to two . when an update arrives , if no queries have arrived since the last update , the policy gives the key a `` second chance '' and does not push a clear - bit message immediately . if at the next update arrival the node has still not received any queries for k , then it pushes a clear - bit message .the philosophy behind this policy is that pushing these two updates down from the parent node costs two hops .if a query arrives in the interval between these two updates , then it will recover the cost of pushing them down , since a query miss would incur two hops , one up and one down .the last column of table [ tab : replicas ] shows the total cost when each replica refresh is sent as a separate update . when compared to the 55905 hops of total cost for standard caching from table [ tab : policies2 ] , we observe that the total cost of cup will eventually overtake that of standard caching as we increase the number of replicas .in fact this occurs at eight replicas where the total cost is 57430 . while these results may seem to imply that a handful of replicas is enough for good cup performance , for some applications , having many more replicas in the network is necessary even if they run the risk of unrecoverable additional cup overhead .for example , having multiple replicas of content helps to balance the demand for that content across many nodes and reduces latency .one may view pushing updates for multiple replicas as an example of an aggressive cup policy we refer to in section [ costmodel ] . at 100 replicas , the total cost is about 10 times that of standard caching .cup pays the price of extra overhead but achieves a miss cost that is about 13.5 percent that of standard caching .therefore , at the cost of extra network load , both query latency is reduced and the demand for content is balanced across a greater number of nodes .if however network load is a concern , there are a couple of techniques an authority node can use to reduce the overhead of cup when there are many replicas in the network .first , rather than push all replica refreshes , the authority node can selectively choose to propagate a subset of the replica refreshes and suppress others .this allows the authority node to reduce update overhead as well as balance demand for content across the replicas .another alternative would be to aggregate replica refreshes .when a refresh arrives for one replica , the authority node waits a threshold amount of time for other updates for the same key to arrive .it then batches all updates that arrive within that time and propagates them together as one update .this threshold would be a function of the lifetime of a replica and could be dynamically adjusted with the number of replicas in the system .we are experimenting with different kinds of threshold functions .our experiments thus far show that cup clearly outperforms standard caching under conditions where all nodes have full outgoing capacity .a node with full outgoing capacity is a node that can and does propagate all updates for which there are interested neighbors . in reality , an individual node s outgoing capacity will vary with its workload , network connectivity , and willingness to propagate updates . in this section we study the effect on cup performance of reducing the outgoing update capacity of nodes .we present two experiments run on a network of 1024 nodes .
in the first experiment , called `` up - and - down '' , after a five minute warm up period , we randomly select twenty percent of the nodes to reduce their capacity to a fraction of their full capacity .these nodes operate at reduced capacity for ten minutes after which they return to full capacity .after another five minutes for stabilization , we randomly select another set of nodes and reduce their capacity for ten minutes .we proceed this way for the entire 3000 seconds during which queries are posted , so capacity loss occurs three times during the simulation . in the second experiment , called `` once - down - always - down '' , after the initial five minute warmup period ,the randomly selected nodes reduce and remain at reduced capacity for the remainder of the experiment . figure [ fig : capacityq1 ] shows the total cost incurred by cup versus reduced capacity for both up - and - down and once - down - always - down configurations .a reduced capacity means a node is only pushing out one - fourth the updates it receives .the figure also shows the total cost for standard caching as a horizontal line for comparison .the rate is 1 query per second .figure [ fig : capacityq1000 ] shows the same for = 1000 which is especially interesting because cup has bigger wins with higher query rates since more updates are justified than with lower query rates . therefore , with high query rates cup has more to lose if updates do not get propagated . note that even when the capacity of one fifth of the nodes is reduced to zero percent and these nodes do not propagate updates , cup outperforms standard caching for both query rates .the total cost incurred by cup is about half that of standard caching for one query per second for both configurations . for 1000 queries per second ,the total cost of cup is 0.56 and 0.77 that of standard caching for `` up - and - down '' and `` once - down - always - down '' respectively .a key observation from these experiments is that cup s performance degrades gracefully as decreases .this is because the reduction in propagation saves any overhead that would have occurred otherwise .the important point here is that even if nodes can only push out a fraction of updates to interested neighbors , cup still extends the benefits of standard caching . clearly though , cup achieves its full potential when all nodes have maximum propagation capacity .some peer - to - peer systems suffer from what we call the `` open - connection '' problem . every time a peer node receives a query for which it does not have an answer cached, it asks one ( e.g. , freenet ) or more ( e.g. , gnutella ) neighbors the same query by opening a connection and forwarding the query through that connection .the node keeps the connection open until the answer is returned through it . for every query on every item for which the node does not have a cached answer, the connection is maintained until the answer comes back .this results in excessive overhead for the node because it must maintain the state of many open connections . 
cup avoids this overhead by asynchronously pushing responses as first - time updates and by coalescing queries for the same item into one query .chord and cfs suggest alternatives to making the query response travel down the reverse query path back to the query issuer .chord suggests iterative searches where the query issuer contacts each node on the query path one - by - one for the item of interest until one of the nodes is found to have the item .cfs suggests that the query be forwarded from node to node until a node is found to have the item .this node then directly sends the query response back to the issuer .both of these approaches help avoid some of the long latencies that may occur as the query response traverses the reverse query path .cup is advantageous regardless of whether the response is delivered directly to the issuer or through the reverse query path .however , to make this work for direct response delivery , cup must not coalesce queries for the same item at a node into one query since each query would need to explicitly carry the return address information of the query issuer . all of the above systems ( gnutella , freenet , chord , and cfs ) enable caching at the nodes along the query path .they do not focus on how to maintain entries once they have been cached .cached items are removed when they expire and refetched on subsequent queries .for very popular items this can lead to higher average response time since subsequent bursts of queries must wait for the response to travel up and ( possibly ) down the query path .cup can avoid this problem by refreshing or updating cached items for which there is interest before they expire .consistent hashing work by karger et al . looks at relieving hot spots at origin web servers by caching at intermediate caches between client caches and origin servers .requests for items originate at the leaf clients of a conceptual tree and travel up through intermediate caches toward the origin server at the root of the tree .this work uses a model slightly different from the peer - to - peer model .their model and analysis assume requests are made only at leaf clients and that intermediate caches do not store an item until it has been requested some threshold number of times .also , this work does not focus on maintaining cache freshness .update propagations in cup form trees very similar to the application - level multicast trees built by scribe .scribe is a publish - subscribe infrastructure built on top of pastry .scribe creates a multicast tree rooted at the rendez - vous point of each multicast group .publishers send a message to the rendez - vous point which then transmits the message to the entire group by sending it down the multicast tree .the multicast tree is formed by joining the pastry routes from each subscriber node to the rendez - vous point .scribe could apply the ideas cup introduces to provide update propagation for cache maintenance in pastry .cohen and kaplan study the effect that aging through cascaded caches has on the miss rates of web client caches . 
for each objectan intermediate cache refreshes its copy of the object when its age exceeds a fraction _ v _ of the lifetime duration .the intermediate cache does not push this refresh to the client ; instead , the client waits until its own copy has expired at which point it fetches the intermediate cache s copy with the remaining lifetime .for some sequences of requests at the client cache and some _ v _s , the client cache can suffer from a higher miss rate than if the intermediate cache only refreshed on expiration .their model assumes zero communication delay .a cup tree could be viewed as a series of cascaded caches in that each node depends on the previous node in the tree for updates to an index entry .the key difference is that in cup , refreshes are pushed down the entire tree of interested nodes .therefore , barring communication delays , whenever a parent cache gets a refresh so does the interested child node . in such situations ,the miss rate at the child node actually improves .in this paper we propose cup : controlled update propagation for cache maintenance .cup query channels coalesce bursts of queries for the same item into a single query .cup update channels refresh intermediate caches and reduce the average query latency by over a factor of ten in favorable conditions , and as much as a factor of three in unfavorable conditions . through light book - keeping , cup controls and confines propagations so that only updates that are likely to be justified are propagated .in fact , when only half the number of updates propagated are justified , cup s overhead is completely recovered .finally , even when a large percentage of nodes can not propagate updates ( due to limited capacity ) , cup continues to outperform standard caching with expiration .this research is supported by the stanford networking reseach center , and by darpa ( contract n66001 - 00-c-8015 ) .
|
recently the problem of indexing and locating content in peer - to - peer networks has received much attention . previous work suggests caching index entries at intermediate nodes that lie on the paths taken by search queries , but until now there has been little focus on how to maintain these intermediate caches . this paper proposes cup , a new comprehensive architecture for controlled update propagation in peer - to - peer networks . cup asynchronously builds caches of index entries while answering search queries . it then propagates updates of index entries to maintain these caches . under unfavorable conditions , when compared with standard caching based on expiration times , cup reduces the average miss latency by as much as a factor of three . under favorable conditions , cup can reduce the average miss latency by more than a factor of ten . cup refreshes intermediate caches , reduces query latency , and reduces network load by coalescing bursts of queries for the same item . cup controls and confines propagation to updates whose cost is likely to be recovered by subsequent queries . cup gives peer - to - peer nodes the flexibility to use their own incentive - based policies to determine when to receive and when to propagate updates . finally , the small propagation overhead incurred by cup is more than compensated for by its savings in cache misses .
|
competitive runners focus upon training effectively in order to enhance their fitness and performance . yet despite many advances in the scientific evaluation of responses to training , the prescription and effectiveness assessment of training programmes relies upon the intuition and experience of runners and their coaches .the early attempts to model the effects of training on performance were pioneered by banister and colleagues , who proposed an impulse - response model that has also seen more recent applications .banister and colleagues quantified training impulse as a single model input using arbitrary units , where the response is modelled as a change in performance that varies according to the training input .however , runners can not use this model to inform their training as it requires frequent performance trials that interfere with their training programme .several fundamental issues with the impulse - response model have also been highlighted in later work .specifically , the model needs input training data to be aggregated according to an assumed biological or physiological model .this biological or physiological model provides an abstraction of the complex relation between training input and response but has never been validated for this purpose .critically , the necessary data aggregation restricts the use of the model to evaluating only programmes comprised of identical training sessions . ultimately , the limitations of the model are such that its derived parameter estimates are not generalizable beyond the training session or participant studied . a superior approach to modelling training would be to characterise its effects on race performance , i.e. as the time to cover a specified race distance .the relationship between _ best _ race performances over varying race distances has been found to follow a standard power - law curve over a wide range of values .the power law was first illustrated to be a very good fit to this relation by plotting world best performances for men running from 100 m to 100 km , with an average deviation of only 4.3% .more recent studies have also confirmed that a power law fits well the relationship of best running performance and race distance .these models , however , do not account for the effect that training or the physiological status of the runner has on their performance , and hence they are overly simplistic in describing how the performance of individual runners varies accordingly . with recent developments in training technology runners can now use gps devices to record their training and races with a level of accuracy and detail that was previously inconceivable .
the capability to obtain large volumes of training data from runners presents the opportunity for new insights into the links between training and performance by removing the need to use the limited impulse - response model and by extending well - established parametric relationships for performance .that is the primary aim of the present study .the present study investigates the effects of training and physiological factors on the performance of highly - trained runners competing in distances from 800 metres to marathon ( endurance runners , for short ) .a secondary aim is to produce a predictive equation for race performance .the available data are from a year - long observational study of 14 endurance runners , which produced detailed gps records of their training , their physiological status and their best performance in standardised field tests .the contribution of the present work is two - fold .firstly , a multiplicative effects model for the performance of endurance runners is constructed , which extends the well - studied power - law relationship between runners performance times and distances by also taking into account the physiological status of the runner and information on the runners training .particularly , in order to capture information on the runners training , the concept of the training distribution profile is introduced .the resulting model involves performance as a scalar response , a group of associated scalar covariates and the training distribution profile , which is a functional covariate . in order to simultaneously identify the speed intervals on the domain of the training distribution profile and the scalar physiological covariates that are important for explaining performance , we introduce a procedure termed multi - resolution elastic net .multi - resolution elastic net proceeds by combining the partitioning of the domain of functional covariates into an increasing number of intervals with the elastic net , and results in predictive equations that involve only interval summaries of the functional covariates .we argue that the usefulness of multi - resolution elastic net extends beyond the present study .the supplementary material provides a reproducible analysis of a popular data set in functional regression analysis using multi - resolution elastic net .the structure of the manuscript is as follows .section [ studydata ] provides a description of the available data , along with an overview of the protocols used for data collection , the performance assessments and the collection of physiological status information from laboratory tests . section [ modelling ] works towards setting up a general multiplicative effects model for performance that decomposes the performance times into effects due to race distance , physiological status , training and other effects .section [ estimation ] describes multi - resolution elastic net and its use for estimating the parameters of the multiplicative model .the outcomes of the modelling exercise are described in section [ results ] and confirm and extend established relationships between physiological status and runner performance , and , importantly , identify a contiguous group of speed intervals between 5.3 and 5.7 m / s as influential for performance .
a predictive equation for performanceis also provided by the minimization of the estimated mean squared prediction error estimated from a test set across resolutions .the manuscript concludes with section [ discussion ] which provides discussion and directions for further work .the available data were gathered as part of the study in which examines changes in laboratory and field test running performances of highly - trained endurance runners .the study involved the observation of 14 competitive endurance runners , who had a minimum of 8 years experience of running training , and competition experience in race distances ranging from 800 m to marathon .all participants provided written informed consent for this study which had local ethics committee approval from the university of kent , chatham maritime , united kingdom .more details on the participants , the study design and data collection protocols are provided in , but a brief account is also provided below .the data was gathered by observing the training of the participating runners for a year . on commencing the study each runnerwas supplied with a wrist - worn gps device ( forerunner 310xt , garmin international inc .kansas , usa ) and instructed in its use according to the manufacturer s guidelines .the runners were asked to use the gps device to record their training time and distance throughout every session and race in the observation period .the study did not involve any direct manipulation of the runners training programmes and the runners regularly downloaded the data from their gps devices and sent the resulting files to the lead scientist .in addition to their training , each runner completed 5 laboratory tests and 9 track - based field tests at regular intervals throughout the study .the laboratory tests were used to measure traditional physiological status determinants of running performance , i.e. running economy , obla ( on - set blood lactate accumulation ) , and ( a measure of maximum oxygen consumption ) .the track - based field tests were conducted to measure the runners best performance over distances of metres , metres and metres . for the purposes of the current study ,only the field tests that occurred within a few days of the laboratory tests are considered , i.e. 5 out of the 9 field tests for each runner .the complete observation period for each runner was set to the interval between and including the runner s first and last field test .prior to all laboratory and field tests , careful standardisation ensured that each test was completed under conditions where the time of day , prior exercise , diet , hydration and warm up were specified and controlled .the runners were also familiarised with all laboratory and field tests before commencing the study .the raw gps data consists of 2,499,894 timestamped measurements of cumulative distances calculated using latitude and longitude information for the complete observation period for each runner .those raw observations were used to identify 3,469 distinct training sessions accounting for 3,239.4 hours of recorded training activity .a technical note that details the process for extracting the training sessions from timestamped measurements is provided in the supplementary material .figure [ study ] shows the resulting observation timeline for each runner and puts the events of the study on a calendar scale , where triangles denote lab tests , circles denote field tests and crosses denote training sessions . 
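the exact extraction pipeline is described in the technical note of the supplementary material ; the sketch below only illustrates , under assumed rules , how per - session speeds could be recovered from timestamped cumulative - distance records . the 30 - minute session gap and the zero - speed imputation for stalled intervals are illustrative assumptions , not the authors ' actual choices .

```python
# illustrative sketch, not the authors' pipeline: split a runner's gps stream into
# sessions at long recording gaps and turn cumulative distance into speeds (m/s).
import numpy as np

def split_into_sessions(timestamps, gap_seconds=1800.0):
    """return index arrays, one per session, splitting wherever the gap exceeds the threshold."""
    t = np.asarray(timestamps, dtype=float)
    breaks = np.where(np.diff(t) > gap_seconds)[0] + 1
    return np.split(np.arange(t.size), breaks)

def speeds_from_cumulative_distance(timestamps, cum_dist):
    """per-interval speeds in m/s; intervals with no elapsed time are imputed as zero speed."""
    t = np.asarray(timestamps, dtype=float)
    d = np.asarray(cum_dist, dtype=float)
    dt = np.diff(t)
    dd = np.diff(d)
    v = np.divide(dd, dt, out=np.zeros_like(dd), where=dt > 0)
    return np.clip(v, 0.0, None)   # negative increments treated as gps noise

# toy usage on synthetic measurements (two sessions separated by a long gap)
t = np.array([0.0, 5.0, 10.0, 15.0, 7200.0, 7205.0, 7210.0])
d = np.array([0.0, 18.0, 37.0, 55.0, 55.0, 76.0, 98.0])
for idx in split_into_sessions(t):
    if idx.size > 1:
        print(speeds_from_cumulative_distance(t[idx], d[idx]))
```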
in what follows ,a training period is defined as the interval between two consecutive field tests .so , as is also apparent from figure [ study ] , each runner had 4 training periods .the figure also shows the number of recorded sessions and number of training hours within each training period ( above and below each timeline , respectively ) and the corresponding totals for the whole timeline ( on the right of each timeline ) .as seen on figure [ study ] , some runners have no training records for long intervals on their timelines .these intervals are either because of absence from training due to injury or vacation , or due to the runner failing to deliver their gps container files to the scientist .the training speed profiles for each training session shown in figure [ study ] were calculated from the timestamped gps measurements after appropriate imputation of zero speeds .the respective calculations and the imputation process are detailed in the technical note provided in the supplementary material .in order to investigate the effects of training and physiological factors on the running performance , it is assumed that the performance ( in seconds ) at the field test over distance decomposes as where and are the parameters controlling the power - law relationship between performance times and distances and is an error component with zero mean .model ( [ model ] ) extends the established power - law model for performance , by also taking into account the effect of the runners physiological status ( ) , the effect of training ( ) , and the effects of other factors , for example psychological , environmental ( ) that can potentially influence performance .model ( [ model ] ) asserts that the mean performance of the runners decomposes into a distance effect , a physiological status effect , a training effect and the effect of other unmeasured factors .note that the inclusion of physiological status effects also brings runner - specific effects into the model . in the current study we only have information for the distance , and the effects of physiological status effect and training .for this reason and taking into account the careful standardization prior to all laboratory and field tests ( see subsection [ collection ] for more details ) , we assume that the effect of other unobserved factors on the performance across field tests is constant and set ..the available covariate information that is used to characterize each of the effects in model ( [ model ] ) . 
[ cols="<,<,<",options="header " , ] the physiological status effect in model ( [ model ] ) is assumed to have the additive form where is a vector of unknown parameters and is the vector of the laboratory test results at the end of the training period .table [ covariates ] lists the available laboratory test results , which involve measurements on weight , height , age and physiological status determinants of running performance .the definition of the training effect in model ( [ model ] ) has to incorporate an effective summary of the training that took place over each training period .using directly some summary of the training speeds , such as a few quantiles , is an option but the speed profiles are rather noisy sometimes resulting in extreme speeds ( see , for example , the top row of figure [ speedprofiles ] for two well - behaved profiles ) .so , the use of a smoothing procedure is necessary prior to the calculation of any summaries .this , though , would require assumptions on the behaviour of runners speeds during their regular training sessions as a function of time in order to determine the right amount of smoothing .furthermore , any scalar summary of speeds over the training period does not directly reflect how the runner planned the allocation of speeds in the sessions within each training period . in order to overcome such difficulties in defining the training effect, we introduce the concept of the training distribution profile . for a session that lasted seconds , let and we call the curve , the `` training distribution profile '' .the training distribution profile represents the training time spent exceeding any particular speed threshold and is a smoother representation of the allocation of speeds than the speed profile .note that for any , and that for any .in addition , is a necessarily decreasing function of speed .the observed version of can be calculated using times and the calculated speeds ( see the technical note in the supplementary material for details on the calculation of speeds ) as where is the number of observations after the imputation process has taken place , and is the calculated speed , in meters per second , at time .then , for a chosen grid of speed values , , can be used to estimate the training distribution profile using smoothing techniques that respect the positivity and monotonicity of .the top of figure [ speedprofiles ] shows two examples of speed profiles , one characteristic of an almost constant - pace training session and one from a high - intensity , interval - based training session .the corresponding estimated training distribution profiles are shown in the bottom row of figure [ speedprofiles ] . 
as can be seen, the difference in the structure of the two training sessions alters the shape of the training distribution profiles correspondingly .the black curves in figure [ speedprofiles ] are calculated using ( [ observedtrainingdistribution ] ) and the grey , smoother curves result from fitting a shape constrained additive model with poisson responses to ensure that the relationship between the smoothed version of and is still monotone decreasing .the fitting is done using the ` scam ` package in * r * .note that the resulting smoothed version is almost indistinguishable from and from visual inspection ( not reported here ) this is the case for all recorded sessions in the data .for this reason , all subsequent analysis uses directly the smoothed versions .any training sessions that correspond to estimated training distribution profiles with more than 125 seconds above 8 m were dropped from the analysis as errors in data collection , because they exceed the world record speed for 800 metres ( 7.93 m for 800 m , david rudisha , london olympic games , 9 august 2012 ) .there were such sessions in the data and they have all been identified with the participants as bicycle rides or instances where the participants did not switch off their gps device before driving their car or riding their bicycle after the end of training session . for defining the training effect in model [ model ] , we assume that the average structure of the available training sessions within a training period ( see figure [ study ] ) approximates well the average training behaviour of runners for all the training sessions that took place in that period .we , further , assume that there is an unknown real - valued weight function , that weights the time spent at each speed according to its importance in determining performance . where is an unknown parameter and with being the average of the smoothed training distribution profiles for the training period .the quantity is the average session length in the period . 
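the calculation of the observed profile of eq . ( [ observedtrainingdistribution ] ) , its monotone smoothing , the session - cleaning rule and the averaging over a training period can be sketched as follows . the paper fits a shape - constrained additive model with poisson responses via the ` scam ` * r * package ; the decreasing isotonic regression used below is only a simpler stand - in , and the speed grid is an assumption .

```python
# sketch: observed training-distribution profile, a monotone-decreasing smoother
# (stand-in for the scam fit), the 125 s / 8 m/s session filter, and the period average.
import numpy as np
from sklearn.isotonic import IsotonicRegression

SPEED_GRID = np.arange(0.0, 10.05, 0.05)          # assumed grid of speed thresholds (m/s)

def observed_profile(sample_times, sample_speeds, grid=SPEED_GRID):
    """time (in seconds) spent at speeds strictly above each threshold on the grid."""
    times = np.asarray(sample_times, dtype=float)
    speeds = np.asarray(sample_speeds, dtype=float)
    dt = np.diff(times, append=times[-1])
    return np.array([(dt * (speeds > v)).sum() for v in grid])

def smooth_profile(profile, grid=SPEED_GRID):
    """monotone-decreasing, non-negative smoother (simplified stand-in for the scam fit)."""
    iso = IsotonicRegression(increasing=False, y_min=0.0)
    return iso.fit_transform(grid, profile)

def keep_session(profile, grid=SPEED_GRID, cutoff_speed=8.0, max_seconds=125.0):
    """drop sessions spending more than 125 s above 8 m/s (bike rides, car journeys)."""
    above = profile[grid >= cutoff_speed]
    return (above.max() if above.size else 0.0) <= max_seconds

def period_average_profile(smoothed_profiles):
    """average of the smoothed profiles of the sessions recorded in one training period."""
    return np.mean(np.vstack(smoothed_profiles), axis=0)
```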
as a final note , in contrast to individual session speed profiles the concept of training distribution profiles appears well - suited for visualising large volumes of training session data .for example , the right panel of figure [ trainingdistribution ] shows the estimated training distribution profiles for all sessions in the training periods of runner r06 .the left panel shows the negative derivatives of those and reveals the clear concentration of training at around 4 m which would have otherwise been hard to visualise .the average curves are shown in black .table [ covariates ] lists the available covariate information that is used to characterize each of the effects in model ( [ model ] ) .taking logarithms on both sides , model ( [ model ] ) is a linear regression model with response the logarithm of the performance at the distance , scalar parameters ( one for each scalar covariate in table [ covariates ] and the intercept ) and a functional parameter .we wish to be able to use that regression model to simultaneously assess the importance of scalar and functional covariates , also taking into account the correlation between the scalar covariates , and that between the scalar and the functional covariates .particularly , the importance of the functional covariates can be assessed by identifying the training speed intervals that are important for performance changes .if there were no scalar covariates , one way to do so is the flirti method in , which induces sparseness simultaneously on and on derivatives of it of a preset order . in order to simultaneously identify important training speeds and important training covariates, we propose an alternative procedure which we term the multi - resolution elastic net and which consists of the following steps .for each resolution from a set of resolutions : * partition the union of the observed domains of the functional covariate across observations into intervals of the same length . * construct covariates by calculating a summary of the functional covariate for each observation and on each interval ( e.g. integral , difference at endpoints and so on ) . *apply the elastic net on the covariates constructed in step b ) along with any other available scalar covariates .the penalty of the elastic net in step iii ) controls for the extreme collinearity that the covariates of step ii ) and any other scalar covariates can have , and the penalty of the elastic net imposes sparseness , if necessary .note that the above makes no direct assumption on the ordering of the regression parameters corresponding to the functional covariate , as a functional regression approach would do .instead , multi - resolution elastic net takes advantage of the grouping properties of the elastic net ( * ? ? ?* section 2.3 ) for the formation of contiguous groups of non - zero estimates for the parameters of the highly correlated summaries of the functional covariates in step ii ) . in this way , if the non - zero elastic net coefficients of those parameters form contiguous groups , then there is strong evidence that the corresponding intervals are important for the response. a further persistence of those contiguous groups as the resolution increases will further strengthen any conclusions . 
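a compact sketch of steps i ) - iii ) for a single resolution is given below . following step ii ) , each profile is summarised here by the time spent in each speed interval , i.e. the difference of the profile at the interval endpoints ; the authors work with the ` caret ` and ` elasticnet ` * r * packages , so scikit - learn 's elastic net is only an analogous stand - in and the tuning grids are assumptions .

```python
# sketch of multi-resolution elastic net at one resolution p: build interval summaries
# of the period-average profiles, append the scalar covariates, and fit an elastic net
# whose two tuning constants are chosen by repeated 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import RepeatedKFold
from sklearn.preprocessing import StandardScaler

def interval_features(grid, avg_profiles, p):
    """time spent in each of p equal speed intervals = profile difference at the endpoints."""
    edges = np.linspace(grid[0], grid[-1], p + 1)
    vals = np.column_stack([np.interp(edges, grid, prof) for prof in avg_profiles]).T
    return vals[:, :-1] - vals[:, 1:]          # shape (n_obs, p), non-negative

def fit_one_resolution(log_perf, scalar_covariates, grid, avg_profiles, p, seed=1):
    X = np.hstack([scalar_covariates, interval_features(grid, avg_profiles, p)])
    X = StandardScaler().fit_transform(X)
    cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=seed)
    enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 0.99], n_alphas=50, cv=cv, max_iter=200000)
    enet.fit(X, log_perf)
    return enet                                 # enet.coef_: estimates on the standardized scale
```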
for the current application , in steps i ) and ii ) of the multi - resolution elastic net we replace ( [ trainingeffect ] ) with the discretized version where is the average training time spent in the speed interval before the end - of - period field test , and is an equi - spaced grid of speeds with and for some resolution . then specification ( [ trainingeffect1 ] ) and the scalar covariates are handled simultaneously in step iii ) of the procedure . this last step will return estimates on and , hence , estimate the mean of the logarithmic performance for various resolutions . the above setup requires the determination of an optimal resolution for the training effects and the selection of the two tuning constants of the elastic net . in order to do so , the data set is first split into an estimation and a test set ( the commonly used terminology for the `` estimation set '' is `` training set '' , but we diverge from that in order to avoid a terminology clash with the training effect ) . then , for each resolution , the tuning constants of the elastic net are selected as the ones that minimise the mean squared prediction error estimated using 10-fold cross - validation repeated for 10 randomly selected fold allocations in the estimation set . the selection of the two tuning constants of the elastic net via cross - validation was implemented using the ` caret ` and ` elasticnet ` * r * packages . the resolution is then determined as the one that minimises the mean squared prediction error estimated using the test set . another outcome of this process is the estimated model for the chosen optimal resolution , which can be used for predictive purposes . the six records that correspond to the last period for runners r12 and r14 were dropped from the data as uninformative , because that period contained no or only one training record ( see figure [ study ] ) . the resultant data set of 162 observations was then split into an estimation and a test set . the test set is built from the records of 4 randomly selected runners . the reason for selecting amongst runners instead of directly amongst records is to avoid choosing an overoptimistic model in terms of prediction ( note that the physiological status covariate information is repeated across distances in model ( [ model ] ) ) . the 114 records for the remaining 10 runners form the estimation set . figure [ significance ] shows the non - zero estimated parameters for the chosen tuning constants of the elastic net for resolutions , where the maximal resolution of corresponds to speed intervals of length m each . the figure provides a quick assessment of the relevance of each covariate . the symbols and in figure [ significance ] indicate negative and positive elastic net estimates , respectively . the signs of the estimated parameters are all as expected .
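the contiguity of the non - zero estimates , which figure [ significance ] displays and the following paragraphs discuss , can also be checked programmatically ; the small helper below ( with purely illustrative inputs ) reports the speed intervals that received non - zero elastic net estimates at a given resolution and whether they form a single contiguous block .

```python
# small helper: which speed intervals carry non-zero elastic net estimates at one
# resolution, and do they form one contiguous group? the inputs below are illustrative.
import numpy as np

def nonzero_speed_intervals(interval_coefs, speed_edges, tol=1e-10):
    coefs = np.asarray(interval_coefs, dtype=float)
    idx = np.flatnonzero(np.abs(coefs) > tol)
    intervals = [(speed_edges[i], speed_edges[i + 1], float(np.sign(coefs[i]))) for i in idx]
    contiguous = idx.size > 0 and bool(np.all(np.diff(idx) == 1))
    return intervals, contiguous

# hypothetical example: a resolution with sixteen 0.5 m/s intervals over 0-8 m/s
edges = np.linspace(0.0, 8.0, 17)
coefs = np.zeros(16)
coefs[10:12] = [-0.020, -0.015]        # two adjacent negative estimates around 5-6 m/s
print(nonzero_speed_intervals(coefs, edges))
```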
as is apparent from the top plot in figure [ significance ] , the distance of the field test and the physiological status covariates of height ( cm ) , running economy ( ml ) and running speed at obla are influential determinants of performance , irrespective of the resolution used . particularly , the model indicates that , without varying the training effects , shorter runners with higher speeds at obla and superior running economy ( ml ) perform better over a fixed race distance . the importance of each of obla , economy and height for running performance has been established and is consistent with previous study findings . for example , marathon performance has been shown to be predicted by running speed at obla , and it has been suggested that obla is reflective of the underlying physiological status of endurance runners . the influence of running economy on endurance performance has been previously studied in for a cohort of highly - trained runners similar to those of the present study . in striking similarity to the current analysis , significant evidence was also found there that the runners ' economy is associated with performance whereas is not . the relevance of height for endurance performance does not appear to have been widely studied . in this respect , one study examined a number of characteristics in a group of elite runners by dividing them according to their best 10 kilometres performance time . they found significant evidence that the runners with larger performance times ( slower ) than the group 's median were taller than those who had run faster . the middle plot in figure [ significance ] indicates that the time spent training at speeds in the approximate interval from 5.3 to 5.7 m is influential for the improvement in performance . the plot also shows that this result persists across all resolutions considered by the multi - resolution elastic net , and its importance is enhanced by the fact that , within that interval and for all resolutions , the non - zero estimates form a contiguous group . to the authors ' knowledge , this finding is the most specific to date in terms of analysing the contribution of training to subsequent performance . overall , the current analysis allows us to identify influential training speeds for subsequent performances within any training programme . this is in contrast to previous research where , typically , training speeds have been defined according to an underlying physiological model adopted prior to the commencement of the study , e.g.
as percentages of the speed at or obla .for example , identify 4.3 m as the speed at obla , and then examine whether training at all speeds higher or lower than this links to changes in performance .the records in the test set were used to calculate the squared prediction test error for each , where is the exponential of ( [ logmu ] ) and is the subset of that contains the indices of the observations in the test set .the elastic net estimates have been rescaled as in to avoid over - shrinkage due to the double penalization that takes place in elastic net .the test errors and the corresponding 95% normal theory intervals ( 1.96 standard deviations ) are shown at the bottom plot in figure [ significance ] .the minimum test error is achieved for .figure [ paths ] shows the elastic net solution paths for that resolution as a function of the fraction of the norm .particularly , the fit identified by the dashed line on the solution paths corresponds to the non - zero estimates at in figure [ significance ] ( grey line ) .the elastic net estimates for the fit can be used to form the expression that predicts performance ( in seconds ) using the race distance , physiological status determinants as measured in the laboratory , and the average training distribution profile .this expression has been estimated to be where are the training period average times in minutes spent training within the speed intervals ] , $ ] , respectively . in order to reduce rounding errorthe units adopted in the above equation are rescaled from those in table [ covariates ] so that height is calculated in metres and economy in litres per kilogram per kilometre .the exponent of distance in ( [ predict ] ) is in agreement with previous studies where it has been found to be around 1.1 .expression ( [ predict ] ) can be used to determine the performance of an endurance runner for a specified race distance , by supplying the runner s height , the measurement of economy and obla from a laboratory test prior to the race , and the average time spend training at the specified speed intervals during the period prior to the race .a multiplicative effects model has been used to link the training and physiological status of highly - trained endurance runners to their best performances .the model extends previous work that uses the power - law to describe the relationship between performance times and distance , by also including information on the physiological status of the runner as measured under laboratory conditions , and the runners training as extracted directly from gps timestamped distance records for the period prior to the performance assessment .the relevance of the training and physiology covariates in the model was assessed using multi - resolution elastic net , which is described in detail in subsection [ multi ] .we argue that multi - resolution elastic net is a useful procedure to quickly check for the existence of influential intervals on the domain of one or more functional covariates in the presence of other scalar covariates . 
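as a usage illustration of expression ( [ predict ] ) above , the function below mirrors its multiplicative form , a power of distance times the exponential of the physiological and training terms . every numerical coefficient in it is a hypothetical placeholder ( only the distance exponent of about 1.1 is taken from the text ) ; the actual values are the rescaled elastic net estimates reported in ( [ predict ] ) .

```python
# hypothetical illustration of the form of the prediction equation (predict);
# every coefficient below is a placeholder, not an estimate from the paper.
import math

PLACEHOLDER_COEF = {
    "intercept": 1.5,              # placeholder
    "distance_exponent": 1.1,      # the text reports a distance exponent of about 1.1
    "height_m": 0.2,               # positive sign: taller runners predicted slower (placeholder)
    "economy_l_kg_km": 0.3,        # positive sign: worse economy predicted slower (placeholder)
    "speed_at_obla_ms": -0.05,     # negative sign: higher obla speed predicted faster (placeholder)
    "train_minutes_interval_1": -0.001,   # the two influential speed intervals of (predict)
    "train_minutes_interval_2": -0.001,
}

def predict_performance_seconds(distance_m, height_m, economy, speed_at_obla,
                                train_mins_1, train_mins_2, coef=PLACEHOLDER_COEF):
    log_time = (coef["intercept"]
                + coef["distance_exponent"] * math.log(distance_m)
                + coef["height_m"] * height_m
                + coef["economy_l_kg_km"] * economy
                + coef["speed_at_obla_ms"] * speed_at_obla
                + coef["train_minutes_interval_1"] * train_mins_1
                + coef["train_minutes_interval_2"] * train_mins_2)
    return math.exp(log_time)
```

with the true coefficients in place , the same call pattern ( race distance , laboratory measurements , and the period - average minutes in the two speed intervals ) yields the predicted time in seconds .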
to provide some more evidence for our claim , a reproducible analysis of a popular dataset in functional regressionis revisited in the supplementary material and the results of multi - resolution elastic net are sensible and in agreement with those of the flirti procedure of .note that , the latter procedure is designed to handle regression models with only a single functional covariate .this is in contrast to multi - resolution elastic net that can simultaneously handle arbitrary number of scalar and functional covariates .an important and novel aspect of the present study was to determine the direct effect of training on performance .multi - resolution elastic net identified that the time spent training between 5.3 and 5.7 m relates to improvements in the performance of the endurance runners .another important aspect of the current study is that it was able to reproduce well - established relationships between performance and other physiological status measures , without limitations posed by an underpinning physiological or training model .specifically , it was found that without varying training effects , shorter runners with higher speeds at obla and superior running economy ( ml ) are found to perform better .as mentioned in the introduction section , the effect of training on performance has been modelled in studies like and using a training impulse response model . the model adopted in those and similar studies ,though , can not be used for predictions beyond the training session and runner studied .furthermore , the training inputs for the models in those studies are aggregated into arbitrary units thereby limiting their value to theoretical rather than practical applications .indeed subsequently , stress the need for the development of new modelling strategies by stating that _ `` it is likely that the expected accuracy between model prediction and actual data will greatly suffer from the simplifications made to aggregate total training strain in a single variable and , more generally , from the abstraction of complex physiological processes into a very small number of entities''_. in this respect , expression ( [ predict ] ) of our work generalises beyond the study design described in section [ studydata ] and , in addition , it has been chosen as the best in terms of the predictive quality from the models arising from different levels of aggregation of the training inputs .training programmes and the individual sessions within them can be complex .this complexity can make it difficult to visualise and analyse a large training dataset without prior transformation or simplification .the current work contributes in this direction with the introduction of the concept of the training distribution profile .the training distribution profile promotes clear and straightforward visualisation of large volumes of training data ( see for example figure [ trainingdistribution ] ) .more importantly , the training distribution profile allows the use of a wide range of contemporary statistical methods for the modelling of training data .for example , the training distribution profiles or their derivatives can directly be used as responses or covariates in functional regression models ( see , for example * ? ? 
?* chapters 1517 for details ) and/or for the detection of training regimes and changes in training practices , for example by a cluster analysis .a more fruitful , if not more involved , analysis would , for example , take into account the variability of the training distribution profiles and/or their derivatives , as well as any serial correlation between them .the methods in seem to provide a good basis in this direction . given the widespread use of wearable data recording devices in training , distribution profiles of other aspects of a runner s trainingcan also be produced , such as heart rate distribution profiles and/or power - output profiles in cycling for example .their influence on performance can then be determined using similar procedures as in the present study .overall , we try to reverse the prevailing scientific paradigm for investigating the effects of training on performance . rather than evaluating the effects of a pre - specified training programme, we instead identify those aspects of training that link to a measurable effect .this presents an exciting and promising new approach to developing training theory .importantly , the ability to identify important speeds presents an obvious focus for subsequent training interventions on the performance of endurance runners , and motivates further subject - specific work on the design and the study of the effectiveness of such interventions .if this work is successful , it has the potential to lead to the development of a new model of training which could be tuned towards maximising performance gains , or enhancing the health benefits arising from a prescribed amount of exercise .the supplementary material is appended at the end of the current preprint , and contains a reproducible analysis of the canadian weather data ( see , for example , ( * ? ? ?* section 6 ) or ( * ? ? ? * section 1.3 ) ) using multi - resolution elastic net and the flirti procedure of .the supplementary material also contains a technical note that details the process for extracting the training sessions and speed profiles from timestamped gps measurements .
|
a multiplicative effects model is introduced for the identification of the factors that are influential for the performance of highly - trained endurance runners . the model extends the established power - law relationship between performance times and distances by taking into account the effect of the physiological status of the runners , and training effects extracted from gps records collected over the course of a year . in order to incorporate information on the runners ' training into the model , the concept of the training distribution profile is introduced and its ability to capture the characteristics of the training session is discussed . the covariates that are relevant to runner performance as response are identified using a procedure termed multi - resolution elastic net . multi - resolution elastic net allows the simultaneous identification of scalar covariates and of intervals on the domain of one or more functional covariates that are most influential for the response . the results identify a contiguous group of speed intervals between 5.3 and 5.7 m as influential for the improvement of running performance and extend established relationships between physiological status and runner performance . another outcome of multi - resolution elastic net is a predictive equation for performance based on the minimization of the mean squared prediction error on a test data set across resolutions . + _ keywords : _ regularization , grouping effect , collinearity , training distribution profile , power law , wearable gps devices
|
we assume the reader is familiar with fountain codes , lt - codes and belief propagation ( bp ) decoding . for details ,the reader is referred to , .we consider lt - codes with parameters , where is the message length and is the degree distribution of the output symbols during encoding . an important set to consider is the set of output symbols of degree ( the _ ripple _ ) .the size of the ripple varies during the decoding process , as high - degree output symbols become of degree after the removal of their edges , and as ripple elements become useless after the recovering of their unique neighbor .the decoding is in error if and only if the ripple becomes empty before all the input symbols are recovered .a natural question is thus whether we can track the size of the ripple , in the expectation , during the decoding process .karp et al . proved that the expected ripple size is linear in throughout most of the decoding process .their asymptotic analytic expressions for the expected ripple size can be found in section [ prelim ] .they also derive an expression for the expected _ cloud _ size throughout decoding , where the cloud is defined at each decoding step as the set of output symbols of degree strictly higher than . in this paper , we extend their analysis in two ways .first , we consider higher moments of the cloud and ripple size in order to upper bound the error probability of the lt decoder .more specifically , we use similar methods to derive an expression for the variance of the ripple size and prove that it is also linear in throughout most of the decoding process . we can then use this expression together with the expression for the expectation to offer a guarantee for successful decoding , as follows : if , for fixed lt - code parameters , is the expectation and is the standard deviation of the ripple size when symbols are unrecovered , then if the function for some parameter never takes negative values , we can upper bound the error probability of the lt decoder by the probability that the ripple size deviates from its mean by more than standard deviations .second , we take the first step towards an analytic finite - length analysis of the lt decoder , by providing exact expressions for the expectation ( variance ) of the ripple size up to ( constant ) terms .this is done by considering lower - order terms in the difference equations , but also by getting tight bounds on the discrepancy introduced by approximating difference equations by differential equations .it is worthy to note that the expressions we deal with are valid for `` most of the decoding process , '' that is , the analysis breaks down when the number of unrecovered symbols is no longer a constant fraction of .this is no issue , however , when one considers raptor codes , which need only a constant fraction of the input symbols to be recovered by the lt decoder .let be the number of unrecovered ( _ undecoded _ ) input symbols at a given decoding step . define the decoder to be in state the cloud size is and the ripple size is at this decoding step . 
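as a concrete companion to the ( cloud , ripple ) state just defined , the self - contained sketch below simulates lt encoding and peeling ( bp ) decoding and records the cloud and ripple sizes after each recovered input symbol . the ideal soliton degree distribution is used purely for illustration ; it is not the distribution analysed later in this paper .

```python
# simulation sketch: encode n output symbols over k inputs, run the peeling decoder,
# and record (unrecovered u, cloud size, ripple size) after every recovered symbol.
import random
from collections import defaultdict

def ideal_soliton(k):
    """probs[d] = probability of output degree d (luby's ideal soliton), d = 1..k."""
    return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_decode_trajectory(k, n, seed=0):
    rng = random.Random(seed)
    degrees = rng.choices(range(k + 1), weights=ideal_soliton(k), k=n)
    outputs = [set(rng.sample(range(k), d)) for d in degrees]   # neighbours of each output
    covers = defaultdict(set)                                   # input -> covering outputs
    for j, neigh in enumerate(outputs):
        for i in neigh:
            covers[i].add(j)
    ripple = {j for j, neigh in enumerate(outputs) if len(neigh) == 1}
    recovered = 0
    trajectory = []
    while ripple and recovered < k:
        j = ripple.pop()
        if len(outputs[j]) != 1:
            continue
        (i,) = outputs[j]                                       # its unique unrecovered neighbour
        recovered += 1
        for jj in covers[i]:                                    # remove i from every covering output
            outputs[jj].discard(i)
            if len(outputs[jj]) == 1:
                ripple.add(jj)
            else:
                ripple.discard(jj)                              # became useless (degree zero)
        cloud = sum(1 for neigh in outputs if len(neigh) > 1)
        trajectory.append((k - recovered, cloud, len(ripple)))
    return trajectory, recovered == k

trajectory, success = lt_decode_trajectory(k=1000, n=1100)
print(success, trajectory[:3], trajectory[-3:])
```

repeating such runs over many seeds gives empirical estimates of the ripple mean and variance against which the analytic expressions below can be compared , in the spirit of the simulations reported for figure [ capsol ] .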
to each state , we can associate the probability of the decoder being in this state .define the _ state generating function _ of the lt decoder when symbols are undecoded as the following theorem by karp et al .gives a recursion for the state generating function of the lt decoder .+ suppose that the original code has input symbols and that output symbols have been collected for decoding .further , denote by the probability that an output symbol is of degree where is the maximum degree of an output symbol .then we have for , \end{split}\ ] ] where for } { \left [ \begin{array}{c } k-2\\ d-2\\ \end{array } \right]}}{1 - u \sum_{d=1}^d \omega_d d \frac{\left [ \begin{array}{c } k - u\\ d-1\\ \end{array } \right]}{\left [ \begin{array}{c } k\\ d\\ \end{array } \right ] } - \sum_{d=1}^d \omega_d \frac{\left [ \begin{array}{c } k - u\\ d\\ \end{array}\right]}{\left [ \begin{array}{c } k\\ d\\ \end{array}\right]}},\ ] ] and : = \binom{a}{b } b!,\ ] ] and further , + this recursion gives a way to compute the probability of a decoding error at each step of the bp decoding as and the overall error probability of the decoder as if we approximate the lt process by allowing output symbols to choose their neighbors with replacement during encoding , becomes : where with this assumption , karp et al .use the recursion to derive difference equations for the expected size of the ripple and the cloud , and further approximate these difference equations by differential equations that they solve to get closed - form expressions for the expected ripple and cloud size .formally , let denote the expected number of output symbols in the ripple , and denote the expected number of output symbols in the cloud , when input symbols are undecoded , where is assumed to be a constant fraction of the total number of input symbols .then the following theorem shows that is linear in for an appropriate choice of the lt code parameters .+ [exp ] consider an lt - code with parameters and assume symbols have been collected for decoding . during bp decoding , let and be respectively the expected size of the cloud and ripple as a function of the number of undecoded input symbols . then, under the assumptions that is a constant fraction of and we have in what follows , we let be a continuous approximation of a normalized version of can be shown to be the solution of the differential equation with initial condition and is given by with similarly , we define as a continuous approximation of is the solution of with initial condition and is given by with then we can write be the variance of the ripple size as a function of the number of undecoded symbols . in what follows we will always assume that is a constant fraction of . 
is given by where we define it is thus enough to find an expression for to get an expression for we start by differentiating both sides of the recursion ( [ recursion ] ) twice with respect to and evaluating at this gives us a recursion for .\end{split}\ ] ] before we can proceed with solving this difference equation , we need to find expressions for the second - order derivatives and we do so by following exactly the same method that we are currently outlining for an expression for define let be a continuous approximation of the normalized function it can be shown that is the solution of the differential equation with initial condition and is given by the expression with similarly , let be a continuous approximation of it is the solution of with initial condition and an expression for it is with then the following theorem gives closed - form expressions for and + [ ml ] as for the `` dirt '' term ,\ ] ] it does not involve derivatives and we can not use the same method to find an expression for it independant the state generating function . however , we can bound it under an assumption on the ripple size . more specifically, it is not difficult to prove that for , the dirt term is of constant order . in what follows ,we assume that the size of the ripple does not go below the constant .replacing and by their expressions and bounding the dirt term in the recursion ( [ n_rec ] ) , we obtain the following difference equation for note that as defined in equation ( [ defn ] ) can be as large as a constant fraction of .we thus need to normalize if we want to say something meaningful about the difference we define to be the _ fraction _ of undecoded symbols , and let be a normalized version of we similarly normalize the other functions of and represent them as functions of : normalizing equation ( [ n_difference ] ) and replacing the functions by their continuous approximations , we obtain neglecting lower - order terms , we approximate by the function which satisfies with initial condition + [ claim1 ] for any on which is defined , and differ by a term of the order of + we skip the proof of this and subsequent claims for reasons of space , and refer the reader to the final version of this paper .we further approximate the discrete function by the continuous function and by the first - order derivative of . satisfies the differential equation with initial condition + [ claim2 ] for any on which is defined , and differ by a term of the order of + the general solution of the differential equation ( [ diffnhat ] ) is given by where the value of the constant can be found to be , by the initial conditions , by claims [ claim1 ] and [ claim2 ] we thus have where is given by equation ( [ nhat ] ) .this gives us an expression for up to a term of the order of : comparing this expression to that for given by equations ( [ r_hat ] ) and ( [ r ] ) , it is easy to see that these two expressions agree up to terms of the order of , so that the variance of the ripple size is of the order of . + consider an lt - code with parameters and let be the standard deviation of the ripple size throughout bp decoding. then ultimate goal is to be able to bound the error probability of the decoder as a function of , without the assumption that goes to infinity .we thus need to find an expression for the variance of the ripple size , instead of simply determining its order . 
for this purpose, we must find an expression for up to terms of constant order , and an expression for up to terms of the order of we illustrate the analysis for from the recursion given by equation ( [ n_rec ] ) , we proceed by first , assuming that the ripple size does not go below so that the `` dirt '' term is of the order of ; and second , replacing and by finer approximations as follows : where is a discrepancy term introduced by approximating by and are defined similarly .these discrepancy terms are all of the order of and are given by the following expressions . where and are constants for most of the decoding process and are given by these expressions are obtained by the same method that we are now following to obtain a more precise approximation of the next step is to write a recursion for which is exact up to terms of the order of we then approximate by which satisfies the same recursion except that we neglect terms of the order of : [ claim3 ] for any on which is defined , and differ by a term of the order of + we further approximate by which satisfies the differential equation ( [ diffnhat ] ) and is given by expression ( [ nhat ] ) . a more careful analysis of the discrepancy beween and leads to the following claim : + [ claim4 ] for any on which is defined , and differ by a term of the order of + more precisely , where \cdot \prod_{j = i+1}^{k(1-x)-1}\left(1-\frac{2}{k(1-j / k)}\right)+o(1/k^2 ) .\end{split}\ ] ] by claims [ claim3 ] and [ claim4 ] we thus have where is given by equation ( [ nhat ] ) . using the resulting expression for , and the expression for given by equation ( [ precise_vals ] ) , we finally get an expression for the variance of the ripple size up to terms of constant order .+ consider an lt - code with parameters and overhead and let be the variance of the ripple size throughout bp decoding .then figure [ capsol ] shows a plot of the expected ripple size and the functions and given by equation ( [ hc ] ) , throughout the decoding process , for an lt - code with and and with the `` capped soliton '' degree distribution ,\ ] ] inspired from luby s ideal soliton distribution .the plot also shows the result of real simulations of this code , and confirms that the problem zones of the decoder are those predicted by the functions : the closer they are to the -axis , the more probable it is that the decoder fails .as can be seen , there is a fair chance that the decoder fails when the fraction of decoded input symbols is between 0 and 0.2 , and there is a very good chance that the decoder fails when the fraction of decoded input symbols is close to 0.95 .we have given an analytic expression for the variance of the ripple size throughout the lt decoding process .this expression is asymptotically of the order of , and we have expressed it as a function of as a first step toward finite - length analysis of the lt decoding . the next step is to work around the assumption that is a `` constant fraction '' of . then we would obtain a guarantee for successful decoding as a function of the lt - code parameters and overhead for practical values of .this would then allow us to solve the corresponding design problem , namely to choose degree distributions that would make the function stay positive for as large a value of as possible , for a fixed code length .
|
we analyze the second moment of the ripple size during the lt decoding process and prove that the standard deviation of the ripple size for an lt - code with length is of the order of . together with a result by karp et al . stating that the expectation of the ripple size is of the order of , this gives bounds on the error probability of the lt decoder . we also give an analytic expression for the variance of the ripple size up to terms of constant order , and refine the expression in for the expectation of the ripple size up to terms of the order of , thus providing a first step towards an analytic finite - length analysis of lt decoding .
|
the scientific community has been recently interested in the definition of new generalized fading models , aiming to provide a better fit to real measurements observed in different scenarios . in such context , the - and - fading models have become very popular in the literature due to their versatility to accommodate to different propagation conditions and their relatively simple tractable form .the - and - fading models , first introduced in and , were independently derived to characterize very different propagation conditions . on the one hand , the - distribution can be regarded as a generalization of the classic rician fading model for line - of - sight ( los ) scenarios , extensively used in spatially homogeneous propagation environments . on the other hand ,the - distribution can be considered as a generalization of the classic nakagami- ( hoyt ) fading model for non - los scenarios , often used in non - homogeneous environments . therefore , and because they arise from different underlying physical models , there is no clear connection between the - and - fading models .one of the most appealing properties of the - and fading models is that they include most popular fading distributions as particular cases .for instance , the rician , nakagami- , rayleigh and one - sided gaussian models can be derived from the - fading by setting the parameters and to specific real positive values .similarly , the - fading model includes the nakagami- , nakagami- , rayleigh and one - sided gaussian as special cases .very recently , the - shadowed fading model was introduced in , with the aim of jointly including large - scale and small - scale propagation effects .this new model exhibits excellent agreement when compared to measured land - mobile satellite , underwater acoustic and body communications fading channels , by considering that the dominant components are affected by random fluctuations .this model includes the popular rician shadowed fading distribution as a particular case , and obviously it also includes the - fading distribution from which it originates .however , as we will later see , the versatility of the - shadowed fading model has not been exploited to the full extent possible . 
in this paperwe show that the - shadowed distribution unifies the set of homogeneous fading models associated with the - distribution , and strikingly , it also unifies the set of non - homogeneous fading models associated with the - distribution , which may seem counterintuitive at first glance .in addition to a formal mathematical proof of how the main probability functions introduced by yacoub originate from the ones derived in , we also establish new underlying physical models for the - shadowed distribution that justify these phenomena .in fact , we propose a novel method to derive the nakagami- ( hoyt ) and the - distributions which consists in using the shadowing of the dominant components to recreate a non - homogeneous propagation environment .this connection , which is here proposed for the first time in the literature , has important implications in practice : first , and contrary to the common belief , it shows that the - and - fading distributions are connected .hence we can jointly study the and - fading models by using a common approach instead of separately .besides , it implies that when deriving any performance metric for the - shadowed fading model , we are actually solving the same problem for the simpler - and - distributions at no extra cost .leveraging our novel approach , we derive simple and closed - form asymptotic expressions for the ergodic capacity of communication systems operating under - shadowed fading in the high signal - to - noise ratio regime , which can be evidently employed for the - and - distributions . unlike the exact analyses in and which require the use of the meijer g- and bivariate meijer g - functions , our results allow for a better insight into the effects of the fading parameters on the capacity .the remainder of this paper is structured as follows . in sectionii , we introduce the notation , as well as some definitions and preliminary results . in section iii, we propose new physical models for the - shadowed distribution . in sectioniv we show how the - and - distributions naturally arise as particular cases of the - shadowed fading model . in sectionv , we use these results to investigate the ergodic capacity in shadowed fading channels and thus for the - and channels . in sectionvi , numerical results are presented .finally conclusions are drawn .throughout this paper , we differentiate the complex from the real random variables by adding them a tilde on top , so that is a real random variable and is a complex random variable . ] ._ definition 2 : the generalized hypergeometric function_. + the generalized hypergeometric function of one scalar argument is defined as where is the pochhammer symbol ( * ? ? ?* eq . ( 6.1.22 ) ) , and ._ definition 3 : the gamma distribution_. + let be a random variable which statistically follows a gamma distribution with shape parameter and rate parameter , i.e , , then its pdf is given by where is the gamma function ( * ? ? ? * eq .( 6.1.1 ) ) and . _ definition 4 : the - shadowed distribution _ .+ let be a random variable which statistically follows a shadowed distribution with mean ] and non - negative real shape parameters and , i.e , , then its pdf is given by where is the -th order modified bessel function of first kind , which can be defined in terms of the bessel hypergeometric function ( * ? ? ?* eq . ( 9.6.47 ) ). 
_ definition 6 : the - distribution ( format 1 ) _ .+ let be random variable which statistically follows an - distribution with mean ] .it is known that this physical model follows a - shadowed distribution , i.e. , the instantaneous signal - to - noise ratio ( snr ) , with ] and , is distributed as , where the parameter represents the ratio between the total power of the dominant components and the total power of the scattered waves .it is worth noticing that this result can be extended for taking non - integer positive values , despite the model loses its physical meaning .however , this model imposes that the shadowing is statistically distributed as a nakagami- random variable , which is a strict condition that is relaxed in the next section . the previous model in eq .( [ model_paris ] ) clearly separates each cluster in the real and imaginary power components , so that the model can be defined by only using real random variables .thus , using this time complex random variables , it can be reformulated as where and can be related to the variables of the previous model in form of and .hence , represents the scattering wave of the -th cluster and is the deterministic dominant power of the -th cluster .a straightforward generalization of the previous model is to consider a complex shadowing component , so that we obtain the following model where is now a complex random variable , with and arbitrary phase .this new model obviously represents a similar scenario as the previous one in section iii.a , since all the clusters suffer from the same shadowing , which can be justified by the fact that the shadowing can occur near the transmitter or receiver side .the pdf of the instantaneous snr of the model in eq .( [ modelo_general_paris ] ) is derived as follows ._ lemma 1 _ : let , with ] , be the instantaneous snr of the model in eq .( [ model_propuesto ] ) .then where and ._ proof _ : the conditioned signal power conditioned to the sum of the i.i.d . shadowed dominant component powers follows a - distribution .moreover , since , then , where .thus , again we have the same conditional form as in and so we can follow the same steps to prove that the snr of the model follows a - shadowed distribution . do not have to be identically distributed to complete the proof .all that is needed is that the normalized rate parameter of each shadowing power must be equal .although from a mathematical point of view this is a valid model , this scenario is hard to imagine in practical conditions . therefore , we will restrict ourselves to the case with i.i.d . shadowing components . ]therefore , the snrs of both physical models presented in eq .( [ modelo_general_paris ] ) and eq .( [ model_propuesto ] ) follow a - shadowed distribution .the closed - form expressions for the cumulative distribution and the moment generating functions can be found in .in the previous section , we have introduced two different physical models which lead to the - shadowed distribution .now , we show how each of these models reduces to the general - and - fading distributions , respectively . 
by doing so ,we show that the - shadowed distribution can unify all classic fading models , both for homogeneous and non - homogeneous propagation conditions , and their most general counterparts .the - distribution is destined to model homogeneous environments , where the scattering for each cluster can be modeled with a circularly symmetric random variable .the derivation of the - distribution from lemma 1 is given in the following corollary ._ corollary 1 _ : let , with ] , be the instantaneous snr of the model in eq .( [ modelo_general_paris ] ) , i.e. , .if , ._ proof _ : by taking the limit in eq .( [ pdf ] ) and applying the following property we obtain the pdf in eq .( [ pdf_gamma ] ) .notice that eq .( [ limite1 ] ) can be carried out by simply exploiting the series expression of the hypergeometric function of scalar argument , where the first term has the unit value and the rest of the terms are powers of the scalar argument ( * ? ? ?* eq . ( 13.1.2 ) ) , so that they become zero when taking the limit .we give the following interpretation about corollary 2 . by tending ,we eliminate all the dominant components of the model , regardless the value of the shadowing parameter , so that we only have scattering components in each cluster , i.e. , we obtain a model which follows a nakagami- distribution or one of its particular cases , rayleigh or one - sided gaussian , depending on the value of . the nakagami- ( hoyt ) and the - distributions are employed in non - homogeneous propagation conditions environments , where the scattering model is non - uniform and can be modeled by elliptical ( or non - circularly symmetric ) random variables . at first glance , such scenario does not seem to fit with the - shadowed fading model .however , we can give a different interpretation to the cluster components of the physical model in eq .( [ model_propuesto ] ) : they can be interpreted as a set of uniform scattering waves with random averages .these random fluctuations in the average , which are different for each cluster , are responsible for modeling the non - homogeneity of the environment and ultimately lead to breaking the circular symmetry of the scattering model. we must note that a similar connection was inferred in , where the distribution was shown to behave as a rayleigh distribution with randomly varying average power .we show next how the circular symmetry of the model can be broken by using the result of lemma 2 ._ corollary 3 _ : let , with ] and ] , be the instantaneous snr of the model in eq . ( [ model_propuesto ] ) , i.e , . if , ._ proof _ : the result is straightforward by applying and making some algebraic manipulations. notice that by setting , we transform the i.i.d .random dominant components of the eq .( [ model_propuesto ] ) into scattering components .in fact , since , then we set and the -th random dominant component becomes a gaussian random variable .thus we are adding two gaussian random variables together in each cluster , which leads to an equivalent gaussian random variable , so that the one - sided gaussian , rayleigh or nakagami- models are obtained depending on the number of clusters considered .the table i summarizes all the models that are derived from the - shadowed fading model , where the - shadowed model parameters are underlined for the sake of clarity .when the - shadowed parameters are fixed to some specific real positive values or tend to some specific limits , we can obtain all the classic central models , i.e. 
, the rayleigh , one - sided gaussian , nakagami- and nakagami- , the classic noncentral rician fading , and their general counterparts , the rician shadowed , - and - fading models . it is remarkable that there are two ways of deriving the one - sided gaussian , rayleigh and nakagami- models , depending on whether the approach in section iv.a or section iv.b is used . table i ( channels and their - shadowed parameters ) : one - sided gaussian , settings a ) and b ) ; rayleigh , settings a ) and b ) ; nakagami- , settings a ) and b ) ; nakagami- ( hoyt ) , with the third parameter equal to 0.5 ; rician with parameter ; - ; - ; rician shadowed . the characterization of the ergodic channel capacity in fading channels , defined as $\bar{c}\triangleq\int_0^{+\infty}\log_2(1+\gamma)f_\gamma(\gamma)d\gamma$ , where $\gamma$ is the instantaneous snr at the receiver side , has been a matter of interest for many years . while for the case of rayleigh fading it is possible to obtain relatively simple closed - form expressions for the capacity , the consideration of more general fading models leads to very complicated expressions that usually require the use of meijer g - functions . in order to overcome the limitation of the exact characterization of the - shadowed channel capacity due to its complicated closed form , it seems more convenient to analyze the high - snr regime . in this situation , the ergodic capacity can be approximated by ( * ? ? ? * eq . ( 8) ) , which is asymptotically exact , and where is a constant value independent of the average snr that can be given by . in fact , the parameter can be interpreted as the capacity loss with respect to the additive white gaussian noise ( awgn ) case , since the presence of fading causes . when there is no fading , and this reduces to the well - known shannon result . using this approach , we derive a simple closed - form expression for the asymptotic capacity of the - shadowed model , which is a new result in the literature . _ lemma 3 _ : in the high - snr regime , the ergodic capacity of a - shadowed channel can be accurately lower - bounded by where is the binary logarithm , is the base of the natural logarithm , is the average snr at the receiver side , i.e. , and can be expressed as where is the digamma function ( * ? ? ? * eq . ( 6.3.1 ) ) and is a generalized hypergeometric function of one scalar argument . _ proof _ : see appendix a. notice that when , we obtain the ergodic capacity of the rician shadowed in the high - snr regime .
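since the closed - form loss expression of lemma 3 is not reproduced here , a monte carlo stand - in is sketched below : snr samples are drawn from the physical model of section iii ( gaussian scattering clusters plus dominant components with a common , unit - mean gamma power fluctuation of shape m ) , and the capacity loss is estimated as the gap between the logarithm of the average snr and the mean log - snr . the parameter names and values are illustrative assumptions .

```python
# monte carlo sketch: sample the instantaneous snr of the kappa-mu shadowed physical
# model (mu clusters, common gamma-distributed shadowing of the dominant components)
# and estimate the high-snr capacity loss  L = log2(mean snr) - E[log2(snr)].
import numpy as np

def kappa_mu_shadowed_snr(n, mu, kappa, m, mean_snr=1.0, seed=3):
    """mu must be a positive integer in this cluster-based construction."""
    rng = np.random.default_rng(seed)
    sigma2 = mean_snr / (2.0 * mu * (1.0 + kappa))       # scattering power per dimension
    d2 = mean_snr * kappa / (1.0 + kappa)                # total power of the dominant components
    p = np.sqrt(d2 / mu)                                 # equal split across the mu clusters
    xi = np.sqrt(rng.gamma(shape=m, scale=1.0 / m, size=n))  # unit-mean-power shadowing amplitude
    w = np.zeros(n)
    for _ in range(mu):
        in_phase = rng.normal(0.0, np.sqrt(sigma2), n) + xi * p
        quadrature = rng.normal(0.0, np.sqrt(sigma2), n)
        w += in_phase ** 2 + quadrature ** 2
    return w

for m_val in (1.5, 200.0):                               # heavier vs nearly negligible shadowing
    snr = kappa_mu_shadowed_snr(500_000, mu=2, kappa=3.0, m=m_val)
    loss = np.log2(snr.mean()) - np.mean(np.log2(snr))   # estimated capacity loss in bps/hz
    print(m_val, round(loss, 3))
```

the estimate with large m should approach the - loss of corollary 5 , while letting the ratio of dominant to scattered power vanish should recover the nakagami- behaviour , mirroring table i .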
as opposed to the exact analysis in , which requires for the evaluation of a bivariate meijer g - function , lemma 3 provides a very simple closed - form expression for the capacity in the high - snr regime .more interestingly , since the - and - fading channel models are but particular cases of the shadowed distribution , we also obtain the capacity in these scenarios without the need of evaluating meijer g - functions as in .this is formally stated in the following corollaries ._ corollary 5 _ : in the high - snr regime , the ergodic capacity of a - channel can be accurately lower - bounded by where can be expressed as _ proof _ : the eq .( [ t_kmu ] ) is derived by applying the limit in eq .( [ t ] ) , so that the collapses in a hypergeometric function since _ corollary 6 _ : in the high - snr regime , the ergodic capacity of an - channel can be accurately lower - bounded by where can be expressed as _ proof _ : we obtain the - asympototic capacity loss from eq .( [ t ] ) by setting , and as table i indicates .hence , the expressions of the - and - asymptotic capacities have been jointly deduced from the result of , which are also new results . moreover , deriving the asymptotic capacity of the - shadowed has not been harder than deriving the - or - asymptotic capacities directly , since the - and - moments are expressed , like in the - shadowed case , in terms of a gauss hypergeometric function .thus , we are hitting two ( actually three ) birds with one stone . using the equivalences in table i, we can obtain even simpler expressions for classic fading models which reduce to existing results in the literature , for nakagami - m , rician and hoyt .for the sake of clarity , we omit the straightforward derivations of the rest of asymptotic capacities .instead , we summarize in table ii their capacity losses with respect to the awgn channel in the high - snr regime , where is the incomplete gamma function ( * ? ? ?* eq . ( 6.5.3 ) ) and is the euler - mascheroni constant , i.e. , .c|c channels & ergodic capacity loss ( ) [ bps / hz ] + one - sided gaussian & + rayleigh & + nakagami- & + nakagami- ( hoyt ) & + rician with parameter & + - & + & + & + - & + & + & + rician shadowed & + & +we now study the evolution of the capacity loss for the shadowed , - and - fading models with respect to the awgn case . in fig . 
[fig:c_clasicos] and fig. [fig:c_general], we plot the ergodic capacity of the classic and generalized fading models, respectively.

[figure data omitted: ergodic capacity curves for the awgn channel and the classic models (rician with k = 10, nakagami-m with m = 1.5, rayleigh, hoyt with q = 0.2, one-sided gaussian) in fig. [fig:c_clasicos], and for the κ-μ, η-μ and κ-μ shadowed models in fig. [fig:c_general], in each case together with the corresponding asymptotes.]

we observe that all the models converge accurately to their asymptotic capacity values, remaining below the shannon limit, i.e., the capacity of the awgn channel. therefore, the asymptotic ergodic capacity expression derived in lemma 3 for the κ-μ shadowed model is here validated for the one-sided gaussian, rayleigh, nakagami-m, nakagami-q, rician, rician shadowed, κ-μ and η-μ ergodic capacities in the high-snr regime. in figs. [fig:m0p5]-[fig:m20] we show the evolution of the κ-μ shadowed asymptotic capacity loss as κ grows. when the shadowing cannot be neglected, i.e., for the low and moderate values of m corresponding to figs. [fig:m0p5]-[fig:m3], having more power in the dominant components does not always improve the ergodic capacity; it can instead raise the capacity loss considerably, especially for a large number of clusters. when m is large, the shadowing can be neglected and the model tends to the κ-μ fading model, where an increase in the power of the dominant components is clearly favorable for the channel capacity. therefore, receiving more power through the dominant components does not always increase the capacity in the presence of shadowing. in fact, we observe two different behaviors in the evolution of the capacity loss with respect to the κ parameter: for m < μ, increasing κ is detrimental for the capacity, whereas for m > μ the capacity is improved as κ is increased, i.e., in the presence of a stronger los component.
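as a numerical companion to table ii and to the rayleigh and one-sided gaussian reference levels marked in these figures, the short script below evaluates the high-snr capacity loss of the nakagami-m family from the identity l = log2(e)(ln m − ψ(m)). this identity is a standard result for gamma-distributed snr and is quoted here on our own account, since the table entries themselves did not survive extraction.

```python
import numpy as np
from scipy.special import digamma

def nakagami_capacity_loss(m):
    """High-SNR ergodic-capacity loss (bps/Hz) of a Nakagami-m channel.

    For gamma-distributed SNR with shape m and mean gamma_bar,
    E[ln gamma] = ln(gamma_bar) - (ln m - psi(m)), hence
    L = log2(gamma_bar) - E[log2 gamma] = log2(e) * (ln m - psi(m)).
    """
    return np.log2(np.e) * (np.log(m) - digamma(m))

for label, m in [("one-sided gaussian (m=0.5)", 0.5),
                 ("rayleigh (m=1)", 1.0),
                 ("nakagami-m (m=3)", 3.0),
                 ("nakagami-m (m=20)", 20.0)]:
    print(f"{label:28s}  L = {nakagami_capacity_loss(m):.4f} bps/Hz")
```

the m = 0.5 and m = 1 outputs (about 1.83 and 0.83 bps/hz) coincide with the one-sided gaussian and rayleigh levels marked in the figures.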
in the limit case of m = μ, we see that the capacity loss is independent of κ. we also see that the capacity loss decreases as μ grows, since having a larger number of clusters reduces the fading severity of the small-scale propagation effects.

[figure data omitted: figs. [fig:m0p5]-[fig:m20] show the κ-μ shadowed asymptotic capacity loss versus κ for μ = 0.5, 0.7, 1, 1.5, 3 and 20, one panel for each of m = 0.5, 1, 3 and 20; the rayleigh (about 0.83 bps/hz) and one-sided gaussian (about 1.83 bps/hz) reference levels are marked on the corresponding curves.]

we have also marked in figs. [fig:m0p5]-[fig:m20] some models that can be deduced from the κ-μ shadowed model; they can be identified in the different legends and also at specific points circled on the curves. finally, fig. [fig:kappa-mu] and fig. [fig:eta-mu] depict the asymptotic ergodic capacity loss for the κ-μ and η-μ fading models, respectively. we observe that fig. [fig:kappa-mu] is quite similar to fig. [fig:m20] because, as mentioned before, the κ-μ shadowed model with large m can be approximated by the κ-μ fading model. in fig. 8, we see that, regardless of the number of clusters, there is a minimum in the channel capacity loss at η = 1, which divides the fading behavior into two symmetric parts, as expected. it is also noticeable that in fig. [fig:eta-mu] we have specified the limit cases for η → 0 and η → ∞. in fact, when μ = 0.5, the η-μ model collapses into the one-sided gaussian model for η → 0 or η → ∞, whereas for η = 1 it collapses into the rayleigh model. in turn, when μ = 1, the η-μ model reduces to the rayleigh case for η → 0 or η → ∞. this is shown in the figure by also including the rayleigh and one-sided gaussian capacity loss values as horizontal dotted and dashed lines, respectively.

we have proved that the κ-μ shadowed model unifies the κ-μ and η-μ fading distributions. by a novel physical interpretation of the shadowing of the dominant components, we have shown that the κ-μ shadowed model can also be employed in non-homogeneous environments, which gives the κ-μ shadowed distribution greater flexibility to model different propagation conditions than other alternatives in wireless environments. thus, the κ-μ shadowed model unifies all the classic fading models, i.e.
, the one-sided gaussian, rayleigh, nakagami-m, nakagami-q and rician fading channels, and their generalized counterparts, the κ-μ, η-μ and rician shadowed fading models. using this connection, simple new closed-form expressions have been deduced to evaluate the ergodic capacity in the high-snr regime for the κ-μ shadowed model, and hence for the κ-μ and η-μ fading models, giving clear insight into the contribution of the fading parameters to the capacity improvement or degradation. as a closing remark, one may wonder whether the name κ-μ shadowed distribution is still appropriate for this model, since its flexibility transcends the original characteristics presented in .

[figure data omitted: fig. [fig:kappa-mu] shows the κ-μ asymptotic capacity loss versus κ and fig. [fig:eta-mu] the η-μ asymptotic capacity loss versus η (from 0.001 to 1000), for μ = 0.5, 0.7, 1, 1.5, 3 and 20, with the rayleigh and one-sided gaussian reference levels marked.]

in the high-snr regime, it is well known that the ergodic capacity can be lower-bounded as in eq. ([c_asin]), where the capacity-loss parameter is related to the derivative of the amount of fading through eq. ([af]). thus, thanks to eq. ([af]), we first have to compute the moments of the snr at the receiver side, i.e.,

\[
\mathbb{E}\left[\gamma^{n}\right] \triangleq \int_{0}^{+\infty}\gamma^{n} f_{\gamma}(\gamma)\,d\gamma
= \frac{\mu^{\mu} m^{m}(1+\kappa)^{\mu}}{\Gamma(\mu)\,\bar{\gamma}^{\mu}(\mu\kappa+m)^{m}}
\int_{0}^{+\infty}\gamma^{\mu+n-1}\,\mathrm{e}^{-\frac{\mu(1+\kappa)\gamma}{\bar{\gamma}}}
\;{}_{1}\mathcal{F}_{1}\!\left(m;\mu;\frac{\mu^{2}\kappa(1+\kappa)}{\mu\kappa+m}\frac{\gamma}{\bar{\gamma}}\right)d\gamma .
\]

observing that the remaining integral corresponds to a laplace transform evaluated at \(\mu(1+\kappa)/\bar{\gamma}\), we then have (see eq. (4.23.17) of )

\[
\mathbb{E}\left[\gamma^{n}\right]
= \frac{\Gamma(\mu+n)}{\Gamma(\mu)}\,\frac{\bar{\gamma}^{n}\, m^{m}(1+\kappa)^{-n}}{\mu^{n}(\mu\kappa+m)^{m}}
\;{}_{2}\mathcal{F}_{1}\!\left(m,\mu+n;\mu;\frac{\mu\kappa}{\mu\kappa+m}\right),
\]

where \({}_{2}\mathcal{F}_{1}\) is the gauss hypergeometric function of scalar argument (see eq. (15.1.1) of ). by making a well-known transformation involving the arguments of the gauss hypergeometric function (see eq. (15.3.3) of ), we obtain

\[
\mathbb{E}\left[\gamma^{n}\right]
= \frac{\Gamma(\mu+n)}{\Gamma(\mu)}\left(\frac{\mu\kappa+m}{\mu m(1+\kappa)}\right)^{\!n}\bar{\gamma}^{\,n}
\;{}_{2}\mathcal{F}_{1}\!\left(\mu-m,-n;\mu;\frac{\mu\kappa}{\mu\kappa+m}\right).
\]

the amount of fading is then deduced by using the product rule in eq. ([moments2]) with the gauss hypergeometric function expressed in series form (see eq .
( 15.1.1 ) ) .as the derivative of a pochhammer symbol can be given by a difference of digamma functions , we obtain \\ & \times\ _ 2\mathcal{f}_1\big(\mu - m ,-n;\mu;\frac{\mu\kappa}{\mu\kappa+m}\big)\\ & -\sum_{r=1}^{+\infty}\frac{(\mu - m)_r(-n)_r}{(\mu)_r}(\psi(-n+r)-\psi(-n))\\ & \times\frac{\big(\frac{\mu\kappa}{\mu\kappa+m}\big)^r}{r!}\big\ } , \end{split}\ ] ] where the infinite sum starts at because the first term equals zero. setting the moments order , we get by applying some algebraic manipulations , we finally obtain where the infinite sum can be expressed in terms of the generalized hypergeometric function and so we have the result of eq .( [ t ] ) . notice that this result gives a simple new expression for the derivative of the gauss hypergeometric funcion with respect to or , when this same parameter or equals zero .in fact , this derivative is expressed in terms of the generalized hypergeometric function instead of in terms of a kamp de friet function as proposed in .g. d. durgin , t. s. rappaport and d. a. de wolf , new analytical models and probability density functions for fading in wireless communications , " _ ieee trans .50 , no . 6 , p. 1005 - 1015 , 2002 .x. wang and n. c. beaulieu , switching rates of two - branch selection diversity in - and - distributed fadings , _ ieee trans .wireless commun .1667 - 1671 , apr . 2009 .k. p. peppas , f. lazarakis , a. alexandridis and k. dangakis , error performance of digital modulation schemes with mrc diversity reception over - fading channels , " _ ieee trans .wireless commun ._ , vol.8 , no.10 , pp.4974 - 4980 , october 2009 r. cogliatti , r. a. a. de souza , and m. d. yacoub , practical , highly efficient algorithm for generating - and variates and a near-100% efficient algorithm for generating - variates , _ ieee commun .1768 - 1771 , nov . 2012 .k. p. peppas , sum of nonidentical squared - variates and applications in the performance analysis of diversity receivers , _ ieee trans .413 - 419 , jan . 2012 .p. sofotasios , e. rebeiz , l. zhang , t. tsiftsis , d. cabric , and s. freear , energy detection based spectrum sensing over - and - extreme fading channels , _ ieee trans . veh .3 , pp . 1031- 1040 , mar .2013 .a. abdi , w. c. lau , m .- s .alouini , and m. kaveh , a new simple model for land mobile satellite channels : first- and second - order statistics , " _ ieee trans .wireless commun .3 , pp . 519528 , may 2003 .a. snchez , e. robles , f. j. rodrigo , f. ruiz - vega , u. fernndez - plazaola and j. f. paris , `` measurement and modelling of fading in ultrasonic underwater channels , '' _ proc .underwater acoustic conf ._ , rhodes , greece , june 2014 .s. l. cotton , shadowed fading in body - to - body communications channels in an outdoor environment at 2.45 ghz , " _ ieee - aps topical conf . on antennas propag .wireless commun.(apwc ) , 2014 _ , vol ., no . , pp .249 - 252 , 3 - 9 aug . 2014 .s. l. cotton , seong ki yoo and w.g .scanlon , a measurements based comparison of new and classical models used to characterize fading in body area networks , " _ ieee mtt - s international microwave workshop series on rf and wireless technologies for biomedical and healthcare applications ( imws - bio ) 2014 _ , pp.1 - 4 , 8 - 10 dec .2014 n.y .ermolova , useful integrals for performance evaluation of communication systems in generalised - and - fading channels , " _ iet communications _ , vol .3 , no 2 , pp . 303 - 308 , feb .a. annamalai , e. 
adebola , asymptotic analysis of digital modulations in - , - and - fading channels , " _ iet communications _ , vol3081 - 3094 , nov .2014 .c. garca - corrales , f. j. caete , and j. f. paris , capacity of - shadowed fading channels , " _ international journal of antennas and propagation _2014 , article i d 975109 , 8 pages , 2014 .f. yilmaz and m .- s .alouini , novel asymptotic results on the high - order statistics of the channel capacity over generalized fading channels , " in _ proc .2012 ieee int .workshop on signal processing advances in wireless commun ._ , pp . 389393 .alouini and a. j. goldsmith , capacity of rayleigh fading channels under different adaptive transmission and diversity - combining techniques , " _ ieee trans .1165 - 1181 , 1999 .l. u. ancarani and g. gasaneo , derivatives of any order of the gaussian hypergeometric function 2f1 ( a , b , c ; z ) with respect to the parameters a , b and c , " _ journal of physics a : mathematical and theoretical _ , vol .42 , no 39 , p. 395208
|
this paper shows that the recently proposed - shadowed fading model includes , besides the - model , the - fading model as a particular case . this has important relevance in practice , as it allows for the unification of these popular fading distributions through a more general , yet equally tractable , model . the convenience of new underlying physical models is discussed . then , we derive simple and novel closed - form expressions for the asymptotic ergodic capacity in - shadowed fading channels , which illustrate the effects of the different fading parameters on the system performance . by exploiting the unification here unveiled , the asymptotic capacity expressions for the - and - fading models are also obtained in closed - form as special cases .
|
the fourier transform is widely used in timing analysis in astronomy . through a discrete fourier transform, a light curve can be decomposed into sine wave components in the frequency domain where is the fourier amplitude of sine wave components at frequency with and represents the photon counts during a time interval , , with .the fourier spectrum can be a powerful technique in the search for periodic signals , pulsations , and quasi - periodic oscillations in the light curve of a x - ray source .the fourier power density is used to describe the variability amplitude at different frequency .a peak with power in excess of the noise distribution power in the fourier spectrum indicates the existence of a periodic component in the light curve .when estimating the amplitude of a periodic signal from the fourier power density spectrum , one must assume that the shape of a signal pulse is sinusoidal , which is usually not true for variability due to real physical processes .even more care should be taken in interpreting the fourier spectrum of an aperiodic process in the time domain .it is a prevalent misconception that the fourier power spectrum is the only way to express the distribution of variability amplitude vs. time scale , and that is a quantity describing the variability amplitude for the time scale .the actual relation between the fourier power and the variability process occurring in the time domain is given by parseval s theorem : or equivalently , where is the counting rate at .parseval s theorem states that the integral of the fourier power density over the whole frequency range is equal to the variability power of the same process in the time domain .in fact , that is just the reason why the quantity is called _ power _ density .however , parseval s theorem says nothing about the power density _ distribution _ over the time domain .the fourier power at any given frequency within the limited range , , actually represents a sum of complex amplitudes from an infinite number of aliased frequencies , each contribution of which is reduced by a factor of sinc , with the sum of the squares of all the factors being unity .in contrast , the power at any given frequency in a time - based spectrum is most sensitive to the _ rate of change _ of the fourier spectrum with frequency near the nyquist frequency ( ) and its aliases , since the sinc function given above is changing most rapidly for such frequencies . the rms variation vs. time scale of a time seriesmay , in fact , differ substantially from its fourier spectrum . even in the simplest case , where the signal is purely sinusoidal with a frequency , the fourier spectrum is a function for the continuous fourier transform or approximately a function for the discrete case .such a function has little power density at frequencies except those near , which , although in this case is a clear representation of what physical process is going on , is still not an accurate representation of the variability of the amplitude in a time series .only when the time step is set equal to or much greater than the period , , can the corresponding light curve ( or pulse profile ) of a sinusoidal signal be completely flattened to make the time variation vanish . 
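as an aside, parseval's relation invoked above is easy to verify numerically. the snippet below uses numpy's unnormalized dft convention, which need not coincide with the normalization of eqs. (1)-(3), so the factors of n are specific to this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01                                   # 10 ms bins
t = np.arange(4096) * dt
counts = rng.poisson(50 + 20 * np.sin(2 * np.pi * 2.0 * t))   # 2 Hz sinusoid + Poisson noise

a = np.fft.fft(counts)                      # unnormalized DFT amplitudes
n = counts.size

lhs = np.sum(np.abs(a) ** 2)                # summed Fourier power
rhs = n * np.sum(counts.astype(float) ** 2) # N * summed squared counts (Parseval, numpy convention)
print(np.allclose(lhs, rhs))                # True

# variance form: the fluctuation power about the mean comes from the non-zero frequencies
var_freq = np.sum(np.abs(a[1:]) ** 2) / n**2
print(np.isclose(var_freq, counts.var()))   # True
```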
at time scales shorter than , however , a variation of intensity still exists in the light curve , and under some circumstances one would wish that power over this region should not be zero .in fact , different but mathematically equivalent representations with different bases or functional coordinates in the frequency domain do exist for certain light curves .the fourier transform with its trigonometric basis is just one of many possible transforms between the time and frequency domains , and may not necessarily represent the true power distribution of a _ physical _ process in the time domain .one can also argue that any observable physical process always occurs in the time domain , and this is why real variability amplitudes at different time scales are useful in understanding a time varying process , in addition to the understanding gleaned from the conventional fourier spectrum .an algorithm to calculate the power density spectrum directly in the time domain without using the fourier transform has been proposed ( ) .we introduce the algorithm and compare the power spectra in the time domain with the fourier spectra for different kinds of time process model in 1 .we have studied the power density spectra of a sample of neutron stars and black hole binaries , in both the frequency _ and _ the time domains .the results are presented in 3 .a discussion of the potential gain in the understanding the intrinsic nature of physical processes occurring near compact objects via comparison of the two kinds of power density spectrum is given in 4 .the initial definition of the variation power in a light curve is where the light curve , is a counting series obtained from a time history of observed photons with a time step , is the corresponding rate series , and and are the average counts and count rates , respectively .the unit used for a power in spectral analysis in astronomy is usually rms , where rms refers to the analysed time series . in the case of being a counting series ,the power can be expressed in units of counts/s . represents an integral of variability power for the region of time scale .it is easy to prove that if .thus we can define the power density in the time domain as the rate of change of with respect to the time step in counts/s or rms/s .( 5 ) and ( 6 ) , we can calculate the power density spectrum in the time domain for a light curve . in practice , the differential calculus in eq .( 6 ) can be performed numerically with and being the two powers at the time scale and ( ) , respectively , and .to detect the source signal in a power spectrum against a noise background , we need to know the noise power of a time series consisting only of noise . for a white noise series where the follows the poisson distribution ,the noise power is where is the expectation value of , but with units of counts , which also the variance of the poisson variable , and is the expectation value of the counting rate which can be estimated by the global average of counting rate of the studied observation . the noise power density at is the signal power density can be defined as and the fractional signal power density as in units of 1/s or ( rms / mean)/s , where both rms and mean refer to the time series . to study the signal power density in the time domain over a background of noise in an observed photon series , we divide the observation into segments . for each segment , the signal power density calculated by eq .. 
the average power density of the studied observation is and its standard deviation is . we can use the statistical methods based on the normal distribution to make statistical inference , e.g. significance test , on . at short time steps, the number of counts per bin may be too small for it to behave as a normally distributed variable .but it is easy to get a large enough total number ( ) of segments from a certain observation period to satisfy the condition for applying the central limit theorem as well as for using the normal statistics on the mean . to compare the power spectrum in the time domain with the fourier spectrum for the same process , we study three different kinds of time series .\(1 ) _ periodic signal _we use eq.(5 ) and ( 6 ) to calculate the power density spectrum in the time domain for a sinusoidal process with a period of 0.5 s. in figure 1 , a piece of the counting rate curve of the studied process is shown in the top panel and the power density distribution of time scale of the sinusoidal signal is shown by the solid line in the middle panel . from the rate curve , we derive a corresponding counting series with a time step 2 ms and use 8192-point ffts to get the fourier power densities and the corresponding densities in the time area .the fourier spectrum is shown by the dashed line in the middle panel of fig .1 . the areas under the two power density curves in fig . 1 are the same , as dictated by parseval s theorem .it is obvious that the fourier spectrum can not be interpreted as the distribution of variability amplitude vs. time scale .the power being concentrated at the sinusoid frequency does not mean that the intensity variation of the process exists only at the corresponding time scale .from the light curves with time steps s and 0.28 s , shown in the bottom panel of fig . 1 , we can see that variation of the intensity definitely exists at time scales s , but almost no power does in the fourier spectrum at those time scales .in contrast with the fourier spectrum , the power density spectrum determined in the time domain ( the solid line in the middle panel of fig.1 ) gives a proper description of the variability amplitude distribution of time scales .figure 2 shows the power density spectrum ( the solid line in the bottom panel ) derived in the time domain and the corresponding fourier spectrum ( the dashed line in the bottom panel ) for a periodic triangular signal ( the top panel ) .the fourier spectrum is obtained with 8192-point fft for a light curve with time step 1 ms .the many peaks at short time scales in the fourier spectrum ( high frequency harmonics ) do not mean that there really exist strong variations in the process at those time scales , which are just necessitated by mathematically decomposing the triangular signal into sine waves . as a result , a fourier spectrum may overestimate power densities at short time scales ( high frequency harmonics region ) for a periodic signal with pulse shape being far from a sinusoid .\(2 ) _ stochastic shots _we now consider the power spectra of a signal consisting of stochastic shots .both the rise and decay fronts of the shots are an exponential with a time constant taken uniformly from the range between 5 ms and 0.2 s. the separation between two successive shots is exponentially distributed with an average separation s. 
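before completing the shot example, here is a minimal python sketch of the time-domain estimator defined by eqs. (5)-(10) and the segment-averaging procedure just described. since the exact normalizations in those equations were lost in extraction, the code fixes one plausible reading: the total power at a time step is the variance of the binned count rate, the white-noise level is taken as the mean rate divided by the time step, and the density is the finite-difference derivative of the power with respect to the time step.

```python
import numpy as np

def time_domain_power_density(counts, dt0, n_rebin=10, n_seg=32):
    """One reading of the time-domain power-density estimator sketched above.

    counts : photon counts at base resolution dt0 (seconds).
    Returns (time_scales, mean_density, std_error): the density is the finite
    difference -dP/d(dt) of the Poisson-noise-subtracted rate variance,
    averaged over n_seg independent segments of the light curve.
    """
    steps = dt0 * 2 ** np.arange(n_rebin + 1)         # geometric grid of time steps
    seg_len = counts.size // n_seg
    dens = np.zeros((n_seg, n_rebin))
    for s in range(n_seg):
        seg = counts[s * seg_len:(s + 1) * seg_len]
        p = []
        for dt in steps:
            nbin = int(round(dt / dt0))
            m = (seg.size // nbin) * nbin
            binned = seg[:m].reshape(-1, nbin).sum(axis=1)
            rate = binned / dt
            p_total = rate.var()                      # total variability power at this time step
            p_noise = rate.mean() / dt                # Poisson (white) noise level, assumed form
            p.append(p_total - p_noise)
        p = np.array(p)
        dens[s] = (p[:-1] - p[1:]) / (steps[1:] - steps[:-1])
    scales = np.sqrt(steps[:-1] * steps[1:])          # representative time scale of each bin
    return scales, dens.mean(axis=0), dens.std(axis=0) / np.sqrt(n_seg)
```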
the average signal rate is assumed as cts / s .the peak height of the shot follows a uniform distribution between zero and the maximum .the top panel of fig .3 shows a piece of the signal counting curve with time step s and the solid line in the bottom panel of fig .3 is the power density spectrum in the time domain expected for the signal . a simulated 2000 s light curve with 1 ms time step is produced by a random sampling of the signal curve with poisson fluctuations plus a white noise with mean rate 5000 cts / s .the middle panel of fig .3 shows a piece of the light curve with s obtained from the simulated 1 ms light curve . in calculating the signal power density at a time scale ,the total light curve with time step was divided into data segments with bins each . if the segments number in cases of large time scales , we let and decreased the number of data points in each segment accordingly .for each segment of the light curve with time bin , the total powers at two time scales and through eq .( 5 ) were calculated .the corresponding noise powers were calculated by eq .( 8) with set equal to the average counting rate .the total power density and noise power density at can be calculated by eq .( 7 ) and ( 9 ) , respectively .finally , the signal power density in the time domain is . in the bottom panel of fig . 3, the plus signs mark the average signal power densities at different time scales . for the same light curve with 1ms time binning , we also calculated the leahy density for each s segment , where is the fourier amplitude at frequency determined from a 4096-point fft .it is well known that the noise leahy density , so the signal leahy density can be written as and the fourier power density of the signal is expressed by ( , ) . the dashed line in the bottom panel of fig .3 shows the average fourier power density spectrum of the signal with respect to time scale . as the characteristic time of a shotis taken in the range between 5 ms and 0.2 s , there should exist considerable variability over this time scale range as represented by the power density spectrum in the time domain expected for the signal ( the solid line in the bottom panel of fig . 3 ) and by that obtained from the simulated data including noise ( pluses in the bottom panel of fig .figure 4 represents the power spectra for another shot model with shorter characteristic time constant between 0.5 ms and 2 ms , average separation between two successive shots 3 ms , average signal rate cts / s , and noise rate 5000 cts / s .> from fig . 3 and fig .4 we can see that for a random shot series the fourier spectrum is more or less consistent with the power spectrum in the time domain at time scales greater than the characteristic time scale of the model , but significantly underestimates the power densities at shorter time scales .\(3 ) _ markov process_ the markov process or autoregressive process of first order can describe the character of the variability for many physical processes .a markov process can be expressed as the following stochastic time series where is a gaussian random variable with zero mean and unit variance , and the relaxation time of the process is with being the time step .the observed light curve for the signal is where is the average rate of the signal .we make a light curve of the signal with s , s , cts / s and . a piece of the produced signal light curve is shown in the top panel of fig . 5 . 
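for concreteness, the ar(1) simulation just described can be reproduced with the few lines below and then fed both to the time-domain estimator from the previous sketch (the name time_domain_power_density is ours) and to a standard leahy-normalized fft estimate. the numerical values marked as placeholders replace parameters that were lost from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
dt0, n = 1e-3, 2_000_000              # 1 ms bins, 2000 s light curve
tau = 0.1                             # assumed relaxation time of the AR(1) signal (s)
a = np.exp(-dt0 / tau)                # AR(1) coefficient giving that relaxation time

# AR(1) ("Markov") signal, shifted/scaled to a positive mean rate (pure-python loop, fine for a sketch)
x = np.zeros(n)
eps = rng.normal(size=n)
for i in range(1, n):
    x[i] = a * x[i - 1] + eps[i]
signal_rate = 200.0 + 50.0 * x / x.std()              # cts/s, placeholder amplitudes

# observed counts: Poisson sampling of the signal plus a steady 5000 cts/s background
counts = rng.poisson((signal_rate + 5000.0) * dt0)

scales, dens, err = time_domain_power_density(counts, dt0)   # estimator from the previous sketch

# Fourier (Leahy-normalized) comparison on the same data, noise level 2 subtracted
seg = counts[:4096 * 488].reshape(-1, 4096)
leahy = 2.0 * np.abs(np.fft.rfft(seg, axis=1))**2 / seg.sum(axis=1, keepdims=True)
excess = leahy.mean(axis=0) - 2.0                      # signal part of the Leahy power
freq = np.fft.rfftfreq(4096, d=dt0)                    # Hz, for plotting excess vs 1/frequency
# dens vs scales and excess vs 1/freq can now be compared directly
```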
the final observed light curve with being a poisson noise with mean rate 5000 cts / s , is shown in the middle panel of fig . 5 .in the bottom panel of fig . 5 , the solid line shows the power density distribution expected for the signal , the plus signs indicate excess power densities in the light curve estimated by eqs .( 5)-(10 ) and the dashed line by fft . from fig . 5 , we can see , similar to the case of stochastic shot process , that the proposed algorithm of evaluating power densities in the time domain is capable of extracting the power spectrum of the signal from noisy data and that the fourier spectrum significantly underestimate the power densities at time scales around or shorter than the characteristic time scale of a stochastic process .as revealed by above study based on simulations , the power density spectrum derived in the time domain can describe the real power distribution with respect to time scale for different time processes .the two kinds of power spectrum , fourier spectrum and spectrum in the time domain , differ in ways depending on models of the processes . by comparing the two power spectra ,more information about the nature of the variability of an object can be extracted . for this purposewe calculate both the power density spectra in the time domain and fourier spectra for the publicly available data of the proportional counter array ( pca ) aboard the rossi x - ray timing explorer ( rxte ) for a sample of x - ray binaries , 7 neutron stars and 7 black hole binaries .the pca observations of x - ray objects included in our study are listed in table 1 . for each analyzed observation, we use the version 4.1 of standard rxte ftools to extract the pca data . at 18 time steps between 0.001s and 2.5 s , we make the corresponding light curves with s duration in an energy band as noted in table 1. then we remove all ineffective data points caused by failure due to the satellite , detector or data accumulation system , and calculate the power density spectra in the time domain , being based on a similar procedure in our simulation .the corresponding fourier spectra are constructed by using s ( ) time resolution light curves divided into parts containing 8192 bins .figure 6 shows the results from 6 observations for 5 neutron star binaries : ks 1731 - 260 , 4u 1705 - 44 , gs 1826 - 24 , 4u 0614 + 091 , 4u 1608 - 522 in the low state and 4u 1608 - 522 in the high state .the two kinds of power spectrum in the studied accreting neutron stars are generally consistent with each other , at least for the continuum dominated region .the feature is rather complicated for such sources whose fourier spectra have significant quasi - periodic oscillation ( qpo ) structure .the left panel of fig .7 is from the qpo source , sco x-1 , whose power density spectrum in the time domain also shows a strong qpo structure but the peak shifts to shorter time scales due to the steep slope of the sinc function near the nyquist frequencies ( 7 - 12.5 hz ) as more and more of the 6 + hz qpo is accommodated by the time sampling .the difference between the spectrum of time domain and the fourier spectrum at short time scales for cyg x-2 ( see the right panel of fig .7 ) may also be caused by qpos . for black hole candidates ,we first analyze the canonical source cygnus x-1 . 
on may 10 , 1996 ( day 131 of 1996 ) ,the all - sky monitor on rxte revealed that cyg x-1 started a transition from the normal low ( hard ) state to a high ( soft ) state .after reaching the high state , it stayed there for about 2 months before going back down to the low state ( ) . during this period ,11 pointing observations of cyg x-1 were made by rxte .we use one observation of pca / rxte for each of the four states of cyg x-1 .the signal power density spectra in the time domain of cyg x-1 are shown by the plus signs in fig .8 and the corresponding fourier spectra by the dots in figthe power spectra of cyg x-1 in different states ( fig .8) have a common trait that the fourier spectra are significantly lower than the corresponding power spectra of time domain in the time scale region of s. besides cyg x-1 , the black hole candidates grs1915 + 105 , gro j1655 - 40 , gx 339 - 4 , and xte j1550 - 564 also demonstrate a significant excess in the power spectra in the time domain in comparison with the corresponding fourier spectra at time scales shorter than s ( see figure 9 ) . but the another analyzed black hole binary , grs 1758 - 258 , with qpo structures in its fourier spectrum behaves differently ( fig .a fourier power density spectrum presented in the time domain can not be interpreted as the real power density distribution of the physical process studied . in principal ,the power densities in the time domain can be derived from a fourier spectrum only when one knows every power density spectrum in the time domain for each sinusoidal function at all fourier frequencies and adds them up with weight factors being the fourier amplitudes .we propose here studying power spectra directly in the time domain . the definition of the power ( eq .5 ) is based just on the original meaning of rms variation and the power density spectrum ( eq .6 ) represents the distribution of the variability amplitude vs. time scale .the power density spectrum in the time domain obtained from an observation depends only on the intrinsic nature of the signal process and the statistical property of the observed data , as does the fourier power spectrum in the frequency domain .our simulation studies show that the proposed algorithm , eqs .( 7 ) , ( 9 ) and ( 10 ) , is capable of extracting power densities of the signal from noisy data ( comparing the expected signal spectra , the solid lines in the bottom panels of figs . 3 and 5 and in fig .4 , with the spectra calculated from the noisy data , plus signs in corresponding plots ) .these results indicate that the technique of spectral analysis in the time domain is a useful tool in timing and worth applying in temporal analysis for different sources . from figs .( 1 ) - ( 5 ) one can see that the difference of the fourier spectrum with the spectrum in the time domain is dependent on the model of time series and sensitive to the characteristic time of a stochastic process .we can then use the difference between two kinds of spectrum to study the intrinsic nature of a studied process .the power spectra shown in figs .( 8) and ( 9 ) for black hole candidates have a common character that the fourier spectra are significantly lower than the corresponding time - based spectra in the time scale region of s , which is similar to the simulation results for the stochastic shot model ( fig .3 ) or the markov process ( fig . 5 ) .the existence of stochastic shots in x - ray light curves of cyg x-1 has been noticed for a long time . 
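a toy generator for such shot trains, with two-sided exponential profiles and time constants in the 5 ms - 0.2 s range used in the simulation of section 2, is sketched below; the mean separation, mean rate and peak-height scale are placeholders, and the output can be poisson-sampled and analyzed exactly as in the ar(1) sketch above.

```python
import numpy as np

def shot_train(duration, dt0, mean_sep=0.03, tau_lo=0.005, tau_hi=0.2,
               mean_rate=200.0, rng=None):
    """Toy shot signal: two-sided exponential shots with time constants drawn
    uniformly in [tau_lo, tau_hi] and exponentially distributed separations.
    mean_sep, mean_rate and the unit peak-height scale are placeholders."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(duration / dt0)) * dt0
    rate = np.zeros_like(t)
    t_shot = rng.exponential(mean_sep)
    while t_shot < duration:
        tau = rng.uniform(tau_lo, tau_hi)
        amp = rng.uniform(0.0, 1.0)                     # peak height uniform in (0, max)
        i0 = max(0, int((t_shot - 10 * tau) / dt0))     # only touch a +/- 10*tau window
        i1 = min(t.size, int((t_shot + 10 * tau) / dt0))
        rate[i0:i1] += amp * np.exp(-np.abs(t[i0:i1] - t_shot) / tau)
        t_shot += rng.exponential(mean_sep)
    return mean_rate * rate / rate.mean()               # rescale to the desired mean rate

# e.g. counts = np.random.default_rng(3).poisson((shot_train(2000.0, 1e-3) + 5000.0) * 1e-3)
```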
with an improved searching and superposing algorithm and pca / rxte data of cyg x-1 in different states , feng ,li & chen ( 1998 ) find that the average shot profiles can be described by exponentials with characteristic time scales s. pottschmidt et al ( 1998 ) point out that an autoregressive process of first order with a relaxation time of about 0.1 s can reproduce approximately the variability of cyg x-1 .thus , the characteristics of time process in cyg x-1 revealed from our power spectral analysis is consistent with that from modeling its light curves .the black hole candidates grs 1915 + 105 , gro j1655 - 40 , gx 339 - 4 and xte j1550 - 564 also demonstrate characteristics similar to cyg x-1 , indicating that a stochastic process with a characteristic time s may be common in accreting black holes . while the absence of the broad - band noise above approximately 100 hz in black hole candidates has been noticed before ( ) , our analysis shows that the apparent absence in fourier spectra is caused by the existence of a stochastic process with characteristic time s and by the insensitivity of the fourier technique to detecting rapid variability in a stochastic process . at the same time ,the two kinds of power spectrum are more or less consistent for accreting neutron stars with continuum dominated fourier spectrum , as shown in fig .our simulation results , figs .( 3 ) - ( 5 ) , show that for a stochastic process the two kinds of spectrum can be consistent with each other at time scales greater than the characteristic time of the process . assuming that a significant variability of an accreting neutron star comes from a stochastic process with very short characteristic time constant ms can explain the consistence of the two kinds of power spectrum observed in neutron star systems .sunyaev & revnivtsev ( 2000 ) find that the power density spectra of accreting neutron stars with a weak magnetic field have significant broad noise component at the frequency 500 - 1000 hz .they suggest that those x - ray transients which demonstrate significant noise in their x - ray flux at frequencies above hz should be considered neutron stars .most sources studied in this work ( neutron star systems 4u 0614 + 091 , 4u 1608 - 522 , gs 1826 - 24 , 4u 1705 - 44 , ks 1731 - 260 , cyg x-2 , and black hole candidates cyg x-1 , gx 339 - 4 , grs 1915 + 105 , gro j1655 - 40 , xte j1748 - 288 , grs 1758 - 258 ) are also studied by sunyaev & revnivtsev ( 2000 ) and our results support their claim under the condition that the spectra and noise in their statement are restricted within the fourier framework .in contrast to the results from sunyaev & revnivtsev ( 2000 ) , our results are based on inferring the characteristic time from the relation between two kinds of spectrum , no matter what the absolute magnitude of power spectrum is .the characteristic feature we find appears in all spectral states of the black hole candidate cyg x-1 and the same is true with the neutron star binary 4u 1608 - 522 .the characteristic features in rapid variability for two different kinds of x - ray binary in our sample , which associate significant stochastic processes at the time scale s for black hole candidates and ms for neutron stars , are revealed only in continuum or broad noise .the qpo components behave in a more complicated fashion , but can be understood through the sensitivity of the time - based spectrum to variation at 2 t , i.e. 
, at half the fourier sampling frequency (the nyquist frequency). it has been found that qpo structures in power spectra can be caused by different kinds of signal, e.g., modulated periodic signals and stochastic autoregressive processes of higher order ( ). in the case of the neutron star binary sco x-1 (the left plot of fig. 7), the power density spectrum in the time domain reveals the qpo feature surprisingly well, though with a peak shifting to shorter time scales and with worse resolution. different processes with essentially different natures could result in almost the same fourier power spectrum, thus distinguishing them is difficult through timing analysis only with the fourier technique. this is a reason why we need to develop and apply alternative methods to supplement the fourier technique in spectral analysis. our results, though only preliminary, show that simultaneous use of both the fourier and the time domain methods can help in probing the intrinsic nature of timing phenomena, and further, in distinguishing between different kinds of accreting compact object.

the authors thank the referee for helpful comments and suggestions and dr. qu jinlu for help in data treatment. this work is supported by the special funds for major state basic research projects and the national natural science foundation of china. the data analyzed in this work are obtained through the heasarc online service provided by the nasa/gsfc.

cui w., zhang s. n., focke w., & swank j. h. 1997, apj, 484, 383
feng y. x., & chen l. 1999, apj, 514, 373
leahy d. a., darbro w., elsner r. f., et al. 1983, apj, 266, 160
li t. p. 2001, chin. j. astron. astrophys., 1, 313
pottschmidt k., konig m., wilms j., & staubert r. 1998, a&a, 334, 201
priestley m. b. 1981, spectral analysis and time series, london: academic press, 241
sunyaev r., & revnivtsev m. 2000, a&as, 358, 617
van der klis m. 1986, in: the physics of accretion onto compact objects, ed. by mason k. o., watson m. c., and white n. e., lecture notes in physics, 266, 157
van der klis m. 1988, in: ogelman h. & van den heuvel e. p. j., eds., timing neutron stars, kluwer academic publishers, 27

table 1: pca/rxte observations analyzed in this work.

neutron star:
source & pca obs. id & energy band & remark
ks 1731-260 & 10416-01-02-00 & 3-21 &
4u 1705-44 & 20073-04-01-00 & 3-20 &
gs 1826-24 & 30054-04-01-00 & 3-20 &
4u 0614+091 & 30054-01-01-01 & 3-21 &
4u 1608-522 & 30062-01-01-04 & 3-21 & low state
4u 1608-522 & 30062-02-01-00 & 3-21 & high state
sco x-1 & 30035-01-02-000 & 2-18 &
cyg x-2 & 30418-01-01-00 & 2-21 &

black hole:
source & pca obs. id & energy band & remark
cyg x-1 & 10412-01-01-00 & 2-13 & low to high
cyg x-1 & 10512-01-08-00 & 2-13 & high state
cyg x-1 & 10412-01-05-00 & 2-13 & high to low
cyg x-1 & 10236-01-01-03 & 2-13 & low state
grs 1915+105 & 20402-01-05-00 & 5-22 &
gro j1655-40 & 20402-02-25-00 & 5-22 &
gx 339-4 & 20181-01-01-00 & 4-22 &
xte j1550-564 & 30191-01-14-00 & 2-13 &
grs 1758-258 & 30149-01-01-00 & 3-21 &
xte j1748-288 & 30185-01-01-00 & 2-21 &
|
the interpretation of fourier spectra in the time domain is critically examined . power density spectra defined and calculated in the time domain are compared with fourier spectra in the frequency domain for three different types of variability : periodic signals , markov processes and random shots . the power density spectra for a sample of neutron stars and black hole binaries are analyzed in both the time and the frequency domains . for broadband noise , the two kinds of power spectrum in accreting neutron stars are usually consistent with each other , but the time domain power spectra for black hole candidates are significantly higher than corresponding fourier spectra in the high frequency range ( 101000 hz ) . comparing the two kinds of power density spectra may help to probe the intrinsic nature of timing phenomena in compact objects .
|
polar codes , invented by arikan , are the first provably capacity - achieving codes with low encoding and decoding complexity .arikan s presentation of polar codes includes a successive cancellation decoding algorithm , which generally does not perform as well as the state - of - the - art error - correcting codes at finite block lengths . to improve the performance of polar codes , tal and vardy devised a list decoding algorithm .the initial work of arikan considers binary symmetric memoryless channels .there have been attempts to study polar codes for other channels , e.g. , the awgn channel . however , there are not many constructions of polar codes for channels with memory. see and references therein .the deletion channel is a canonical example of a non - stationary , non - ergodic channel with memory .it deletes symbols arbitrarily and the positions of the deletions are unknown to the receiver . a survey by mitzenmacher discusses the major developments in the understanding of deletion channels in greater detail .to date , the shannon capacity of deletion channels , in general , remains unknown .however , there have been attempts to find upper and lower bounds on the capacity of deletion channels .our motivation is partly the work of dolecek and anantharam , in which the run length properties of reed - muller ( rm ) codes were exploited to correct a certain number of substitutions together with a _deletion ; our work involves correcting _ erasures _ rather than substitiutions .rm codes and polar codes have similar algebraic structures and therefore polar codes are also potential candidates for correcting single deletions .however , they can not be used directly on deletion channels since the polarization of a channel with memory has not been well - studied .developing polarization techniques for deletion channels is beyond the scope of this study . instead , motivated by decoders that are possibly defective and delete symbols arbitrarily , we consider polar codes over a binary erasure channel ( bec ) and an adversarial version of the deletion channel with one deletion , and provide a list decoding algorithm to successfully recover the original message with high probability as the blocklength of the code tends to infinity . ]( w.h.p . ) .unlike rm codes , polar codes do not have rich run length properties .instead , we use the successive cancellation algorithm for decoding . 
in addition , we provide a detailed analysis of the error probability , which was lacking in .channel cascades were studied previously in but our model has not been previously considered in the literature .we argue that the capacity of the cascade can be achieved ; in constrast , does not discuss capacity issues .[ sec : prelim ] [ sec : twoa ] we consider polar codes of length constructed recursively from the kernel .given an information vector ( message ) where , a codeword is generated using the relation where is the -th kronecker product of and is a bit - reversal permutation matrix , defined explicitly in .the vector is transmitted through independent copies of a binary discrete memoryless channel ( bdmc ) with transition probabilities and capacity .as grows , the individual channels start polarizing .that is , a subset of the channels tend to noise - free channels and others tend to completely noisy channels .the fraction of noise - free channels tends to the capacity .the polarization behavior suggests using the noise - free channels to transmit information bits , while setting the inputs to the noisy channels to values that are known _ a priori _ to the decoder ( i.e. , the frozen bits ) .that is , a message vector consists of information bits and frozen bits ( often set to zero ) where of size is the information set and is the set of frozen bits .this scheme achieves capacity .denote the channel output by and the -th synthesized subchannel with input and output by for .the transition probability matrix is defined as where and is the codeword corresponding to the message .the encoding complexity of polar coding is .arikan proposed a successive cancellation ( sc ) decoding scheme for polar codes .given and the estimates of , the sc algorithm estimates .the following logarithmic likelihood ratios ( llr ) are used to estimate each for : the estimate of an unfrozen bit is determined by the signs of the llrs , i.e. , if and otherwise .it is known that polar codes with sc decoding achieve capacity with decoding complexity of .we suppose that bits are sent over a channel and exactly bits are deleted .we call this a _ -deletion channel_. that is , for bits sent , the decoder only receives bits after deletions and the positions of deletions are not known to the receiver .note that this is not the probabilistic deletion channel in which each symbol is independently deleted with some fixed probability .consider the _ 1-deletion channel _( in the definition in section [ sec : adv ] ) , where exactly one bit is deleted .we suppose that where .a message vector is encoded using the polar encoder and is sent across uses of a bec , each with erasure probability .the output vector is passed through a 1-deletion channel .we denote this cascade of and as and call this a _bec-1-deletion cascade_. this model is shown in fig . [ model ] .the output of is denoted as .note that permits erasures and a single deletion .that is , a message is sent across and a vector is received .a decoder is designed in such a way that w.h.p . 
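a minimal simulator of this cascade is given below; the 0/1/'e' symbol convention and the random choice of the deletion position are our own (an adversary would simply choose the position differently).

```python
import random

def bec_1del_cascade(codeword, eps, rng=random):
    """Pass a list of 0/1 bits through a BEC(eps) and then delete one symbol.

    Returns the received word: length len(codeword) - 1, alphabet {0, 1, 'e'}.
    The deletion position is drawn at random here and is unknown to the decoder.
    """
    erased = [b if rng.random() > eps else 'e' for b in codeword]
    d = rng.randrange(len(erased))          # deletion position
    return erased[:d] + erased[d + 1:]

# example
x = [1, 0, 1, 1, 0, 0, 1, 0]
print(bec_1del_cascade(x, eps=0.3))
```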
, a list ( of linear size in ) containing an estimate of the original message is returned .( 210 , 60 ) ( 3,54 ) ( 98,54 ) ( 98,14 ) ( 3,14 ) ( 190,30 ) ( 48,48 ) ( 48,8 ) ( 138,48 ) ( 138,8 ) ( 5,50)(1,0)40 ( 45,60)(1,0)40 ( 45,40)(1,0)40 ( 45,40)(0,1)20 ( 85,40)(0,1)20 ( 85,50)(1,0)40 ( 125,60)(1,0)40 ( 125,40)(1,0)40 ( 125,40)(0,1)20 ( 165,40)(0,1)20 ( 165,50)(1,0)20 ( 185,50)(0,-1)40 ( 185,10)(-1,0)20 ( 125,20)(1,0)40 ( 125,00)(1,0)40 ( 125,00)(0,1)20 ( 165,00)(0,1)20 ( 125,10)(-1,0)40 ( 45,20)(1,0)40 ( 45,00)(1,0)40 ( 45,00)(0,1)20 ( 85,00)(0,1)20 ( 45,10)(-1,0)40a message is sent over a bec-1-deletion cascade using a polar encoder described in section [ sec : twoa ] and is received . in order to decode , we use the sc algorithm ( refer to section [ sec : twob ] ) .since the position of the deletion is unknown , we first identify a set of vectors , called the _candidate set _ , which contains as a sub - sequence .a nave algorithm to construct the candidate set would be to insert in the locations before and after each symbol of .we then apply the sc algorithm to each vector in the candidate set . for example , suppose and the received vector is .then the following set includes all vectors which contain the subsequence : the size of this set can be further reduced if we notice that inserting at positions is enough to identify all possible messages those can output after a single deletion .this is because of the following : suppose the -th symbol is deleted from . instead of inserting 0 or 1 at position , we insert an erasure symbol . since a polar code correcting ( where ) erasures also corrects erasures w.h.p ., under the sc decoding algorithm , this new length- vector decodes to the correct message w.h.p .no matter which symbol was at position .we state this observation formally : [ pro : cand ] suppose is sent over a bec-1-deletion cascade .( see fig .[ model ] . 
)the size of the candidate set ( constructed above ) is where is the number of erasures present in the received string .the candidate set is where is the received string .suppose that the -th symbol of is .inserting another before the -th symbol forms vector .this vector repeats if we insert again after the the -th symbol .therefore , considering non - erasure bits of and inserting exactly one erasure symbol at positions before and after these non - erasure bits produces unique vectors in the candidate set .since the number of erasure symbols is , the total number of vectors in is .we remark that as , by the law of large numbers and hence where is the erasure probability of the bec .after the construction of the set , the problem reduces to the decoding of each vector in using the sc algorithm .since , we get a list of messages of size at most at the end of the whole decoding procedure .let denote the sc decoding of , and define as the list of messages returned by the set where is the information set .since we insert the erasure symbol at each of the possible positions ( including the deleted position ) , the original message sent belongs to w.h.p .arikan proved that the probability of error vanishes asymptotically for polar codes over any bdmc .a more precise estimate was provided by arikan and telatar who showed that for any , for sufficiently large block lengths .therefore , under sc decoding , vectors in return all possible messages that can produce the string under a single ( adversarial ) deletion .naturally , there can be multiple that belong to the list and it may not be easy to single out the original message .however , by applying a simple pre - coding technique using an -bit crc ( or a code having an parity check matrix ) , the original message can be detected from the list , albeit with some additional probability of error .we describe how to recover the correct message w.h.p . here .recall that we have frozen bits that we usually set to zero . instead of setting all of them to zero, we set frozen bits to zero , where is a small number we optimize in section [ sec : analysis ] . these bits will contain the -bit crc value of the unfrozen bits ( or simply the parity bits ) . to generate a -bit crc, we select a polynomial of degree , called a _crc polynomial _ , having coefficients .we then divide the message ( by treating it as a binary polynomial ) by this crc polynomial to generate a remainder of degree at most , with total number of coefficients .we append these coefficients at the end of the -bit message to generate a -bit vector . to verify that the correct message is received , we perform the polynomial division again to check if the remainder is zero . for more details on the choice of crc polynomials, please refer to .we send these bits across the cascade .this new encoding is a slight variation the original polar coding scheme .also , note that the original information rate is preserved .however , the rate of the polar code is slightly increased to . to summarize, we encode the message of length into a length vector having redundancy where .then we apply the polar coding scheme for the codebook .this will result in a polar code of length and size where only the subset carries information that we wish to transmit .the codeword corresponding to the original message is then passed through the bec-1-deletion channel and outputs a vector . after constructing the set by inserting at each possible positions ,we apply the sc algorithm on . 
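the candidate-set construction of proposition 1 and the crc filtering step just described can be sketched as follows; sc_decode stands for an erasure-channel successive cancellation decoder supplied by the caller, and the crc routine assumes a generator polynomial with a nonzero constant term. the received word is in the same {0, 1, 'e'} alphabet as in the cascade sketch above.

```python
def candidate_set(received):
    """All length-n words obtained by inserting one erasure 'e' into the received
    word; duplicates arising next to existing erasures are removed, so the set
    has size n minus the number of erasures, as in proposition 1."""
    cands, seen = [], set()
    for i in range(len(received) + 1):
        w = tuple(received[:i] + ['e'] + received[i:])
        if w not in seen:
            seen.add(w)
            cands.append(list(w))
    return cands

def crc_remainder(bits, poly):
    """Remainder of the bit sequence (MSB first) under the CRC generator poly
    (bit list, MSB first); for polynomials with a nonzero constant term, a zero
    remainder on (message + appended CRC bits) means the check passes."""
    work = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return work[-(len(poly) - 1):]

def list_decode(received, sc_decode, poly):
    """Run the caller-provided SC decoder on every candidate and keep only the
    messages whose appended CRC checks out."""
    out = []
    for cand in candidate_set(received):
        u = sc_decode(cand)               # information + CRC bits for this candidate
        if u is not None and not any(crc_remainder(u, poly)):
            out.append(u)
    return out
```

in a full implementation the surviving messages would finally be ranked by likelihood, as described above.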
however , not all of these resulting vectors in carry information. we can check this using the initial -bit crc ( or the parity check matrix ) .all vectors which fail under the crc check are removed and we then select the message with the maximum likelihood from the list .suppose denotes the parity check matrix with rows that is being used for adding parity to the bit message .then the set of messages that carries any information can be identified as where is the modified version of ( [ eq : listl ] ) according to the new polar coding scheme defined as and where is the set of parity bits ( is the set of frozen bits ) . if the rows of are chosen uniformly and independently from , the probability that a vector is in is where .that is , a message in is wrongly identified as the original message with probability .however , the true message sent satisfies the parity - check condition .therefore , by the union bound , the total probability that an incorrect message is returned is upper bounded as where is the probability of error of the sc decoding algorithm and for a single deletion . to maintain that ( that is , as the block length grows , converges to ) and the upper bound on in ( [ eq : err ] ) is minimized , we have to choose carefully .for a single deletion , the size of the candidate set and hence w.h.p . from hassani _ et al . _ , the rate - dependent error probability of the polar code for the bec with rate is where , is the complementary gaussian cumulative distribution function , and is the capacity of the channel cascade . from ( [ eq : err ] ) , \label{eqn : two_terms } . \ ] ] it can be verified easily that the first term in the square parentheses in is decreasing and the second term with is increasing in . to optimize the upper bound in, we set the exponents of two terms to be equal ( neglecting the insignificant term ) , i.e. , where we used the fact that .now we find an expression for in terms of the backoff from capacity . to transmit the code at a rate close to the capacity , for a small constant , assume that where since a polar code over the bec 1-deletion cascade achieves the capacity of the bec ; this is a simple consequence of ( * ? ? ?* problem 3.14 ) and the fact that the list size is polynomial .then the rate for large enough .therefore , let . since , .then and hence .since decays as as , . then .therefore , the optimal value of the number of parity bits is this is a rate - dependent choice of ( through ) that simultaneously ensures that and the upper bound on in ( [ eq : err ] ) is minimized .now consider the cascade of a bec and a -deletion channel where is finite .this model can be analyzed using the same techniques presented here .the only difference is the size of the candidate set . by using the same arguments as in the -deletion case ,we construct by inserting erasure symbols at positions and . therefore , the list size .since the models are similar , a crc construction and error probability analysis for the bec--deletion cascade similar to that presented in sections [ sec : crc ] and [ sec : analysis ] respectively can be performed .in addition , we see that even if the list size is , the capacity of the bec is achieved because is still subexponential .the encoding complexity of the bec-1-deletion cascade is same as that for standard polar codes , i.e. 
, .however , the sc decoding algorithm has to be applied to all vectors in the candidate set of size ( cf .[ pro : cand ] ) .thus , the complexity of the decoding algorithm of the bec-1-deletion cascade is and that for the bec--deletion cascade is .although the complexity of the decoding algorithm increases by for each additional deletion , it can still be performed in polynomial time .in this section , we demonstrate the utility of the proposed algorithm by performing numerical simulations .the simulations are carried out in matlab using code provided in with the following parameters .let vary from to .the erasure probability of the bec is .thus , the capacity of the cascade is .we consider three different code rates : and .we fix and the -bit crc polynomial is chosen according to .the error probability is computed by averaging over independent runs .we encode a random length- message using a -bit crc polynomial so that the input of the encoder is a length input vector and the output is an -bit vector .this vector is then transmitted through a bec-1-deletion cascade and received a length- vector .the crc list decoder then computes a list of possible messages given the channel output .[ fig : errprob ] shows that , with a suitable choice of the number of crc bits and crc polynomials , as grows , the list is of size and contains only the original message w.h.p .1 e. arikan , channel polarization : a method for constructing capacity - achieving codes for symmetric binary - input memoryless channels " , _ ieee trans .inform . theory _55 , no . 7 , pp . 3051 - 3073 , jul 2009 .s. h. hassani , k. alishahi and r. l. urbanke , finite - length scaling for polar codes " , _ ieee trans .inform . theory _5875 - 5898 , oct 2014 .i. tal and a. vardy , list decoding of polar codes " , _ ieee trans .inform . theory _2213 - 2226 , may 2015 .e. abbe and a. barron , polar coding schemes for the awgn channel " , _ proceedings of the isit _ , 2011 , pp .194 - 198 .r. wang , j. honda , h. yamamoto and r. liu , construction of polar codes for channels with memory " , _ proceedings of the fall itw _ , jeju island , south korea , 2015 , pp .187 - 191 .m. mitzenmacher , a survey of results for deletion channels and related synchronization channels " , _ probability surveys _ , vol. 6 , pp 1 - 33 , 2009 .r. venkataramanan , s. tatikonda , and k. ramchandran , achievable rates for channels with deletions and insertions " , _ ieee trans .inform . theory _6990 - 7013 , nov 2013 . s. diggavi , m. mitzenmacher and h. d. pfister , capacity upper bounds for the deletion channel " , _ proceedings of the isit _ , 2007 ,1716 - 1720 .l. dolecek and v. anantharam , using reed - muller codes over channels with synchronization and substitution errors " , _ ieee trans .inform . theory _1430 - 1443 , apr 2007 .a. kiely and j. coffey , on the capacity of a cascade of channels " , _ ieee trans .inform . theory _1310 - 1321 , apr 1993 .k. niu and k. chen , crc - aided decoding of polar codes " , _ ieee comm .1668 - 1671 , oct 2012 .p. koopman and t. chakravarty , cyclic redundancy code ( crc ) polynomial selection for embedded networks " , _ international conference on dependable systems and networks _ , 2004 ,145 - 154 .s. h. hassani , r. mori , t. tanaka and r. l. urbanke , rate - dependent analysis of the asymptotic behavior of channel polarization " , _ ieee trans .inform . theory _2267 - 2276 , apr 2013 .a. el gamal and y .- h .kim , network information theory " , _ cambridge university press _ , 2012 .h. vangala , y. 
hong and e. viterbo , efficient algorithms for systematic polar encoding " , _ ieee comm .17 - 20 , jan 2016 .
|
we study the application of polar codes in deletion channels by analyzing the cascade of a binary erasure channel (bec) and a deletion channel. we show how polar codes can be used effectively on a bec with a single deletion, and propose a list decoding algorithm with a cyclic redundancy check for this case. the decoding complexity is , where is the blocklength of the code. an important contribution is an optimization of the amount of redundancy added to minimize the overall error probability. our theoretical results are corroborated by numerical simulations, which show that the list size can be reduced to one and the original message can be recovered with high probability as the length of the code grows. keywords: polar codes, deletions, binary erasure channel, cascade, list decoding, cyclic redundancy check, candidate set
|
the _ voter model _ is an interacting particle system in which individuals ( particles ) of two species , _ red _ and _ blue _ , compete for `` territory '' on a ( locally finite ) graph . at each time , every vertex ( site ) of the graph is occupied by a single particle , either red or blue . at any time , a particle of color at a vertex may spontaneously die , at rate equal to the degree of , and be replaced by a clone of a randomly chosen neighbor .thus , a vertex of color spontaneously flips to the opposite color at rate equal to the number of neighboring vertices of color .see for a formal construction of this process and an exposition of its basic properties . the _ richardson model _ was introduced as a model for the spatial spread of a population in a favorable environment .the environment is once again a locally finite graph . at any time a vertex may be occupied by _ at most _ one particle ( some vertices may be unoccupied ) ; all particles are of the same species .once occupied , a vertex remains occupied forever .each unoccupied vertex is spontaneously occupied at instantaneous rate equal to the number of occupied neighbors . in this paperwe study a hybrid of the voter and richardson models on the integer lattice , which we dub the _ two - species competition model _ , or simply the _ competition model_. the dynamics are as in the voter model , but unlike the voter model , vertices may be unoccupied .an unoccupied vertex is colonized at rate equal to the number of occupied neighbors , as in the richardson model ; at the instant of first colonization , the vertex flips to the color of a randomly chosen occupied neighbor .once occupied , a vertex remains occupied forever , but its color may flip , as in the voter model : the flip rate is equal to the number of neighbors occupied by particles of the opposite color .the state of the system at any time is given by the pair , where and denote the set of sites occupied by red and blue particles , respectively .note that the set of occupied sites evolves precisely as in the richardson model , and so the growth of this set is governed by the same _ shape theorem _( see section [ sec : richardson ] below ) as is the richardson model .our primary interest is in the possibility of long - term coexistence of the two species , given initial conditions in which only finitely many vertices are occupied ( with at least one vertex of each color ) .it is clear that at least one of the two species must survive , and that for any nondegenerate finite initial configuration of colors there is positive probability that red survives and positive probability that blue survives .however , it is not at all obvious ( except perhaps in the case where the ambient graph on which the competition takes place is the integer lattice see section [ sec:1d ] below ) that the event of mutual survival has positive probability .our main result concerns the competition model on the graph .say that a compact , convex set with boundary is _ uniformly curved _ if there exists such that for every point there is a ball of radius with on its surface that contains .[ newman : theorem1 ] if the limit shape for the richardson model is uniformly curved , then for any nondegenerate initial finite configuration the event of mutual survival of the two species has positive probability .the proof will be carried out in sections [ sec : preliminaries][sec : proof ] below .theorem [ newman : theorem1 ] is by no means a complete solution to the coexistence problem , because it remains 
unknown whether the limit shape for the richardson model is uniformly curved , or even if its boundary is strictly convex .nevertheless , simulations give every indication that it is , and suggests a possible explanation of what lies behind the strict convexity of .the two - species complete model is superficially similar to the _ two - type richardson model_ studied by haggstrom and pemantle , but differs in that it allows displacement of colors on occupied sites : in the two - type richardson model , once a vertex is occupied by a particle ( either red or blue ) it remains occupied by that color forever .the main result of is similar to theorem [ newman : theorem1 ] , but requires no hypothesis about the richardson shape : it states that mutual unbounded growth has positive probability .because no displacements are allowed , the behavior of the two - type richardson model is very closely tied up with the first - passage percolation process with exponential edge passage times .the two - species competition model is also closely related to first - passage percolation , but the connection is less direct , because the possibility of displacements implies that not only the first passages across edges play a role in the evolution .we have run simulations of the competition model with initial configuration and .figure [ 800competition ] shows two snap shots of the same realization of the process taken at the times when the region occupied by both types , , hits the boundary of the rectangles \times [ -300,\ 300] ] respectively .observe that the overall shape of the red and the blue clusters did not change significantly .we believe that the shape of the regions occupied by the red and blue types stabilizes as times goes to infinity . for any subset ,define where denotes distance in the on . forany subset and any scalar , let . there exist random sets and such that with probability one if this is true , we expect that the limit sets and will be finite unions of angular sectors , as the simulation results shown in figure [ 800competition ] suggest .the sizes and directions of these angular sectors ( and even their number ) will , we expect , be random , with distributions depending on the initial configuration .this is illustrated by simulation results summarized in figure [ mixedcompetition ] with initial configuration three time progressive snap shots of the process were taken .the plots in the figure [ mixedcompetition ] suggest that stabilization of the shape was taking place on the considered time interval . [cols="^,^ " , ] xthe coexistence problem for the competition model in one dimension is considerably simpler than in higher dimensions . since the limit shape of the richardson model in one dimension is an interval , no auxiliary hypothesis is needed .[ proposition:1d ] for any nondegenerate finite initial configuration on , the event of mutual survival in the two - species competition model has positive probability . 
without loss of generality, we may assume that the initial configuration consists of a finite interval of red sites with rightmost point and a finite interval of blue sites with leftmost point , since [ a translate of ] such a configuration may be reached in finite time , with positive probability , from _ any _ nondegenerate initial configuration .let and be the left- and right - most occupied sites ( of either color ) at time , and let be the leftmost blue site .note that as long as , there will be both red and blue sites : all sites to the left of are red , and all sites to the right are blue .each of the processes and is a pure jump process , with jumps of size occurring at rate ; hence , with probability one , as , the process behaves , up to the time of first exit from , as a continuous - time simple nearest - neighbor random walk on the integers .consequently , there is positive probability that never exits the interval .but on this event , both species survive .this simple argument clearly shows what the difficulty in higher dimensions will be : in one dimension , the interface between ( connected ) red and blue clusters is just a point ; but in higher dimensions , it will in general be a hypersurface , whose time evolution will necessarily be somewhat complicated .the richardson model , the voter model , and the two - species competition model all admit _ graphical contructions _ using _ percolation structures_. such constructions make certain comparison arguments and duality relations transparent .we briefly review the construction here , primarily to emphasize that the same percolation structure can be used to simultaneously build versions of all three processes with all possible initial configurations .see , for instance , for further details in the case of the richardson model and the voter model .the _ percolation structure _ is an assignment of independent , rate- poisson processes to the directed edges of the lattice .( for each pair of neighboring vertices , there are two directed edges and . ) above each vertex is drawn a timeline , on which are placed marks at the occurrence times of the poisson processes attached to directed edges emanating from ; at each such mark , an arrow is drawn from to .a _ directed path _ through the percolation structure may travel upward , at speed , along any timeline , and may ( but does not have to ) jump across any outward - pointing arrow that it encounters .a _ reverse path _ is a directed path run backward in time : thus , it moves downward along timelines and jumps across inward - pointing arrows .voter - admissible _ path is a directed path that does not pass any inward - pointing arrows . observe that for each vertex and each time there is a unique voter - admissible path beginning at time and terminating at : its reverse path is gotten by traveling downward along timelines , starting at , jumping across all inward - pointing arrows encountered along the way ._ richardson model : _ a version of the richardson model with initial configuration is obtained by setting to be the set of all vertices such that there is a directed path in the percolation structure that begins at for some and ends at ._ voter model : _ a version of the voter model with initial configuration , is gotten by defining and to be the set of all vertices such that the unique voter - admissible path terminating at begins at for some . 
_ two - species competition model : _ fix an initial configuration , .erase all arrows that lie _ only _ on paths that begin at points such that ; denote the resulting sub - percolation structure .define ( respectively , ) to be the set of all vertices such that there is a voter - admissible path relative to that ends at and starts at with ( respectively , ) . the graphical construction yields as by - products comparison principles for the richardson , voter , and competition models .first , the set of vertices occupied by either red or blue particles at time in the competition model coincides with the set of occupied vertices in the richardson model when .second , if is the voter model with initial configuration , and is the competition model with initial configuration , , then for all , how long does it take for a red vertex to be overrun by blue ? clearly , for either the voter model or the competition model the answer will depend , at least in part , on how far away the nearest blue vertices are .the comparison principle implies that , for any given value of the distance to the nearest blue vertex , the worst case ( for either model ) is the voter model with initial configuration and , where denotes the disk of radius centered at ( more precisely , its intersection with the lattice ) .[ lemma : invasion ] fix , and denote by the state of the voter model at time .there exist constants ( depending on ) such that for all and all $ ] , if contains the disk of radius centered at the vertex , then * remark .* this holds for _ any _ norm on , not just the euclidean norm : in particular , it holds for the _ richardson norm _ defined below .the constant may , of course , depend on the norm .the dual process of the voter model is the coalescing random walk ( see or ) .thus , the probability that the vertex is blue at time coincides with the probability that a continuous - time simple random walker started at at time will land in the set at time .( this is not difficult to deduce directly from the graphical construction above : the event occurs if and only if the reverse voter - admissible path started at will terminate at for some ; but the reverse voter - admissible path is a simple random walk . ) hence , if then this probability is dominated by the probability that the continuous - time simple random walk exits the ball by time .the first - order asymptotic behavior of the richardson model on the integer lattice is described by the _ shape theorem _ .denote by the set of vertices of that are occupied at time , and by the probability measure describing the law of the process given the initial condition . for any subset ,define where denotes distance in the on . forany subset and any scalar , let .[ the shape theorem ] [ richshape ] there exists a nonrandom compact convex set , invariant under permutation of and reflection in the coordinate hyperplanes , and with non - empty interior , such that for any finite initial configuration and any , with one , eventually ( i.e. , for all sufficiently large ) the exact shape of the limiting set remains unknown .a simple argument shows that is convex , but nobody has succeeded in proving that it is _ strictly _ convex .let be the norm on associated with the shape set , that is , for , . 
that this is in fact a norm follows from the convexity of .the shape theorem is equivalent to the statement that the set of occupied sites grows at speed one , relative to the norm , in every direction .the richardson model admits a description as a first passage percolation model , as follows . to each edge of the lattice ,attach a mean one exponential random variable , the `` passage time '' , in such a way that the passage times of distinct edges are mutually independent .for any self - avoiding path , define the traversal time to be the sum of the passage times of the edges in . for any finite set of vertices and any vertex ,define the passage time from to to be the infimum of the traversal times of all self - avoiding paths connecting to .a version of the richardson model with initial configuration is given by the first - passage percolation representation gives simultaneous realizations of richardson evolutions for all initial configurations .since traversal times of paths are the same backwards and forwards , the following _ duality property _ is immediate : for any finite subsets and any , kesten and alexander have established large deviation results for the passage times in first passage percolation that specialize to the richardson model as follows .[ kesten ] there exist constants and such that for any , and the hypothesis of theorem [ newman : theorem1 ] is that the richardson shape is _ uniformly curved _ , that is , that there exists such that for each there is a ( euclidean ) ball of radius containing with on its surface .denote by the natural projection onto the boundary of the richardson shape , that is , for any such that , [ newman : lemma : circledistance ] suppose that is uniformly curved , then there exists a constant such that , for all and let be the euclidean norm ( -norm ) on .any two norms on are equivalent , and so the euclidean norm is equivalent to the richardson norm : in particular , there is a constant such that , for any , let be the tangent hyperplane to at . for any , denote by the ( orthogonal ) projection of on . since is uniformly curved ( and hence strictly convex ) , elementary trigonometric observationsimply that , since and is on the tangent line , there exists a constant that depends only on and such that for all such s hence , which proves the inequality of the lemma .we begin by showing that it suffices to restrict attention to a special class of initial configurations , which we dub _ sliced richardson shapes_. these are obtained as follows : run the richardson model ( starting from the initial configuration with two adjacent occupied sites , one at the origin ) for a ( large ) time , and let be the occupied set .let be the subset of consisting of all points with positive first coordinates , and let .observe that , starting from _ any _ nondegenerate finite initial configuration the competition model can evolve to a sliced richardson shape in finite time , with positive probability .( this will occur if , following the first time that there are adjacent red and blue sites , only these sites reproduce , and only on their sides of the hyperplane separating them . )thus , it suffices to prove that for all sufficiently large , with positive probability the sliced richardson shape is such that here and in the sequel will denote the probability measure governing the evolution of the competition model under the initial condition , . 
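the first passage percolation representation also gives a convenient way to generate richardson evolutions numerically: attach independent mean-one exponential passage times to the edges, compute the passage times from the origin with dijkstra's algorithm, and read off the occupied set at time t as the set of sites whose passage time is at most t. the python sketch below runs on a finite box of the square lattice; the box size, the time and the random seed are illustrative.

```python
import heapq
import random

random.seed(0)
L = 30                                   # half-width of the finite box

def neighbors(v):
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

edge_time = {}                           # i.i.d. exp(1) passage time per undirected edge
def passage_time(u, v):
    e = (min(u, v), max(u, v))
    if e not in edge_time:
        edge_time[e] = random.expovariate(1.0)
    return edge_time[e]

# dijkstra from the origin: T[v] is the infimum over paths of the total
# traversal time, i.e. the time at which v becomes occupied.
T = {(0, 0): 0.0}
heap = [(0.0, (0, 0))]
while heap:
    t, u = heapq.heappop(heap)
    if t > T.get(u, float('inf')):
        continue
    for w in neighbors(u):
        if abs(w[0]) > L or abs(w[1]) > L:
            continue
        nt = t + passage_time(u, w)
        if nt < T.get(w, float('inf')):
            T[w] = nt
            heapq.heappush(heap, (nt, w))

t = 10.0
occupied = [v for v, tv in T.items() if tv <= t]
print(len(occupied), "sites occupied by time", t)
```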
by the bonferroni inequality, it suffices to prove that the idea behind the proof of ( [ newman : equationinitialconfigurationred ] ) is this : if the initial condition is such that and are , approximately , the intersections of with complementary angular sectors in based at the origin , for large , then at time the sets should , with high probability , be approximately the intersections of with the same angular sectors .this is because ( 1 ) the shape theorem for the richardson model implies that should be close to ; ( 2 ) the uniform curvature of implies that the _ first _ occupations of vertices in and should ( except for those near the boundaries ) be by red and blue , respectively ; and ( 3 ) lemma [ lemma : invasion ] implies that , once a region is totally occupied by red , it must remain so ( except near its boundary ) for a substantial amount of time afterward .the key step is to show that once one of the species ( say red ) has occupied an angular sector in the richardson shape , it is very unlikely for the opposite species ( blue ) to make a large incursion into this sector for some time afterward . henceforth , let be the metric associated with the richardson norm . for any set and any vertex , define the distance between and to be the infimum of the distances for all vertices in . for any point and any , denote by the disk of radius centered at relative to the metric .( we shall not attempt to distinguish between open and closed disks , as this distinction will not matter in any of the estimates . ) for denote by the annular region . for each and any , define the _ angular sector_ of aperture centered at by fix , , and such that , and let be angular sectors with common center and apertures , respectively .fix , and define [ lemma : stabilization ] there exist constants such that the following is true , for any . if the initial configuration is such that and , then to prove ( [ eq : stabilization ] ) we find exponential upper bounds on [ newman : distancecomparisonequation ] for all sufficiently large and for all [ proof .] first , observe that by lemma [ newman : lemma : circledistance ] , for every such that , we have also , these inequalities imply claim [ newman : distancecomparisonequation ] for all and all sufficiently large . next , if is such that , then also , by lemma [ newman : lemma : circledistance ] , for all , these two inequalities imply claim [ newman : distancecomparisonequation ] for all and all sufficiently large .[ claim:2 ] with probability as , for every , every site of the ball will be colonized by time . in particular , there exist constants and ( not depending on ) such that for every [ proof .] fix and let . for every , notice that the number of sites in is of order . 
by theorem [ kesten ] and ( [ newman : c_n : distance ] ), it follows that for some and ( not depending on ) hence , with probability as , for every , every site of the ball will be colonized at time .define the boundary of a set as the set of all that have at least one nearest neighbor that is not in .next , for a set let be the first time at which the blue species reaches .[ claim3 ] with probability as , for every , the blue species will not reach the ball by time .in particular , there exist constants and such that for every notice that by claim [ newman : distancecomparisonequation ] , for large we have .hence , obviously , for every and , we have apply theorem [ kesten ] to each pair of such vertices to get : the number of vertices in is of order , and the number of vertices in is of order at most .hence , this finishes the proof of claim [ claim3 ] .[ claim4 ] with probability as , there are no blue particles in at time . in particular , there exist constants and such that by claim [ claim3 ] , for all with , next , for all with , claim [ claim:2 ] and claim [ claim3 ] imply that for some and hence , by lemma [ lemma : invasion ] , there exist constants and such that for every such we have the number of vertices in is of order . thus , by combining ( [ r_1:tau_x > delta n ] ) and ( [ r_1:tau_x < delta n ] ) , we get [ claim5 ] with probability as , the blue species will not reach the set by time .in particular , there exist constants and such that for large the distance between the sets and is greater than .the number of vertices on the boundary of is of order . using the same line of argument as in the proof of claim[claim3 ]we get now , claim [ claim4 ] and claim [ claim5 ] imply ( [ eq : stabilization ] ) and finish the proof of lemma [ lemma : stabilization ] : let be the set of sites occupied by the richardson evolution ( started from the default initial configuration ) at time .fix and , and set fix , and for each define events by the kesten - alexander large deviation theorems ( theorem [ kesten ] ) , and so , for sufficiently large , the probability is nearly that the configuration will be such that fix an initial configuration so that the preceding estimate holds for , and use this to construct the split richardson shape as in section [ ssec : strategy ] above : and are the subsets of with positive and nonpositive first coordinates , respectively .since the union of the red and blue sites in the competition model evolves as the richardson model , it follows from ( [ eq : ldsum ] ) that if the initial configuration is , then with probability in excess of , for all .denote by the event that ( [ eq : ka ] ) holds for all . on the event , the union of the red and blue regions will , at each time , fill a region close enough to a richardson shape that the estimate ( [ eq : stabilization ] ) will be applicable whenever the red and blue populations are restricted ( at least approximately ) to angular sectors .thus , define sequences of concentric angular sectors with apertures such that and with chosen so that is the halfspace consisting of all points in with positive first coordinates . here as in section [ ssec : stabilization ] above .note that the second equality guarantees that .this in turn , together with the fact that the sequence is increasing , implies that that the angular sectors are nested : .moreover , because is an exponentially growing sequence and , provided is sufficiently large .therefore , the intersection is an angular sector with nonempty interior . 
finally , for each define to be the event that at time there are no blue sites in outside the ( richardson norm ) disk of radius .( for , set . ) on the event , the set of all occupied sites is close to , and the red sites fill at least the outer layer of this set in the sector .we claim that for all sufficiently large , to see this , let be the smallest index such that occurs . since can only occur on , inequality ( [ eq : ldsum ] ) provides a bound on the sum of the first of these terms , and lemma [ lemma : stabilization ] bounds the second .thus , for sufficiently large , this proves ( [ eq : final ] ) . on the event , the red species must at time occupy at least the outer layer of the occupied set in the angular sector . consequently , on the event , red survives ! this proves ( [ newman : equationinitialconfigurationred ] ) .the preceding argument , in addition to proving that the event that mutual survival has positive probability , also goes part of the way towards proving conjecture [ conj ] : if at a large time one of the colors ( say red ) occupies the outer layer of an angular sector , then with conditional probability approaching as it will occupy a slightly smaller angular sector forever after . since the same is true for the other species , it follows that in at least some evolutions red and blue will each occupy angular sectors . unfortunately , it remains unclear what happens near the interface at large times .although the preceding arguments show that neither red nor blue can make too deep an incursion into the other species sector(s ) , it may be possible for one to repeatedly make small incursions across the interface that engender more ( and necessarily thinner ) angular sectors in its zone of occupation .thus , it may be that the limit shapes exist , but consist of countably many angular sectors .finally , it remains unclear if stabilization must eventually occur on the event of mutual survival , that is , if it is necessarily the case that at large times the outer layer of the occupied region must segregate into well - defined red and blue zones .since local coalescence occurs in the voter model , one naturally expects that the same will be true in the competition model ; thus far , we have been unable to prove this .
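the behaviour discussed above can at least be explored numerically. a minimal python sketch of a direct simulation of the competition dynamics is the following; it uses the fact that the transition rates are reproduced by letting every directed edge with occupied tail ring at rate one and painting the head with the colour of the tail. the box size, the time horizon, the seed and the two-site initial configuration are illustrative.

```python
import random

random.seed(1)
L = 50                                          # half-width of the finite box
EMPTY, RED, BLUE = 0, 1, 2
state = {(x, y): EMPTY for x in range(-L, L + 1) for y in range(-L, L + 1)}
state[(0, 0)] = RED                             # two adjacent occupied sites
state[(1, 0)] = BLUE

def neighbors(v):
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# each directed edge (u, v) with u occupied carries an independent rate-1
# poisson clock; when it rings, v takes the colour of u.  this reproduces both
# the colonisation rate (number of occupied neighbours) and the flip rate
# (number of neighbours of the opposite colour).
t, t_max = 0.0, 12.0
occupied = [(0, 0), (1, 0)]
while t < t_max:
    rate = 4 * len(occupied)                    # number of active directed edges
    t += random.expovariate(rate)
    u = random.choice(occupied)
    v = random.choice(neighbors(u))
    if abs(v[0]) > L or abs(v[1]) > L:
        continue                                # clock on an edge leaving the box
    if state[v] == EMPTY:
        occupied.append(v)
    state[v] = state[u]

reds = sum(1 for c in state.values() if c == RED)
blues = sum(1 for c in state.values() if c == BLUE)
print("red sites:", reds, "blue sites:", blues)
```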
|
we consider a two-type stochastic competition model on the integer lattice. the model describes the spatial evolution of two "species" competing for territory along their boundaries. each site of the space may contain only one representative (also referred to as a particle) of either type. the spread mechanism for both species is the same: each particle produces offspring independently of the other particles and can place them only at neighboring sites that are either unoccupied or occupied by particles of the opposite type. in the second case, the old particle is killed by the newborn. the rate of birth for each particle is equal to the number of neighboring sites available for expansion. the main problem we address concerns the possibility of long-term coexistence of the two species. we show that if we start the process with finitely many representatives of each type, then, under the assumption that the limit set in the corresponding first passage percolation model is uniformly curved, there is positive probability of coexistence. keywords: coexistence, first passage percolation, shape theorem.
|
the problem of explaining the emergence of self - organized , macroscopic , patterns from a limited set of rules governing the mutual interaction of a large assembly of microscopic actors , is often faced in several domains of physics and biology .this challenging task defines the realm of complex systems , and calls for novel paradigms to efficiently intersect distinct expertise .population dynamics has indeed attracted many scientists and dedicated models were put forward to reproduce in silico the change in population over time as displayed in real ecosystems ( including humans ) .two opposite tendencies are in particular to be accomodated for . on the one hand , microscopic agents do reproduce themselves with a specific rate , an effect which translates into a growth of the population size . on the other ,competition for the available resources ( and death ) yields a compression of the population . in a seminal work by verhulst , these ingredients were formalized in the differential equation : is the so called carrying capacity and identifies the maximum allowed population for a selected organism , under specific environmental conditions .the above model predicts an early exponential growth , which is subsequently antagonized by the quadratic contribution , responsible for the asymptotic saturation .the adequacy of the verhulst s model was repeatedly tested versus laboratory experiments : colonies of bacteria , yeast or other simple organic entities were grown , while monitoring the time evolution of the population amount . in some cases , an excellent agreement with the theory was reported , thus supporting the biological validity of eq .( [ eq : ver ] ) .conversely , the match with the theory was definetely less satisfying for e.g. fruit flies , flour beetles and in general for other organisms that rely on a more complex life cycle .for those latter , it is necessary to invoke a somehow richer modelling scenario which esplicitly includes age structures and time delayed effects of overcrowding population . fora more deailed account on these issues the interested reader can refer to the review paper and references therein .clearly , initial conditions are crucial and need to be accurately determined .an error in assessing the initial population , might reflect in the estimates of the parameters and , which are tuned so to adjust theoretical and experimental data .in general , the initial condition relative to one specific experimental realization could be seen as randomly extracted from a given distribution .this , somehow natural , viewpoint is elaborated in this paper and its implications for the analysis of the experiments thoroughly explored .in particular we shall focus on the setting where independent population communities are ( sequentially or simultaneously ) made to evolve .the experiment here consists in measuring collective observables , as the average population and associated momenta of the ensemble distribution .as anticipated , sensitivity to initial condition do play a crucial role and so need to be properly addressed when aiming at establishing a link with ( averaged ) ensemble measurements , or , equivalently , drawing reliable forecast . to this end, we will here develop two analytical approaches which enable us to reconstruct the sought distribution .the first , to which section [ sec : momevo ] is devoted , aims at obtaining a complete description of the momenta , as e.g. 
the mean population amount .this is an observable of paramount importance , potentially accessible in real experiments .the second , discussed in section [ sec : pdfevo ] , introduces a master equation which rules the evolution of the relevant distribution .it should be remarked that this latter approach is a priori more general then the former , as the momenta can in principle be calculated on the basis of the recovered distribution. however , computational difficulties are often to be faced which make the analysis rather intricate . in this perspectivethe two proposed scenario are to be regarded as highly complementary . in the following , for practical purposes, we shall assume each population to evolve as prescribed by a verhulst type of equation .the methods here developed are however not limited to this case study but can be straightforwardly generalized to settings were other , possibly more complex , dynamical schemes are put forward .imagine to label with the population relative to the -th realization , belonging to the ensemble of independent replica .as previosuly recalled , we assume each to obey a first order differential equation of the logistic type , namely : that can be straightforwardly obtained from ( [ eq : ver ] ) by setting and renaming the time . the initial condition will be denoted by .a natural question concerns the expected output of an hypothetic set of experiments constrained as above .more concretely , can we describe the distribution of possible solutions , once the collection of initial data is entirely specified ?the -th momentum associated to the discrete distribution of repeated measurements acquired at time reads : to reach our goal , we introduce the _ time dependent moment generating function _, , this is a formal power series whose taylor coefficients are the momenta of the distribution that we are willing to reconstruct , task that can be accomplished using the following relation : by exploiting the evolution s law for each , we shall here obtain a partial differential equation governing the behavior of .knowing will eventually enables us to calculate any sought momentum via multiple differentiation with respect to as stated in ( [ eq : momen ] ) . deriving ( [ eq : mmom2 ] ) and making use of eq .( [ eq : ver1 ] ) immediately yields : on the other hand , by differentiating ( [ eq : formf ] ) with respect to time , one obtains : where used has been made of eq .( [ eq : xmom ] ) .we can now re - order the terms so to express the right hand side as a function of ] and renaming the summation index , , one finally gets ( note the sum still begins with ) : ] and finally obtain the following non homogeneous linear partial differential equation : such an equation can be solved for close to zero ( as in the end of the procedure we shall be interested in evaluating the derivatives at , see eq .( [ eq : momen ] ) ) and for all positive . to this endwe shall specify the initial datum : i.e. the initial momenta or their distribution . before turning to solve ( [ eq : forf ] ) ,we first simplify it by introducing then for any derivative where or , thus ( [ eq : forf ] ) is equivalent to with the initial datum this latter equation can be solved using the _ method of the characteristics _ , here represented by : which are explicitly integrated to give : where denotes at . then the function defined by : is the solution of ( [ eq : forf ] ) , restricted to the characteristics .observe that , so ( [ eq : funcu ] ) solves also the initial value problem. 
finally the solution of ( [ eq : initdatf ] ) is obtained from by reversing the relation between and , i.e. : where is the value of the integral in the right hand side of ( [ eq : funcu ] ) .this integral can be straightforwardly computed as follows ( use the change of variable ) : which implies according to ( [ eq : soluf ] ) the solution is then from which straightforwardly follows : as anticipated , the function makes it possible to estimate any momentum ( [ eq : momen ] ) . as an example , the mean value correspond to setting , reads : \big |_{\xi=0}\nonumber\\ & = & \frac{e^{t}}{1-e^{t}}\phi(1-e^{t})\ , .\end{aligned}\ ] ] in the following section we shall turn to considering a specific application and test the adequacy of the proposed scheme .in this section we will focus on a particular case study in the aim of clarifying the potential interest of our findings .the inital data ( i.e. initial population amount ) are assumed to span uniformly a bound interval \\0 & otherwise}\ , , \ ] ] and cosequently the initial momenta are : hence the function as defined in ( [ eq : inidat ] ) takes the form : a straightforward algebraic manipulation allows us to re - write ( [ eq : phiud ] ) as follows : thus we can now compute the time dependend moment generating function , , given by ( [ eq : solff ] ) as : \ , , \ ] ] and thus recalling ( [ eq : momen ] ) we get for large enough times , the distribution of the experiments outputs is in fact concentrated around the asymptotic value with an associated variance ( calculated from the above momenta ) which decreases monotonously with time . in fig .[ fig:1mom ] direct numerical simulations are compared to the analytical solution ( [ eq : ameda2t]a ) , returning a good agreement .a naive approach would suggest interpolating the averaged numerical profile with a solution of the logistic model whose initial datum acts as a free parameter to be adjusted to its best fitted value . as testified by visual inspection of fig .[ fig:1mom ] this procedure yields a significant discrepancy , which could be possibly misinterpreted as a failure of the underlying logistic evolution law . for this reason , and to avoid drawing erroneous conclusions when ensemble averages are computed , attention has to be payed on the role of initial conditions . .the ( blue ) solid line stands for direct simulations averaged over independent realizations .the ( green ) dashed line represents the analytical solution ( [ eq : ameda2t]a ) .the ( red ) dot - dashed line is the solution of the logistic eq .( [ eq : ver1 ] ) , where the initial datum is being adjusted to the best fit value .inset : the solid ( resp . dashed ) line represents the difference between the analytical ( resp .fitted ) and numerical curves . ] in the preceding discussion the role of initial condition was elucidated . in a more general setting onemight imagine , the logistic parameter , to be an unknown entry to the model ( see eq .( [ eq : ver1 ] ) ) .one could therefore imagine to proceed with a fitting strategy which adjusts both and so to match the ( averaged ) data .alternatively , and provided the distribution of initial conditions is assigned ( here assumed uniform ) , one could involve the explicit solution ( [ eq : ameda2t]a ) where time is scaled back to ist original value : and let the solely parameter to run freely so to search for the optimal agreement with the data . 
as an example, we perfomed repetead numerical simulations of the logistic model with parameter and intial data uniformly distributed in $ ] .using the straightforward solution of the logistic equation where and are adjusted , returns .the analysis based on ( [ theory_r ] ) leads to , which is definitely closer to the true value .the above discussion is rather general and clearly extends beyond the uniform distribution case study .the analysis can be in fact adapted to other settings , provided the distribution of initially allowed population amount is known .we shall here briefly discuss the rather interesting case where a normal distribution is to be considered .let us assume that are random normally distributed values with mean and standard deviation , one can compute all the intial momenta as : assuming , to be negligible with respect to , the function specifying the initial datum in eq .( [ eq : inidat ] ) reads : \ , .\ ] ] collecting together the terms for we obtain : while the remaining terms read : it is then easy to verify that their contributution to the required funcion results in to proceed further we again calculate the derivatives of ( defined through the function ) , evaluate them at , and eventually get the evolution of in time , for all .as opposed to the above procedure , one may focus on the distribution function of expected outputs , rather then computing its momenta .the starting point of the analysis relies on a generalized version of the celebrated liouville theorem .this latter asserts that the phase - space distribution function is constant along the trajectory of the system . for a non hamiltonian system this condition results in the following equation ( for convenience derived in the appendix [ sec : app ] ) for the evolution of the probability density function under the action of a generic ordinary differential equation , here represented by the vector field : where . for the case under inspectionthe vector field reads and hence .thus , introducing eq . ( [ eq : pdfevolv ] ) can be cast in the form : to solve this equation we use once again the methods of characteristics , which are now solutions of , namely : the solution of ( [ eq : alphapdfeq ] ) is hence : where is related to the probability distribution function at and must be evaluated at , seen as a function of .the integral can be computed as follows : such an expression has to be introduced into ( [ eq : solf ] ) once we explicit for as : hence : and finally back to the original : which stands for the probability density function which describes for all the expected distribution of s . 
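a monte carlo check of this construction is straightforward: draw the initial data from the gaussian, propagate every realization with the explicit solution of the logistic equation, and build the empirical distribution at the desired times. the python sketch below assumes the normalized form dx/dt = x(1-x) for eq. ([eq:ver1]); the ensemble size and the mean and standard deviation of the initial data are illustrative.

```python
import math
import random

random.seed(2)

def logistic(x0, t):
    # explicit solution of dx/dt = x(1 - x), assumed normalized form of eq. (ver1)
    return x0 * math.exp(t) / (1.0 + x0 * (math.exp(t) - 1.0))

N = 100000                       # ensemble size
mu, sigma = 0.1, 0.02            # gaussian initial data (illustrative values)
x0s = [random.gauss(mu, sigma) for _ in range(N)]

for t in (0.0, 1.0, 2.0, 4.0):
    xs = [logistic(x0, t) for x0 in x0s]
    mean = sum(xs) / N
    var = sum((x - mean) ** 2 for x in xs) / N
    # a crude 20-bin histogram of the empirical distribution, to be compared
    # with the analytical probability density function derived above
    lo, hi = min(xs), max(xs)
    bins = [0] * 20
    for x in xs:
        k = min(int((x - lo) / (hi - lo + 1e-12) * 20), 19)
        bins[k] += 1
    print("t = %.1f  mean = %.4f  var = %.2e" % (t, mean, var))
    print("  histogram:", bins)
```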
in fig .[ fig : pdfevolvnorm ] we compare the analytical solutions ( [ eq : ffinal ] ) with the numerical simulation of the logistic model ( [ eq : ver1 ] ) under the assumption of initial data normally distributed with mean and variance .( green online ) , ( red , online ) , ( blue online ) .the lines represent the corresponding analytical solution ] notice that having calculated the distribution will enable in turn , at least in principle , to to calculate all the associated momenta .forecasting the time evolution of a system which obeys to a specifc governing differential equation and is initialized as follows a specific probability distribution , constitutes a central problem in several domains of applications .assume for instance a set of independent measurements to return an ensemble average which is to be characterized according to a prescribed model .biased conclusion might result from straightforward fitting strategies which do not correctly weight the allowed distribution of initial condition . in this paperwe address this problem by providing an exact formula for the time evolution of momenta and probability distribution function of expected measurements , which is to be invoked for a repeaded set of indipendent experiments .though general , the method is here discussed with reference to a simple , demonstrative problem of population dynamics .we wish to thank m. villarini for several discussion and , in particular , for suggesting eq .( [ eq : therelat ] ) .let be a vector field to which we associate the ordinary differential equation : where is the phase space .suppose to define a probability density function of the initial data on .namely we have a function defined in the phase space , such that for all , denotes the probability that a randomly drawn initial datum will belong to and .we are interested in determining for any , the probability that a solution of ( [ eq : ode1 ] ) will fall in a open set .let us call such probability , by continuity we must have and for all . for any , denotes the probability to find a point in at time .we can then assume that this probability does not change if the set is transported by the flow of ( [ eq : ode1 ] ) , where , being the flow at time of the vector field .namely the change of coordinates allows to rewrite the previous relation as follows : being the jacobian of the change of variables .the relation ( [ eq : step2 ] ) should be valid for any set , thus : for all and for all .deriving with respect to and evaluating the derivative at we get the required relation ( recall ) : 99 j.d .murray , mathematical biology : an introduction , springer ( 1989 ) .verhulst , _ notice sur la loi que la popolation poursuit dans son accroissement _ , correspondance mathmatique et physique , * 9 * 113 - 121 ( 1838 ) c.j .krebs,_ecology : the experimental analysis of distribution and abundance _ , harper and row , new york ( 19729 s.h .strogatz , _ non linear dynamics and chaos _ , westview press ( 2000 )
|
we here discuss the outcome of a hypothetical experiment in population dynamics, where a set of independent realizations is made available. the importance of the ensemble average is clarified with reference to the recorded time evolution of key collective indicators. the problem is tackled here for the logistic case study. theoretical predictions are compared to numerical simulations.
|
rough set theory , proposed by pawlak , is an extension of set theory for the study of intelligent systems characterized by insufficient and incomplete information . in theory ,rough sets have been connected with matroids , lattices , hyperstructure theory , topology , fuzzy sets , and so on .rough set theory is built on an equivalence relation , or to say , on a partition .but equivalence relation or partition is still restrictive for many applications . to address this issue ,several meaningful extensions to equivalence relation have been proposed . among them , zakowski has used coverings of a universe for establishing the covering based rough set theory .many scholars have done deep researches on this theory , and some basic results have been presented .neighborhood is an important concept in covering based rough set theory .many scholars have studied it from different perspectives .lin augmented the relational database with neighborhood .yao presented a framework for the formulation , interpretation , and comparison of neighborhood systems and rough set approximations . by means of consistent function based on the concept of neighborhood , wang et al . dealt with information systems through covering based rough sets .furthermore , the concept of neighborhood itself has produced lots of meaningful issues as well , and it is one of them that under what condition neighborhoods induced by a covering are equal to the covering itself . in paper , wang et al . provided a necessary and sufficient condition about this issue . in this paper , through a counter - example , we firstly point out that the necessary and sufficient condition provided by wang et al .second , we propose the concepts of repeat degree and core block , and then study some properties of them .third , we propose the concept of invariable covering based on core block . andby means of invariable covering , we present a necessary and sufficient condition for neighborhoods induced by a covering to be equal to the covering itself .fourth , we concentrate on the inverse issue of computing neighborhoods by a covering , namely giving an arbitrary covering , whether or not there exists another covering such that the neighborhoods induced by it is just the former covering . by means of a property of neighborhoods obtained by liu et al. and us independently , we present a necessary and sufficient condition for covering to be a neighborhoods induced by another covering .the remainder of this paper is organized as follows . in section [s : preliminaries ] , we review the relevant concepts and point out that the necessary and sufficient condition provided by wang et al .is false . in section [s : some new concepts and their properties ] , we propose the concepts of repeat degree and core block , and then study some properties of them . in section [ s : condition for neighborhoods induced by a covering to be equal to the covering itself ] , we present a necessary and sufficient condition for neighborhoods induced by a covering to be equal to the covering itself . in section [ s : condition for covering to be a neighborhoods ] , we present a necessary and sufficient condition for covering to be a neighborhoods induced by another covering .section [ s : conclusions ] presents conclusions .the concepts of partition and covering are the basis of classical rough sets and covering based rough sets , respectively . 
and covering is the basis of the concept of neighborhood as well .so we introduce the two concepts at first .( partition ) [ d : partition ] let be a universe of discourse and a family of subsets of .if , and , and for any , , then is called a partition of .every element of is called a partition block . in the following discussion , unless stated to the contrary , the universe of discourse is considered to be finite and nonempty .( covering ) [ d : covering ] let be a universe and a family of subsets of .if , and , then is called a covering of .every element of is called a covering block .it is clear that a partition of is certainly a covering of , so the concept of covering is an extension of the concept of partition . in the following ,we introduce the concepts of neighborhood and neighborhoods , two main concepts which will be discussed in this paper .( neighborhood ) [ d : neighborhood ] let be a covering of .for any , is called the neighborhood of .a relationship between two different neighborhoods is presented by the following proposition . [ p:1 ] let be a covering of .for any , if , then . so if and , then the concept of neighborhood has been given , we can introduce the concept of neighborhoods . [ d:2 ] let be a covering of . is called the neighborhoods induced by .there is an important property of neighborhoods presented by the following proposition . [ p:0 ] for any , is not a union of other blocks in . by the definition of , we see that is still a covering of universe .in particular , if is a partition , we have that . in paper , wang et al . said that if and only if was a partition .the following counter - example indicates that the necessity of this proposition is false .[ e:3 ] let , , where , , .we have that , , , thus . but is not a partition . in the following sections , we firstly propose some new concepts , and then study on their properties .by means of them , we present a necessary and sufficient condition for neighborhoods induced by a covering to be equal to the covering itself .there is a difference between a partition and a covering of a same universe .the difference is embodied in that for any , there exists only one partition block which include but there might exist more than one covering block which include .then it is necessary to concern with how many blocks including there are in a covering . inspired by this , we propose the following concept .( membership repeat degree ) [ d : membership repeat degree ] let be a covering of a universe .we define a function , , and call the membership repeat degree of with respect to covering . when the covering is clear , we omit the lowercase for the function . that an element of has the membership repeat degree of means that there are blocks in covering which include element .to illustrate the above definition , let us see an example .[ e:4 ] let , , where , .then , , , thus , , .in order to learn more about the neighborhoods , a special kind of covering , it is not enough using membership repeat degree of single element .we need research further that how many blocks including and simultaneously there are in a covering .( common block repeat degree ) [ d : common block repeat degree ] let be a covering of a universe .we define a function .we write as for short , and for any , we call the common block repeat degree of binary group with respect to covering . when the covering is clear , we omit the lowercase for the function . 
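for a finite universe all of the quantities introduced so far can be computed directly. the python sketch below is illustrative: the covering used is a small example of our own (not the one of example [e:3]), chosen so that it is not a partition; the script evaluates the neighborhoods, the induced family of neighborhoods, the membership repeat degree and the common block repeat degree.

```python
from itertools import combinations

U = {1, 2, 3}
# an illustrative covering of U that is not a partition (its blocks overlap)
C = [frozenset({1, 2}), frozenset({2, 3}), frozenset({2})]

def neighborhood(C, x):
    # the neighborhood of x: intersection of all covering blocks containing x
    blocks = [K for K in C if x in K]
    result = set(blocks[0])
    for K in blocks[1:]:
        result &= K
    return frozenset(result)

def cov(C, U):
    # the neighborhoods induced by the covering C
    return {neighborhood(C, x) for x in U}

def membership_repeat_degree(C, x):
    # number of blocks of C that include x
    return sum(1 for K in C if x in K)

def common_block_repeat_degree(C, x, y):
    # number of blocks of C that include both x and y
    return sum(1 for K in C if x in K and y in K)

print("induced neighborhoods:", sorted(sorted(B) for B in cov(C, U)))
print("equal to C itself?   :", cov(C, U) == set(C))   # True, yet C is not a partition
for x in sorted(U):
    print("membership repeat degree of", x, "is", membership_repeat_degree(C, x))
for x, y in combinations(sorted(U), 2):
    print("common block repeat degree of", (x, y), "is", common_block_repeat_degree(C, x, y))
```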
that a binary group of universe has the common block repeat degree of with respect to covering means that there are blocks in covering which include element and simultaneously .to illustrate the above definition , let us see an example .[ e:5 ] let , , where , , .then , , .the common block repeat degree has some properties as follows .[ p:6 ] ( 1 ) ; ( 2 ) . [ p:7 ]it follows easily from definition [ d : membership repeat degree ] and definition [ d : common block repeat degree ]. it can be expressed by repeat degree that the set of the covering blocks including is equal to the set of the covering blocks including and simultaneously .[ p:8 ] let be a covering of a universe .for any , . : it is straightforward .+ : it is clear that . if , therefore is the proper subset of .taking into account the finiteness of set , we have that , thus .this is a contradiction to that .this completes the proof .based on the concepts of membership repeat degree and common block repeat degree , we propose the concept of core block .core block is a special kind of covering block and is closely related to the issue that under what condition neighborhoods induced by a covering are equal to the covering itself .( core block ) [ d : core block ] let be a covering of a universe .for any and any , is called the core block of if and only if and for any , .the core block of is denoted as . for any element of ,say , if it has a core block , are there some other different covering blocks which are the core blocks of as well ?the following proposition answer this issue .[ p:9 ] let be a covering of a universe .for any , if are both the core block of , then . by definition [ d : core block ] ,we have that and . for any ,again , by definition [ d : core block ] , we have that . then by proposition [ p:8 ] , we have that . as , thus .so , then , thus .hence .similarly , .therefore .this completes the proof .this proposition indicates that the core block of any element of is unique .it is possible that an element of a universe have no core block in a covering of the universe . to illustrate this ,let us see an example .[ e:12 ] let , , where , , . by the definition of core block, we see that is the core block of 1 as well as 2 , namely , and is the core block of 4 , namely , but 3 have no core block . by this example , we can also see that a block of a covering might be the core block of some different elements of the universe simultaneously .the following proposition give a necessary and sufficient condition for a covering block to be a core block .[ p:10 ] let be a covering of a universe .for any , is the core block of if and only if is the intersection of all the blocks of that include .let .by and proposition [ p:8 ] , we have that + .this completes the proof . by proposition [ p:10 ], we obtain the following corollary .let be a covering of a universe .for any , if there exists the core block of , then for any , that holds . by example[ e:12 ] , we can also see that is not a core block of any element of .the following proposition shows the characteristic of this kind of block in a covering .[ p:13 ] let be a covering of a universe and . if is not a core block of any element of , then and for any , .suppose that , without loss of generality , suppose that . then is the intersection of all the blocks of that include . 
by proposition [ p:10 ] , we see that is the core block of element . this contradicts the assumption that is not a core block of any element of . it is clear that for any , . suppose that there exists an element of , say , such that . then for any , it follows that . thus is the core block of element . this contradicts the assumption that is not a core block of any element of . this completes the proof . in a covering of a universe , it is possible that none of the blocks is a core block . to illustrate this , let us see an example . [ e:14 ] let , , where , , . then , and are not core blocks of any element of . there might exist a block in a covering which is not a core block of any element of the universe , and it may even happen that none of the blocks is a core block . when every element of the universe has its core block in the covering , is there a block in the covering which is not a core block of any element of the universe ? to solve this issue , we need to introduce the concept of reducible element . furthermore , based on the concept of reducible element and the concept of invariable covering proposed in the following , we present a necessary and sufficient condition for the neighborhoods induced by a covering to be equal to the covering itself . to solve the issue of under what conditions two coverings generate the same covering lower approximation or the same covering upper approximation , zhu and wang first proposed the concept of reducible element in 2003 . in order to obtain a necessary and sufficient condition under which the neighborhoods induced by a covering are equal to the covering itself , we also need to use this concept . ( reducible element ) [ d:15 ] let be a covering of a universe and . if is a union of some blocks in , we say is a reducible element of ; otherwise is an irreducible element of . [ d:16 ] let be a covering of . if every element of is an irreducible element , we say is irreducible ; otherwise is reducible . the following two propositions reveal the relationship between reducible elements and core blocks . [ p:17 ] a reducible element of a covering is not a core block . let be a reducible element of the covering of the universe . then there exists a subset of , say , such that . for any , it is clear that is a subset of . furthermore , we say that is a proper subset of . otherwise , we have that . by , we have that . this is impossible . suppose is a core block of some element of , say . then , thus there exists some , such that . by corollary [ c:11 ] , we have that . this contradicts the fact that is a proper subset of . this completes the proof . the converse of this proposition is not true . from example [ e:14 ] , we can see that , and are not core blocks of any element of , but neither of them is a reducible element . however , we have the following proposition which is related to this converse proposition . [ p:18 ] let be a covering of a universe . suppose that for any , there exists the core block of in the covering and that there exists which is not a core block of any element of . then is a reducible element of . by proposition [ p:13 ] , we have that . let , where . by hypothesis , we see that for any , and . by corollary [ c:11 ] , we have that , then . by , we have that . thus . this proves that is a reducible element of . the following example indicates that the case described in proposition [ p:18 ] does occur . [ e:19 ] let , , where , , , . then elements 1 , 2 and 3 have their core blocks in the covering , respectively . but is not a core block of any element of .
and is a reducible element of . when all of the blocks of a covering are core blocks , is there an element of the universe which has no core block in ? the following example indicates that this kind of case does exist . [ e:20 ] let , , where , . then is the core block of 1 , is the core block of 3 . but element 2 has no core block in . based on the above conclusions , we propose the following concept . ( invariable covering ) [ d:21 ] let be a covering of a universe . is called an invariable covering if and only if is irreducible and for any , there exists the core block of . an invariable covering has the following property . [ p:22 ] let be a universe . is an invariable covering of if and only if for any , there exists the core block of and for any , is the core block of some elements of . : by the definition of invariable covering , we only need to prove that is irreducible . we use an indirect proof . suppose is reducible . then there exists at least one reducible element , say , in the covering . by proposition [ p:17 ] , we see that is not a core block of any element of . this contradicts the hypothesis . : let be an invariable covering of . then for any , there exists the core block of . we only need to prove that for any , is a core block of some elements of . we use an indirect proof . suppose that there exists some block of , say , which is not a core block of any element of . by proposition [ p:18 ] , we see that is a reducible element of . this contradicts the fact that is irreducible . this completes the proof . proposition [ p:22 ] can be considered as an alternative definition of invariable covering . now , we present one of the main results in this paper . from this theorem , we will see that an invariable covering is the only kind of covering which is equal to the neighborhoods induced by it . [ t:23 ] if and only if is an invariable covering . : let be an invariable covering of . for any , by proposition [ p:22 ] , there exists some element of , say , such that . by proposition [ p:10 ] , we have that . then . thus . conversely , for any , we see that there exists some element of , say , such that . since there exists the core block of in , by proposition [ p:10 ] , we have that . then . thus . hence . : let . then and . on the one hand , for any , that holds . so there exists some element of , say , such that . by proposition [ p:10 ] , we have that . this indicates that all the blocks of are core blocks . on the other hand , for any , that holds . thus . by proposition [ p:10 ] and , we have that . then . this indicates that every element of has its core block . by proposition [ p:22 ] , is an invariable covering . this completes the proof . given any covering of a universe , it is easy to compute the neighborhoods . but conversely , given any covering of the universe , it is not clear whether or not there exists a covering of the universe , say , such that . certainly , by the concept of and some of its properties , we know that if the number of blocks of the covering is larger than the number of elements of the universe , or if there exists some block of which is a union of some other blocks of , namely , is reducible , then cannot be the neighborhoods of any covering of the universe . but if a covering does not belong to the cases mentioned above , is it necessarily the neighborhoods of some covering of the universe ? to solve this issue , we first prove the following proposition about . [ t:24 ] for any covering of a universe , it holds that . we provide two proofs for this proposition . the first method :
by theorem [ t:23 ] , we only need to prove that is an invariable covering . by proposition [ p:0 ] , we see that is irreducible . for any , it is clear that . and . this means that is the intersection of all the blocks of that include . by proposition [ p:10 ] , we know that is the core block of . thus is an invariable covering . hence . the second method : let and . for any , it is clear that . and if , we have that . thus . hence . this completes the proof . we found and proved this proposition independently ; afterward , we found that it had already been proved by liu et al . . by this proposition , we have the following theorem . [ t:25 ] a covering of a universe is the neighborhoods of some covering of if and only if . : if , then is the neighborhoods of the covering . : suppose is the neighborhoods of some covering of , say , i.e. . by theorem [ t:24 ] , we have that . this completes the proof . of course , different coverings of a universe can induce the same neighborhoods . neighborhood is an important concept in covering based rough sets . through some concepts based on neighborhood and neighborhoods , such as the consistent function , we may find new connections between covering based rough sets and information systems . so it is necessary to study the properties of neighborhood and neighborhoods themselves . in this paper , we mainly studied two issues concerning neighborhood and neighborhoods . the first is under what conditions the neighborhoods induced by a covering are equal to the covering itself . the second is , given an arbitrary covering , whether or not there exists another covering such that the neighborhoods induced by it are exactly the former covering . through the study of these two fundamental issues , we have gained a deeper understanding of the relationship between neighborhoods and the covering which induces them . there are still many issues concerning neighborhood and neighborhoods to be solved . we will continue to focus on them in our future research . this work is supported in part by the national natural science foundation of china under grant no . 61170128 , the natural science foundation of fujian province , china , under grant nos . 2011j01374 and 2012j01294 , and the science and technology key project of fujian province , china , under grant no . 2012h0043 .
|
it is a meaningful issue under what conditions the neighborhoods induced by a covering are equal to the covering itself . a necessary and sufficient condition for this issue has been provided by some scholars . in this paper , through a counter - example , we first point out that this necessary and sufficient condition is false . second , we present a correct necessary and sufficient condition for this issue . third , we concentrate on the inverse issue of computing neighborhoods from a covering , namely , given an arbitrary covering , whether or not there exists another covering such that the neighborhoods induced by it are exactly the former covering . we present a necessary and sufficient condition for this issue as well . in a word , through the study of these two fundamental issues concerning neighborhoods , we have gained a deeper understanding of the relationship between neighborhoods and the covering which induces them . * keywords . * neighborhood ; reducible element ; repeat degree ; core block ; invariable covering .
|
a year ago , i was vaguely familiar with the notion of virtual worlds .i had read some newspaper articles about second life , which seemed mildly interesting , but i had no clear idea about what it would be like to enter such a world .all that changed when i was invited to give a popular talk on astronomy , in videoranch , another much smaller virtual world .i realized how different this type of medium of communication is from anything i had tried before , whether telephone or email or instant messaging or shared screens .there was a sense of presence together with others that was far more powerful and engaging than i had expected .i quickly realized the great potential of these worlds for remote collaboration on research projects .since then , i have explored several virtual environments , with the aim of using them as collaboration tools in astrophysics as well as in some interdisciplinary projects in which i play a leading role . by and largemy experiences have been encouraging , and i expect these virtual worlds to become the medium of choice for remote collaboration in due time , eventually removing the need for most , but not all , long - distance travel .the main question seems to be not so much whether , but rather when this will happen .my tentative guess would be five to ten years from now , but i may be wrong : the technology is evolving rapidly , and things may change even sooner .in any case , i predict that ten years from now we will wonder how life was before our use of virtual worlds , just like we are now wondering about life before the world wide web , and the way we were wondering ten years ago about life before email .twenty years ago , there was a lot of hype about virtual reality , with demonstrations of people wearing goggles for three - dimensional vision and gloves that gave a sense of touch .these applications have been slow to find their way into the main stream , partly because of technical difficulties , partly because it is neither convenient nor cheap to have to wear all that extra gear .in contrast , a very different form of virtual reality has rapidly attracted millions of people : game - based technology developed for ordinary computers , without any need for special equipment .about ten years ago , on - line 3d games , shared by many users , made their debut .in such a game , each player is represented by a simple animated figure , called an avatar , that the player can move around through the three - dimensional world .what appears on the screen is a view of the virtual world as seen through the eyes of your avatar , or as seen from a point of view several feet behind and a bit above your avatar , as you prefer . in this way ,a virtual world is a form of interactive animation movie , in which each participant plays one of the characters .currently , many millions of players take part in these games , the most popular of which is world of warcraft . 
in addition , other virtual worlds have sprouted up that have nothing to do with games , or with killing dragons or other characters .players enter these worlds for social reasons , to meet people to communicate with , or to find entertainment of various forms .currently the most popular one is second life ( sl ) .a lot has been written about sl , as a quick google search will show you .businesses have branches in sl , various universities including harvard and mit have taught classes , and political parties in the french elections earlier this year have been represented there .sl has its own currency , the linden dollar , convertible into real dollars through a fluctuating exchange rate , as if it were a foreign currency . in many ways , sl functions like a nation with its own economic , social and political structure .the world wide web has revolutionized global exchange of information . the notion of global connectivity has been novel , but the arrangement of content has not proceeded much beyond that of the printed press , with an element of tv or movies added . the dominant model is a bunch of loose - leaved pages , which are connected through a tree of pointers , allowing the user to travel in an abstract way through the information structure . as a result , it is often difficult to retrace your steps , to remember where you ve been , or to take in the whole layout of a site .in contrast to the abstract nature of the two - dimensional web , virtual worlds offer a very concrete three - dimensional information structure , modeled after the real world .while these worlds are virtual in being made up out of pixels on a screen , the experience of the users in navigating through such a world is very concrete .virtual worlds call upon our abilities of perception and locomotion in the same way as the real world does .this means that we do not need a manual to interpret a three - dimensional information structure modeled on the world around us : our whole nervous system has evolved precisely to interact with such a three - dimensional environment . remembering where you have seen something , storing information in a particular location , getting an overview of a situation , all those functions are far more natural in a 3d environment than in an abstract 2d tree of web pages .one might argue that the technological evolution of computers , beyond being simply ` computing devices ' , has moved in this direction from the beginning .the only reason that it has taken so long is the large demand on information processing needed to match our sensory input .fortunately , the steady increase in processing power of personal computers is now beginning to make it possible for everyone to be embedded in a virtual world , whenever they choose to do so , from the comfort of their own home or office .as long as you have a relatively new computer with a good graphics card and broadband internet access , there are many virtual worlds waiting for you to explore . some of them , like second life , offer a free entry - level membership , only requiring payment when you upgrade to more advanced levels of activity .getting started only requires you to download the client program to your own computer ; after only several minutes you are then ready to enter and survey that virtual world .email and telephone have given us the means to collaborate with colleagues anywhere on earth , in a near - instantaneous way . yetboth of them have severe limitations , compared to face - to - face meetings . 
in neither mediumcan you simply point to a graph as an illustration of a point you want to make , nor can you use a blackboard to scribble some equations or sketch a diagram .three new types of tools have appeared that attempt to remedy these shortcomings .one approach is to use video conferencing .each person can see one or more others , in a video window on his or her own computer .while this gives more of a sense of immediate contact , compared to a voice - only teleconference call , it is not easy to use this type of communication to share any but the simplest types of documents .another approach has been to give each participant within an on - line collaboration access to a window on his or her computer that is shared between all of them .whatever one person types will be visible by all others , and in many cases everybody is connected through voice as well , as in a conference call .the third approach , the use of a virtual world , not only combines some of the advantages of both , it also adds extra features . unlike a video conference , where participants have rather limited freedom of movement , virtual worlds offer the possibility of exploring the whole space . and in some worlds at least , everybody present in a room can gather in front of a shared screen that is embedded in the virtual world , in order to discuss its contents .after exploring a few different virtual worlds , i settled on qwaq as the company of choice for my initial experiment in using virtual spaces as collaboration tools .qwaq is a new start - up company that provides the user with ready - made virtual offices and other rooms , called _forums_. there you can easily put up the contents of various files on wall panels .whether they are pdf files , jpeg figures , powerpoint or openoffice presentations , or even movies , you can simply drag them with your mouse from your desktop onto the qwaq screen and position them on a wall within the virtual world shown on your screen .as soon as you do that , the file becomes visible for all other users present in the same virtual room .the rooms persist between sessions : when users later visit the same room , your files are still there to be seen .in addition to such useful files , that can be watched and discussed by a group of users , qwaq also allows web browsers to be opened in a wall panel . in that way, any piece of information on the web becomes instantly available for perusing by the participants in a qwaq forum .this is not only convenient , it helps give the people present a sense of embedding and actual presence in the room , given that their whole discussion takes place in the same virtual space , without a need to jump out of qwaq into other applications . and watching movies together , avatars can even enjoy meta - virtual presentations within their virtual environments! one of the most interesting features is the presence of blackboards and editors in wall panels . in this way, users can illustrate their discussions with drawings and they can type their main points directly into a file that can be jointly edited by those present . 
later , each user can easily download a copy of that file onto their own computer .almost all of our discussions are held through direct voice communication .while there is an option for text chatting , the advantage of using a headset with a microphone to directly talk to each other is so large that we hardly ever use text .the main exception is to exchange a few words while someone is giving a lecture , in order not to interrupt the speaker , or to ask a question to the speaker which can then be answered in due time .the underlying software environment used in qwaq is croquet , derived from squeak , a language based originally on smalltalk . unlike the more traditional server - centered virtual world architectures ,croquet is based on peer - to - peer communication , with potentially far better scaling properties .alan kay , a pioneer of the 2-d windowing system for personal computers , was the primary visionary behind the croquet system , which now has accrued a thriving community of open source contributors .after i learned about qwaq , at the mediax conference at stanford in april 2007 , i started two independent experiments by launching two independent initiatives , or in qwaq terminology , two ` organizations ' .the first qwaq organization , later called mica , was aimed at my astrophysics colleagues .the second organization , called wok forums , was aimed at a widely interdisciplinary group of scholars .mica stands for meta - institute for computational astrophysics , with _ meta _ derived from the term _ metaverse _ which is sometimes used to describe virtual worlds . during a couple months in the summer of 2007 , we started to explore the use of qwaq forums .one function was to simply provide a meeting place for people to talk informally , a place that can play a role similar to that of a drinking fountain or a tea room in an academic department .other activities were the organization of seminars and meetings focused on particular topics of research .an example of the latter was the muse initiative , which stands for multi - scale multi - physics scientific environment for simulating dense stellar systems . during the modest-7a meeting in split , croatia ,all participants of the workshop were given an account in mica , to give them a chance to follow up their discussions and collaborations after the end of the workshop .wok stands for ways of knowing , a broadly interdisciplinary initiative that was started in 2006 , with the aim of comparing the scientific approach to knowledge with other approaches such as those of art , spirituality , philosophy and every - day life . 
for half a year now , starting in the spring of 2007 , we have had daily meetings in wok forums , with many in - depth discussions about notions such a using your own life as a lab .currently we have about two dozen active participants , mostly from europe and north america , attending on average one or more meetings a week .they range from leading figures in fields such as cognitive science , psychology , medicine , physics and finance , to graduate students and postdocs as well as independent scholars and other professionals .when i started the two qwaq organizations , mica and wok forums , in may 2007 , i did not know what to expect in any detail , given the novelty of the medium of virtual worlds as a collaborative tool for academic investigations .however , i had some rough picture of what i thought was likely to happen : * a quick start for my astronomy group , a slow start for my interdisciplinary group ; * virtual worlds as a way to facilitate existing collaborations ; * an emphasis on using tools : web browsers , 3d objects , etc . to my great surprise , all three expectations turned out to be wrong .what happened instead was : * my interdisciplinary group took off right away ; * i found myself and others creating new collaborations ; * 3d presence was far more important than specific tools .i had expected that the computational astronomers whom i had invited to mica would quickly take to the new environment .after all , most of them had many years of experience working with rather advanced computer tools , and many had designed and written their own code and toolboxes . in contrast , many of the broadly interdisciplinary researchers that i had gathered were not particularly computer savvy .i was wondering whether they would have any interest at all in getting into a new kind of product that they first would have to download , and then would have to learn to navigate in .i was wrong on both counts .the latter group showed an immediate interest .even though i had started slowly with weekly meetings , there was strong interest in more frequent gatherings , and soon we began to meet on a daily basis .in contrast , the former group , for whom i had started off with a daily ` astro tea time ' showed little interest initially , and most meetings found me being in the tea room all by myself .it took a while before it dawned upon me what was happening .the main reason was that widely interdisciplinary activities do not have any traditional infrastructure , in terms of journals , workshops , societies and other channels to fall back on .those people interested in transcending the borders of their own discipline , not only into the immediately adjacent discipline but into a range of other disciplines , have very little to lean on .by offering a forum for discussions , i was effectively creating an oasis in a desert , attracting many thirsty fellow travelers .in contrast , many of my astrophysics colleagues complain that nowadays there are already too many meetings and joint activities , and that it has become increasingly harder to find time to sit down and do one s own original research , amidst the continuing barrage of email , faxes , and cell phone conversations . 
for themi had created yet one more fountain in a fountain - filled park .however , once my astro colleagues started to trickle in , many of them did find the new venue to be of great interest .and i had a trick to increase the trickle : threatening to close mica sufficed to catch people s attention , and to increase attendance .switching from the initial daily meetings to weekly meetings also helped considerably . having a dozen people in a room discussing the latest news in computational astrophysicsclearly is a lot more fun that being by yourself or with just one other random person during a daily tea time .meanwhile , the daily wok forums meetings continue to attract between half a dozen to a dozen participants on a daily basis , and the attendance continues to grow .i had expected to kick start my virtual world activities by bringing in existing teams of collaborators , offering them the chance to continue what they were doing already , but in a new medium .perhaps this new approach would later attract other individuals , who might be interested in joining or in starting their own projects , but that was not my initial objective . rarely in my life havei so completely misjudged a situation .getting an existing group to make the transition to a totally new mode of communication turned out to be effectively impossible . trying to change given ways of doingthings provoked far more resistance than i had expected , in both my astrophysics and my interdisciplinary collaborations .simply put , that just did nt work , period .this became so obvious , very early on , that i had no choice but try a completely different tag .i went through my address book , and gathered names of people who just might have some interest in trying out a new medium , providing them with some bait , at the off chance that they might bite .i had no idea what criterion to use , in order to attract potential players , given the novelty of the new setup , so i just threw my net widely , waiting to see what would happen. roughly half the people i contacted did not reply . of the halfthat did reply , roughly half told me that it all sounded fascinating but that they had no time in the foreseeable future to engage in new fun and games .of the people who did want to give it a try , more than half quickly got discouraged after trying once or twice , and not getting immediate gratification one way or another .but many of those who remained at the end of this severe selection process were wildly enthusiastic , considering themselves to be pioneers in a whole new world .even in retrospect , i could never have predicted whom of my colleagues would fall in the ten percent group of early adopters .i still do not see any clear pattern or set of characteristics separating those who rushed in right away from those choosing to remain sitting on the fence .many of those of whom i had been convinced that they would embrace virtual worlds did not , and quite a few whom i had contacted without much expectation turned out to jump in right away .in fact , for some of the early players i had not anticipated their interest at all .i had contacted them mainly so as not to make them feel left out when they would hear that i had contacted their seemingly more promising friends ! 
given this randomly hit - or - miss way of collecting early players ,any notion of starting with existing teams rapidly went out the window .what i wound up with was a bunch of enthusiastic tourists , eager to look around in the new virtual world that opened up unexpected horizons , with doing any kind of real work seemingly far from their mind .they were lured into a new adventure , with new toys . after a while , though , many of the tourists began to settle down , and they started to behave more like neighbors .they began to get to know each other , although many of them had never met in real life . among the mica participants, there were some old hands in computational astrophysics , but there also was a freshly minted phd in the field of education , jakub schwarzmeier from pilsen , in the czech republic , who happened to have written some astrophysical simulations as part of his educational research .the mica snapshot above shows the room that jakub created , with me visiting him together with alf whitehead who is a graduate student in astrophysics in a remote study course in australia while making a living as a manager of a team of ruby programmers in toronto . both jakub and alfhad independently contacted me by email , without having met me in person , less than half a year before i started mica , offering their help with my acs project , so it was natural to invite them into mica .finally , some of the tourists that had turned into neighbors finally began to turn into collaborators . seeing each other regularly , and becoming familiar with each others interests, they began to spawn new ideas , some of which led to new projects , with little connection to the original motivation for them to enter the virtual world where they had met .this has happened repeatedly in my interdisciplinary organization , even though there the discrepancy between people s background and interests was the largest . in my astrophysics organization ,the first mile stone was reached when evghenii gaburov and james lombardi started to write a paper together within mica , evghenii in amsterdam , holland , and jamie in meadville , pennsylvania , usa , which led to a preprint in july 2007 ( ) . as far as i know , this is the first astrophysics paper that has an explicit acknowledgment to a virtual world as the medium in which it was created .i had expected that the main attraction of a virtual world would surely be the lure of toys : being able to design and build 3d objects , to use web browsers in - world , to travel through output of simulations , all that good stuff .the qwaq software designers had already put an attractive example of a simulation output in their world , in the form of a simple model of an nacl crystal .i had expected my fellow astronomers to quickly come in with their galaxy models , following in the footsteps of the qwaq folks .i also had thought they would quickly start playing directly with the software offered by qwaq . in addition to existing applications , qwaq offers possibilities for scripting new ones , using python , and the underlying croquet offers even more ways to get into the nuts and bolts of the whole setup .i had expected my colleagues , especially students with more time on their hands , to come in to play like kids in a candy store .once more i was wrong . 
in a place full of toys, it was the place itself , not the collection of toys , that formed a magnet .the main attraction for coming into qwaq forums was presence .presence in a persistent space , a watering hole that quickly became a familiar meeting ground , this is what was felt to be the single most important aspect of the whole enterprise .everything else was clearly secondary .it goes back to the difference between the abstract nature of the two - dimensional world wide web , versus the concrete sense of ` being there ' that we get when we enter a virtual world .hundreds of millions of years of evolution of our nervous system , in all its perceptive , motor , and processing aspects , have prepared us for being at home in a three - dimensional life - like spatial environment .sharing such an environment with others turned out to be a factor that was far more important than i could have guessed .i , too , was amazed to experience the difference between a meeting in mica or wok forums , on the one hand , and being part of a traditional phone conference with the same number of individuals , on the other .teleconference calls are among the least pleasant chores to be part of , in my work .it is not always clear who is talking , there is often little real engagement , and the whole thing just feels uninspired , leading the participants to doodling or reading their email or being otherwise distracted .in contrast , a meeting of half an hour in a virtual world feels totally different . there is a palpable sense of presence .you can see where everybody is located , people can move around and gather in front of a blackboard or poster or powerpoint presentation , and you can even hear where people are , through the stereo nature of the sound communication .of the two groups that i have invited into virtual spaces , interdisciplinary researchers were the most eager early adopters .astrophysicists were much slower to get started , but once they were in and saw the potential of this new medium , they could quickly use the infrastructure they already had in their own field to produce new results , such as writing a preprint within a virtual space .individual early adopters in both groups did not come in as teams .instead , they met whoever else was there , behaving first as tourists , then as neighbors , and only later as potential collaborators , spontaneously creating new research projects . in this way , everything that happened in virtual spaces was serendipitous ; trying to get existing projects moved into virtual spaces encountered too much resistance .but even these serendipitous activities took place only after significant encouragement . to get a group of people to adapt toa new medium seems to take a considerable and ongoing amount of prodding , using whatever carrots and sticks one can find .trying to organize any type of new activity in academia resembles the proverbial challenge of ` herding cats . 'the main attraction of meeting in a virtual space has turned out to be the shared presence in a persisting space that the participants sense and get hooked to . after a number of meetings with various stimulating conversations , the regulars want to keep coming back to the familiar setting , where they know they can meet other interesting people , old friends as well as new acquaintances .being able to visit such a space at the click of a button is a great asset . 
whether at home or at work , or briefly logged in at an airport , the virtual space is always there , and with enough participants , chances are that you will meet people whenever you log in . it can function like a tea room in an academic department , but then in a portable form , always and everywhere within reach , a curious mix of attributes . one major obstacle that i have encountered is the fact that the earth is round . never before have i been so conscious of the fact that we all live in different time zones . spatial distances may drop away when people meet in virtual spaces , but time zone differences don't . in my interdisciplinary group , where we have experimented for several months now with daily meetings , i was forced to introduce meetings twice every day , in order to accommodate the fact that the participants live on different continents . in addition to time zone restrictions , some participants prefer to log in from home in the evening , others from work during the day . scheduling a weekly colloquium has been rather difficult , with some people forced to get up very early and others having to stay up till late at night . as a result of all this , the critical mass needed to sustain a ` tea time ' where enough people show up spontaneously is much larger in a virtual space than it is in an academic department . with ten people in a building , and a fixed tea time at 3 pm , chances are that at least five people show up at any given tea . with twice - daily meetings in a virtual space , and many participants showing up only once a week , you need more like a hundred people in total , to guarantee the presence of five people per meeting . and if the attendance often falls below five , there may not be enough diversity to attract regular attendance . trying to organize people to attend events in a virtual space has something in common both with running a department and with organizing a workshop . like the former , it requires persistent management , unlike putting together a workshop , which is a one - shot event . and even though it is much easier to establish a virtual space , compared to getting the funding and spending the time to build a physical building , it is also easy to underestimate the time it takes to establish an attractive infrastructure . try to imagine what it would be like to run a never - ending workshop , and you get the idea . in the short run , there is no ideal solution to the management problem . trying to run things purely by committee is unlikely to work , nor will it be easy to find a single individual willing to do the bulk of the work needed to set up and maintain the infrastructure of a purely virtual organization for academic research . progress is likely to come from some kind of middle ground , with a small core group of enthusiasts willing to spend significant amounts of time getting things going , in a typical ` open source ' kind of atmosphere , setting the tone by their personal example . so far , the two organizations that i have founded , mica and wok forums , are still very much in their initial phase where people are getting to know each other and are getting to know the virtual environment and its possibilities . what will happen next is difficult to predict .
as always in a new medium, the most interesting developments will be those that nobody expected .even so , there are a few obvious next steps .one thing - to - do is to create some form of library or archive , containing a chronicle of what has happened in a given virtual space .after people give lectures , it will be good to keep at least their powerpoint presentations . when people hold discussions , it would be great to catch their conclusions in a type of wiki or other structure for text that is easy to enter. it would be great if a whole session in a virtual space could be captured on video , and stored for later viewing within a room in that same virtual space . for computational science applications , such as large - scale simulations in astrophysics , virtual spaces can be at the same time places for people to meet , and places where those people can run their experiments . with individuals represented as avatars , it is natural for them to enter virtual laboratories where they are running their simulations . instead of the scientist sitting in front of the computer and the simulation taking place at the other side of the screen, there are many advantages in letting the scientist enter the screen and the simulated world directly . by traveling through a simulation , one can become much more intimately familiar with the details of a simulation .finally , here is one more intriguing possibility .if researchers who are geographically remote start writing code together within a virtual space , we can literally capture all that is said and done while writing the code . by keeping the full digital record of a coding session , andindexing it to the lines of code that were written during that session , future users of that code will always have the option to travel back in time to get full disclosure of all that happened during the writing .many of us , struggling with legacy code that was written decades ago , would be happy to give a minor fortune for the possibility of making such a trip back in time .this approach to massively overwhelming documentation is in the spirit of what jun makino and i have suggested on our art of computational science website , as a move from _ open source _ to _open knowledge _i thank sukanya chakrabarti , derek groen , andrew mcgowan , sean murphy , greg nuyens , rod rees and patrick st - amant for their helpful comments on the manuscript .
|
since we can not put stars in a laboratory , astrophysicists had to wait till the invention of computers before becoming laboratory scientists . for half a century now , we have been conducting experiments in our virtual laboratories . however , we ourselves have remained behind the keyboard , with the screen of the monitor separating us from the world we are simulating . recently , 3d on - line technology , developed first for games but now deployed in virtual worlds like second life , is beginning to make it possible for astrophysicists to enter their virtual labs themselves , in virtual form as avatars . this has several advantages , from new possibilities to explore the results of the simulations to a shared presence in a virtual lab with remote collaborators on different continents . i will report my experiences with the use of qwaq forums , a virtual world developed by a new company ( see http://www.qwaq.com ) .
|
all living systems have evolved to perform certain tasks in specific contexts .there are a lot fewer tasks than there are different biological solutions that the nature has created .some of these problems are universal , while the solutions may be organism - specific .thus a lot can be understood about the structure of biological systems by focusing on understanding of _ what _ they do and _ why _ they do it , in addtion to _ how _ they do it on molecular or cellular scales .in particular , this way we can uncover phenomena that generalize across different organisms , thus increasing the value of experiments and building a coherent understanding of the underlying physiological processes . in this chapter , we will take this point of view while analyzing what it takes to do one of the most common , universal functions performed by organisms at all levels of organization : signal or information processing and shaping of a response ( these are variously known in different contexts as learning from observations , signal transduction , regulation , sensing , adaptation , etc . ) studying these types of phenomena poses a series of well - defined , physical questions : how can organisms deal with noise , whether extrinsic or generated by intrinsic stochastic fluctuations within molecular components of information processing devices ?how long should the world be observed before a certain inference about it can be made ?how is the internal representation of the world made and stored over time ?how can organisms ensure that the information is processed fast enough for the formed response to be relevant in the ever - changing world ?how should the information processing strategies change when the properties of the environment surrounding the organism change ?in fact , such `` information processing '' questions have been featured prominently in studies on all scales of biological complexity , from learning phenomena in animal behavior , to analysis of neural computation in small and large animals , and to molecular information processing circuits , to name just a few . in what follows , we will not try to embrace the unembraceable , but will instead focus on just a few questions , fundamental to the study of signal processing in biology : what is the right way to measure the quality of information processing in a biological system ? and what can real - life organisms do in order to improve their performance in these tasks ? the field of study of biological information processing has undergone a dramatic growth in the recent years , and it is expanding at an ever growing rate .there are now entire conferences devoted to the related phenomena ( perhaps the best example is _ the international q - bio conference on cellular information processing _ , http://q-bio.org , held yearly in santa fe , nm , usa ) . 
hence , in this short chapter , we have neither an ability , nor a desire to provide an exhaustive literature review .instead the reader should keep in mind that the selection of references cited here is a biased sample of important results in the literature , and i apologize profusely to my friends and colleagues who find their deserving work omitted in this overview .in the most general context , a biological system can be modeled as an input - output device , cf .[ channel ] that observes a time - dependent state of the world ( where may be intrinsically multidimensional , or even formally infinite dimensional ) , processes the information , and initiates a response ( which can also be very large dimensional ) . in some cases , in its turn, the response changes the state of the world and hence influences the future values of , making the whole analysis so much harder . in view of this , analyzing the information processing means quantifying certain aspects of the mapping . in this section, we will discuss the properties that this quantification should possess , and we will introduce the quantities that satisfy them .one typically tries to model molecular or other physiological _ mechanisms _ of the response generation .for example , in well - mixed biochemical kinetics approaches , where may be a ligand concentration , and may be an expression level of a certain protein , we often write where the nonnegative functions and stand for the production / degradation of the response , influenced by the level of the signal , and is a random forcing due to the intrinsic stochasticity of chemical kinetics at small molecular copy numbers . the subscript stands for the values of adjustable parameters that define the response ( such as various kinetic rates , concentrations of intermediate enzymes , etc . ) , which themselves can change , but on time scales much slower than the dynamics of and .in addition , stands for the activity of other , hidden cellular state variables , which change according to their own dynamics , similar to eq .( [ main ] ) .this dynamics can be written for many diverse biological information processing systems , including the neural dynamics , where will would stand for the firing rate of a neuron induced by the stimulus .importantly , because of the intrinsic stochasticity in eq .( [ main ] ) , and because of the effective randomness introduced by the state of the hidden variables , the mapping between the stimulus and the response is non - deterministic , and it is summarized in the probability distribution ] . in addition , itself is not deterministic either : other agents , chaotic dynamics , statistical physics effects , and , at a more microscopic level , even quantum mechanics conspire to ensure that can only be specified probabilistically .therefore , a simple mapping is replaced by a joint probability distribution ( note that we will drop the index in the future where it does nt cause ambiguities ) p[\left\{s(t)\right\}]\\= p\left[\left\{r(t)\right\},\left\{s(t)\right\}|a\right]\equiv p_a\left[\left\{r(t)\right\},\left\{s(t)\right\}\right].\end{gathered}\ ] ] hence the measure of the quality of the biological information processing must be a _ functional _ of this joint distribution .now consider , for example , a classical system studied in cellular information processing : the _ e. coli _ chemotaxis ( see chapter 15 in this book ) .this bacterium is capable of swimming up gradients of various nutrients . 
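to make eq . ( [ main ] ) concrete , the following is a minimal euler - maruyama sketch of a single response variable driven by a time - dependent signal , written in the common birth - death form dr/dt = f(s) - k r + noise . the hill - type production term , the degradation rate , the noise amplitude and the switching input are all illustrative assumptions , not parameters of any of the systems discussed in this chapter .

```python
import numpy as np

rng = np.random.default_rng(0)

def production(s, vmax=1.0, K=0.5, n=2):
    """Assumed Hill-type activation of the response by the signal."""
    return vmax * s**n / (K**n + s**n)

def simulate(signal, dt=0.01, k_deg=0.2, noise=0.05, r0=0.0):
    """Euler-Maruyama integration of dr/dt = production(s) - k_deg*r + noise."""
    r = np.empty_like(signal)
    r[0] = r0
    for t in range(1, len(signal)):
        drift = production(signal[t - 1]) - k_deg * r[t - 1]
        r[t] = r[t - 1] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return r

# a slowly switching input, standing in for a fluctuating ligand concentration
time = np.arange(0.0, 200.0, 0.01)
s = 0.2 + 0.8 * (np.sin(2 * np.pi * time / 50.0) > 0)
r = simulate(s)
print(r[:3], r[-3:])
```

repeating the simulation many times for the same input gives samples from the conditional distribution of responses given the signal that the text discusses .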
in this case , the signal is the concentration of such extracellular nutrients .the response of the system is the activity levels of various internal proteins , like _ chey _ , _ chea _ , _ cheb _ , _ cher _ , etc . , which combine to modulate the cellular motion through the environment .it is possible to write the chemical kinetics equations that relate the stimulus to the response accurately enough and eventually produce the sought after conditional probability distribution ] , and hence the relevant variable , the signal , and the response form a markov chain : =\\p\left[\left\{e(t)\right\}\right]\ , p\left[\left\{s(t)\right\}| \left\{e(t)\right\}\right ] \,p\left[\left\{r(t)\right\}| \left\{s(t)\right\}\right].\end{gathered}\ ] ] the quantity we are seeking to characterize the biological information processing must respect this aspect of the problem .therefore , its value must depend explicitly on the choice of the relevance variable : a computation resulting in the same response will be either `` good '' or `` bad '' depending on what this response is used for .in other words , one needs to know what the problem is before saying if a solution is good or bad .the question of how much can be inferred about a state of a variable from measuring a variable has been answered by claude shannon over sixty years ago . starting with basic , uncontroversial axioms that a measure of information must obey, he derived that the uncertainty in a state of a variable is given by =-\sum_xp(x)\log p(x)=-\langle\log p(x)\rangle_{p(x ) } , \label{entropy}\ ] ] which we know now as the _ boltzmann - shannon entropy_. here denotes averaging over the probability distribution . when the logarithm in eq .( [ entropy ] ) is binary ( which we always assume in this chapter ) , then the unit of entropy is a _ bit _ :one bit of uncertainty about a variable means that the latter can be in one of two states with equal probabilities . observing the variable ( a.k.a ._ conditioning _ on it ) changes the probability distribution of , , and the difference between the entropy of prior to the measurement and the average conditional entropy tells how informative is about : &=s[x]-\langle s[x|y]\rangle_{p(y)}\\ & = -\left < \log p(x)\right>_{p(x)}+\left<\left < \log p(y|x)\right>_{p(x|y)}\right>_{p(y)}\\ & = -\left < \log\frac{p(x , y)}{p(x)p(y)}\right>_{p(x , y)}.\label{mi}\end{aligned}\ ] ] the quantity ] , for , the entropy of the entire series will diverge linearly with . therefore , it makes sense to define entropy and information rates &=&\lim_{t\to\infty}\frac{s[x(0\le t < t)\}]}{t},\\ { \mathcal i}[x;y]&=&\lim_{t\to\infty}\frac{i[\{x(0\le t< t)\ } ; \{y(0\le t < t)\}]}{t } , \label{irate}\end{aligned}\ ] ] which measure the amount of uncertainty in the signal and the reduction of this uncertainty by the response per unit time .entropy and mutual information possess some simple , important properties : 1 .both quantities are non - negative , ] .2 . entropy is zero if and only if ( _ iff _ ) the studied variable is not random .further , mutual information is zero _ iff _ , that is , there are no any kind of statistical dependences between the variables .mutual information is symmetric , =i[y;x] ] will always mean the differential entropy if is continuous and the original entropy otherwise .5 . 
for a gaussian distribution with a variance of , and , for a bivariate gaussian with a correlation coefficient of , =-1/2\log(1-\rho^2).\ ] ]thus entropy and mutual information can be viewed as generalizations of more familiar notions of variance and covariance .unlike entropy , mutual information is invariant under reparameterization of variables .that is =i[x';y']\ ] ] for all invertible .that is , provides a measure of statistical dependence between and that is independent of our subjective choice of the measurement device .one of the most fascinating properties of mutual information is the data processing inequality .suppose three variables , , and form a markov chain , .in other words , is a probabilistic transformation of , which , in turn , is a probabilistic transformation of .then it can be proven that \le \min \left ( i[x;y],i[y;z]\right).\ ] ] that is , _ you can not get new information about the original variable by further transforming the measured data _ ; any such transformation can not increase the information .together with the fact that mutual information is zero _ iff _ the variables are completely statistically independent , the data processing inequality suggests that if the variable of interest that the organism cares about is unknown to the experimenter , then analyzing the mutual information between the entire input stimulus ( sans noise ) and the response may serve as a good proxy .indeed , due to the data processing inequality , if ] is also small for any mapping of the signal into the relevant variable , whether deterministic , , or probabilistic , . in many cases , such as ,this allows us to stop guessing which calculation the organism is trying to perform and to put an upper bound on the efficiency of the information transduction , whatever an organism cares about .however , as was recently shown in the case of chemotaxis in _, when and are substantially different ( resource consumption rate vs.instantaneous surrounding resource concentration ) , maximizing ] . in more general scenarios ,the maximum log - growth advantage over uninformed peers needs to be discounted by the cost of obtaining the information , by the delay in getting it , and , more trivially , by the ability of the organism to utilize it .therefore , while these brief arguments are far from painting a complete picture of relation between information and natural selection , it is already clear that maximization of the information between the surrounding world and the internal response to it is not an academic exercise , but is directly related to fitness and will be selected for by evolution .it is now well known that probabilistic bet hedging is the strategy taken by bacteria for survival in the presence of antibiotics and for genetic recombination . 
in both cases , cell division ( and hence population growth ) must be stopped either to avoid dna damage by antibiotics , or to incorporate newly acquired dna into the chromosome .still , a small fraction of the cells choose not to divide even in the absence of antibiotics to reap the much larger benefits if the environment turns sour ( these are called the _ persistent _ and the dna uptake _ competent _ bacteria for the two cases , respectively ) .however , it remains to be seen in an experiment if real bacteria can reach the maximum growth advantage allowed by the information - theoretic considerations .another interesting possibility is that cancer stem cells and mature cancer cells also are two probabilistic states chosen to hedge bets against interventions of immune systems , drugs , and other surveillance mechanisms . in many situations , such as persistence in the face of antibiotics treatmentmentioned above , an organism settles into a certain response much faster than the environment has a chance to change again . in these cases , it is sufficient to consider the same - time mutual information between the signals and the responses , as in , =i[s;r] ] , or studies information rates , as in eq .( [ irate ] ) .the first choice measures the information between the stimulus and the response most constrained by it ; typically this would be the response formed a certain characteristic signal transduction time after the stimulus occurrence .the second approach looks at correlations between all possible pairs of stimuli and responses separated by different delays .while there are plenty of examples of biological systems where one or the other of these quantities is optimized , both of these approaches are insufficient . does nt represent all of the gathered information since bits at different moments of time are not independent of each other .further , it does not take into the account that temporal correlations in the stimulus allow to predict it , and hence the response may be formed even before the stimulus occurs . on the other hand , the information rate does not distinguish among predicting the signal , knowing it soon after it happens , or having to wait for in order to be able to estimate it from the response . to avoid these pitfalls, one can consider the information available to an organism that is relevant for specifying not all of the stimulus , but only of its future .namely , we can define the _ predictive information _ about the stimulus available from observation of a response to it of a duration , =i [ \{r(-t\le t\le0)\};\{s(t>0)\}].\ ] ] this definition is a generalization of the one used in , which had , and hence calculated the upper bound on over all possible sensory schemes ] is given by ] , and hence modifies the conditional probability distribution itself .this would be equivalent to a cell phone being able to change its physical characteristics on the fly .unfortunately , as the recent issues with the iphone antenna have shown , human engineered systems are no match to biology in this regard : they are largely incapable of adjusting their own design if the original turns out to be flawed .the property of changing one s own characteristics in response to the observed properties of the world is called _ adaptation _ , and the remainder of this section will be devoted to its overview . 
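two of the information - theoretic quantities discussed above are easy to probe numerically . the sketch below first checks the bivariate gaussian formula i = -1/2 log2 ( 1 - rho^2 ) with a simple plug - in histogram estimator , and then computes exactly the predictive information i[r_0 ; s_1] for a toy binary markov ( telegraph ) signal read out through a binary symmetric channel . all parameters ( sample size , bins , correlation , switching and error probabilities ) are illustrative assumptions rather than values from any system discussed in this chapter , and the plug - in estimator is known to be biased , so only rough agreement should be expected .

```python
import numpy as np

def mi_from_joint(pxy):
    """Mutual information in bits of a discrete joint distribution."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# --- check of the bivariate gaussian formula I = -1/2 log2(1 - rho^2) ---
rng = np.random.default_rng(1)
rho, n = 0.8, 200_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.normal(size=n)
counts, _, _ = np.histogram2d(x, y, bins=30)
estimate = mi_from_joint(counts / counts.sum())
exact = -0.5 * np.log2(1.0 - rho**2)
print(f"gaussian check: estimated {estimate:.3f} bits, exact {exact:.3f} bits")

# --- predictive information I[r0; s1] for a toy telegraph signal ---
flip = 0.1    # assumed probability that the binary signal switches per time step
error = 0.2   # assumed probability that the response misreports the current signal
p_s0 = np.array([0.5, 0.5])                              # stationary signal distribution
T = np.array([[1 - flip, flip], [flip, 1 - flip]])       # P(s1 | s0)
R = np.array([[1 - error, error], [error, 1 - error]])   # P(r0 | s0)
joint = np.zeros((2, 2))                                 # P(r0, s1)
for s0 in range(2):
    for r0 in range(2):
        for s1 in range(2):
            joint[r0, s1] += p_s0[s0] * R[s0, r0] * T[s0, s1]
print(f"predictive information I[r0; s1] = {mi_from_joint(joint):.3f} bits")
```

noisier readouts or faster signal switching drive the predictive information toward zero , while conditioning on a longer stretch of past responses can only increase it .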
in principle , we make no distinction whether this adaptation is achieved by natural selection or by physiological processes that act on much faster times scales ( comparable to the typical signal dynamics ) , and sometimes the latter may be as powerful as the former .further , we note that adaptation of the response probability distribution and formation of the response itself are , in principle , a single process of formation of the response on multiple time scales .our ability to separate it into a fast response and a slow adaptation ( and hence much of the discussion below ) depends on existence of two well - separated time scales in the signal and in the mechanism of the response formation .while such clear separation is possible in some cases , it is harder in others , and especially when the time scales of the signal and the fast response may be changing themselves. cases without a clear separation of scales raise a variety of interesting questions , but we will leave them aside for this discussion .we often can linearize the dynamics , eq .( [ main ] ) , to get the following equation describing formation of small responses-kr + \eta(t , r , s ) .\label{filter}\ ] ] here may be an expression of an mrna following activation by a transcription factor , or the firing rate of a neuron following stimulation . in the above expression, is the response activation function , which depends on the current value of the signal ; is the rate of the first - order relaxation or degradation ; and is some stochastic process representing the intrinsic noisiness of the system . in this case , depends on the entire history of , , and hence carries some information about it as well . for quasi - stationary signals (that is , the correlation time of the signal , ) , we can write the steady state dose - response ( or firing rate , or ) curve /k , \label{dose - response}\ ] ] and this will be smeared by the noise .a typical monotonic sigmoidal is characterized by only a few large - scale parameters : the range , and ; the argument at the mid - point value ; and the width of the transition region , ( see fig . [ response ] ) .if the mean signal , then , for most signals , and responses to two typical different signals and are indistinguishable as long as where is the precision of the response resolution expressed through the standard deviation of the noise .similar situation happens when and .thus , to reliably communicate information about the signal , should be tuned such that .if a real biological system can perform this adjustment , we call this _ adaption to the mean _ of the signal , _ desensetization _ , or _ adaptation of the first kind_. if , then the adaptation is _perfect_. this kind of adaptation has been observed experimentally and predicted computationally in a lot more systems than we can list here , including phototransduction , neural and molecular sensing , multistate receptor systems , immune response , and so on , with active work persisting to date ( see , e.g. , refs . for a very incomplete list of references on the subject ) .for example , the best studied adaptive circuit in molecular biology , the control of chemotaxis of _( see chapter 15 ) , largely produces adaptation of the first kind .further , a variety of problems in synthetic biology are due precisely to the mismatch between the typical protein concentration of the input signal and the response function that maps this concentration into the rate of mrna transcription or protein translation ( cf . 
and chapter 4 in this book ) .thus there is an active community of researchers working on endowing these circuits with proper adaptive matching abilities of the first kind .consider now the quasi - stationary signal taken from the distribution with .then the response to most of the signals is indistinguishable from the extremes , and it will be near the midpoint if .thus , to use the full dynamic range of the response , a biological system must tune the width of the sigmoidal dose - response curve to .we call this _ gain control _ , _ variance adaptation _ , or_ adaptation of the second kind_. experiments show that a variety of systems exhibit this adaptive behavior as well , especially in the context of neurobiology , and maybe even of evolution .these matching strategies are well known in signal processing literature under the name of histogram equalization .surprisingly , they are nothing but a special case of optimizing the mutual information ] is the one that produces . in particular ,when is independent of and , this means that each must be used equiprobably , that is , .adaptation of the first and the second kind follows from these considerations immediately . in more complex cases ,when the noise variance is not small or not constant , derivation of the optimal response activation function can not be done analytically , but numerical approaches can be used instead . in particular , in transcriptional regulation of the early _ drosophila _ embryonic development , the matching between the response function and the signal probability distribution has been observed for nonconstant . however , we caution the reader that , even though adaptation _ can _ have this intimate connection to information maximization , and it is essentially omni - present , the number of systems where the adaptive strategy has been analyzed quantitatively to show that it results in optimal information processing is not that large .we now relax the requirement of quasi - stationarity and return to dynamically changing stimuli .we rewrite eq .( [ filter ] ) in the frequency domain , \omega + \eta_\omega}{k+i\omega } , \label{filterw}\ ] ] which shows that the simple first order ( or linearized ) kinetics performs low pass filtering of the nonlinearly transformed signal .as discussed long ago by wiener , for given temporal correlations of the stimulus and the noise ( which we summarize here for simplicity by correlation times and ) , there is an optimal cutoff frequency that allows to filter out as much noise as possible without filtering out the signal .change of the parameter to match the temporal structure of the problem is called the _ time scale adaptation _ or _ adaptation of the third kind_. just like the first two kinds , time scale adaptation also can be related to maximization of the stimulus - response mutual information by means of a simple observation that minimization of the quadratic prediction error of the wiener filter is , under certain assumptions , equivalent to maximizing information about the signal , cf .( [ mutual_rho ] ) .this adaptation strategy is difficult to study experimentally since ( a ) detection of variation of the integration cutoff frequency potentially requires observing the adaptation dynamics on very long time scales , and ( b ) prediction of optimal cutoff frequency requires knowing the temporal correlation properties of signals , which are far from trivial to measure ( see , e.g. , ref . 
for a review on literature on analysis of statistical properties of natural signals ) .nonetheless , experimental systems as diverse as turtle cones , rats in matching foraging experiments , mice retinal ganglion cells , and barn owls adjusting auditory and visual maps show adaptation of the filtering cutoff frequency in response to changes in the relative time scales and/or the variances of the signal and the noise . in a few rare cases , including fly self - motion estimation and _ e. coli _chemotaxis ( numerical experiment ) , it turned out to be possible to show that the time scale matching not only improves , but optimizes the information transmission .typically one considers adaptation as a phenomenon different from redundancy reduction , and we have accepted this view .however , there is a clear relation between the two mechanisms .for example , adaptation of the first kind can be viewed as subtracting out the mean of the signal , stopping its repeated , redundant transmission and allowing to focus on the non - redundant , changing components of the signal . as any redundancy reduction procedure , this may introduce ambiguities : a perfectly adapting system will respond in the same fashion to different stimuli , preventing unambiguous identification of the stimulus based on the instantaneous response . knowing statistics of responses on the scale of adaptation itself may be required to resolve the problem .this interesting complication has been explored in a few model systems .the three kinds of adaptation we consider here can all be derived from the same principle of optimizing the stimulus - response mutual information , and evolution can achieve all of them . however , the mechanisms behind these adaptations on physiological , non evolutionary time scales and their mathematical descriptions can be substantially different , as we describe below .the adaptation of the first kind has been studied extensively . on physiological scales , it is implemented typically using negative feedback loops or incoherent feedforward loops , as illustrated in fig .[ loops ] . in all of these cases , the fast activation of the response by the signalis then followed by a delayed suppression mediated by a memory node .this allows the system to transmit changes in the signal , and yet to desensetize and return close ( and sometimes perfectly close ) to the original state if the same excitation persist .this response to _ changes _ in the signal earns adaptation of the first kind the name of _ differentiating filter_. in particular , the feedback loop in _e. 
coli _chemotaxis or yeast signaling can be represented as the feedback topologies in the figure ( see chapter 15 ) , and different models of _ dictyostelium _ adaptation include both feedforward and feedback designs .the different network topologies have different sensitivities to changes in the internal parameters , different tradeoffs between the sensitivity to the stimulus change and the quality of adaptation , and so on .however , fundamentally they are similar to each other .this can be seen by noting that since the goal of these adaptive system is to keep the signal within the _ small _ transition region between the minimum and the maximum activation of the response , it makes sense to linearize the dynamics of the networks near the mean values of the signal and the corresponding response .defining , , and , one can write , for example , for the the feedback topologies in fig .[ loops ] where are noises , and the coefficients are positive for the fourth topology , and some of them change their signs for the third . doing the usual fourier transform of these equations ( see ref . for a very clear , pedagogical treatment ) and expressing in terms of , , and , we see that it is only the product of that matters for the properties of the filter , eq .( [ fb1 ] , [ fb2 ] ) .hence both the feedback topologies in fig . [ loops ] are essentially equivalent in this regime . furthermore ,as argued in , a simple linear transformation of and allows to recast the incoherent feedforward loops ( the two first topologies in fig .[ loops ] ) into a feedback design , again arguing that , at least in the linear regime , the differences among all of these organizations are rather small from the mathematical point of view .is bilinear ; there the distinctions among the topologies are somewhat more tangible . ]the reason why we can make so much progress in the analysis of adaptation to the mean is that the mean is a linear function of the signal , and hence it can be accounted for in a linear approximation .different network topologies differ in their actuation components ( that is , how the measured mean is then fed back into changing the response generation ) , but averaging a linear function of the signal over a certain time scale is the common description of the sensing component of essentially all adaptive mechanisms of the first kind . 
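a minimal numerical sketch of the linearized circuits just discussed is given below : a fast response node is driven by the signal and suppressed by a slow memory node that integrates the response . with this integral - feedback choice , the response to a step in the signal is a transient pulse that relaxes back to baseline , which is exactly the differentiating - filter behavior of adaptation of the first kind . the specific equations , the parameter values , and the absence of memory decay are assumptions of the sketch , not taken from any particular system described in the chapter .

```python
# a minimal sketch of an integral-feedback adaptive circuit (assumed parameters)
import numpy as np

def simulate(step_time=5.0, t_end=40.0, dt=0.01,
             alpha=1.0, beta=1.0, gamma=0.5, k=1.0):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    s = (t >= step_time).astype(float)   # step change in the signal
    r = np.zeros(n)                      # fast response node
    m = np.zeros(n)                      # slow memory node (integrates the response)
    for i in range(1, n):
        dr = alpha * s[i - 1] - beta * m[i - 1] - k * r[i - 1]
        dm = gamma * r[i - 1]            # no memory decay -> perfect adaptation
        r[i] = r[i - 1] + dt * dr
        m[i] = m[i - 1] + dt * dm
    return t, r

t, r = simulate()
print(f"peak response after the step  : {r.max():.3f}")
print(f"response at the end of the run: {r[-1]:.3f} (back to baseline -> adapted)")
```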
variance and time scale adaptations are fundamentally different .while the actuation part for them is not any more difficult than for adaptation to the mean , adapting to the variance requires averaging the square or another nonlinear function of the signal to sense its current variance , and estimation of the time scale of the signal requires estimation of the spectrum or of the correlation function ( both are bilinear averages ) .therefore , maybe it is not surprising that the literature on mathematical modeling of mechanisms of these types of adaptation is rather scarce .while functional models corresponding to a bank of filters or estimators of environmental parameters operating at different time scales can account for most of the experimentally observed data about changes in the gain and in the scale of temporal integration , to our knowledge , these models largely have not been related to non - evolutionary , mechanistic processes at molecular and cellular scales that underlie them .the largest inroads in this direction have been achieved when integration of a nonlinear function of a signal results in an adaptation response that depends not just on the mean , but also on higher order cumulants of the signal , effectively mixing different kinds of adaptation together .this may be desirable in the cases of photoreception and chemosensing , where the signal mean is unalieanbly connected to the signal or the noise variances ( e.g , the standard deviation of brightness of a visual scene scales linearly with the background illumination , while the noise in the molecular concentration is proportional to the square root of the latter ) .similarly , mixing means and variances allows the budding yeast to respond to _ fractional _ rather than additive changes of a pheromone concentration .in other situations , like adaptation by a receptor with state - dependent inactivation properties , similar mixing of the mean signal with its temporal correlation properties to form an adaptive response may not serve an obvious purpose . in a similar manner ,integration of a strongly nonlinear function of a signal may allow a system to respond to signals in a gain - insensitive fashion , effectively adapting to the variance without a true adaptation .specifically , one can threshold the stimulus around its mean value and then integrate it to count how long it has remained positive . for any temporally correlated stimulus , the time since the last mean - value crossing is correlated to the instantaneous stimulus value ( it takes long time to reach high stimulus values ) , and this correlation is independent of the gain .it has been argued that adaptation to the variance in fly motion estimation can be explained at least in part by this non - adaptive process .similar mechanisms are easy to implement in molecular signaling systems as well .it is clear beyond that information theory has an important role in biology .it is a mathematically correct construction for analysis of signal processing systems .it provides a general framework to recast adaptive processes on scales from evolutionary to physiological in terms of a ( constrained ) optimization problem .sometimes it even makes ( correct ! ) predictions about responses of living systems following exposure to various signals .the first , and the most important problem that still remains to be solved is that many of the stories we mentioned above are incomplete . 
since we never know for sure which specific aspect of the world , , an organism cares about , and the statistics of signals are hard to measure in the real world , an adaptation that seems to optimize $ ] may be an artifact of our choice of and of assumptions about , but not a consequence of the quest for optimality by an organism .for example , the time scale of filtering in _ e. coli _chemotaxis may be driven by the information optimization , or it may be a function of very different pressures . similarly , a few standard deviations mismatch between the cumulative distribution of light intensities and a photoreceptor response curve in fly vision can be a sign of an imperfect experiment , or it can mean that we simply got ( almost ) lucky , and the two curves nearly matched by chance .it is difficult to make conclusions based on one data point !therefore , to complete these and similar stories , the information arguments must be used to make predictions about adaptations in novel environments , and such adaptations must be observed experimentally .this has been done in some contexts in neuroscience , but molecular sensing lags behind .this is largely because evolutionary adaptation , too slow to observe , is expected to play a major role here , and because careful control of dynamic environments , or characterization of statistical properties of naturally occuring environments needed for such experiments is not easy .new experimental techniques , such as microfluidics and artificially sped up evolution are about to solve these problems , opening the proverbial doors wide open for a new class of experiments . the second important research direction , which will require combined progress in experimental techniques and mathematical foundations , is likely going to be the return of dynamics .this has had a revolutionary effect in neuroscience , revealing responses unimaginable for quasi - steady - state stimuli , and dynamical stimulation is starting to take off in molecular systems as well .how good are living systems in filtering out those aspects of their time - dependent signals that are not predictive and are , therefore , of no use ?what is the evolutionary growth bound when signals change in a continuous , predictive fashion ?none of these questions have been touched yet , whether theoretically or experimentally . finally , we need to start building mechanistic models of adaption in living systems that are more complex than a simple subtraction of the mean .how are the amazing adaptive behaviors of the second and the third kind achieved in practice on physiological scales ?does it even make sense to distinguish the three different adaptations , or can some molecular or neural circuits achieve them all ?how many and which parameters of the signal do neural and molecular circuits estimate and how ? some of these questions may be answered if one is capable of probing the subjects with high frequency , controlled signals , and the recent technological advances will be a gamechanger as well . a arkin .signal processing by biochemical reaction networks . in j walleczek ,editor , _ self - organized biological dynamics and nonlinear control : toward understanding complexity , chaos and emergent function in living systems_. cambridge up , 2000 .n wingreen . why are chemotaxis receptors clustered but other receptors are nt ? in _ the fourth international q - bio conference on cellular information processing_. 
center for nonlinear studies , lanl , santa fe , nm , 2010 .n bowen , l walker , l matyunina , s logani , k totten , b benigno , and j mcdonald .gene expression profiling supports the hypothesis that human ovarian surface epithelia are multipotent and capable of serving as ovarian cancer initiating cells ., 2:71 , 2009 .n tishby , f pereira , and w bialek . the information bottleneck method . in b hajek and rs sreenivas , editors , _proc 37th annual allerton conference on communication , control and computing _ , pages 36877 .u illinois , 1999 .h barlow .sensory mechanisms , the reduction of redundancy , and intelligence . in dblake and a utlley , editors , _ proc symp mechanization of thought processes _ , volume 2 , page 53774 .hm stationery office , london , 1959 .
|
_ to appear as chapter 5 of _ quantitative biology : from molecular to cellular systems _ , me wall , ed . ( taylor and francis , 2011 ) . _ in this chapter we ask two questions : ( 1 ) what is the right way to measure the quality of information processing in a biological system ? and ( 2 ) what can real - life organisms do to improve their performance in information - processing tasks ? we then review the body of work that investigates these questions experimentally , computationally , and theoretically in biological domains as diverse as cell biology , population biology , and computational neuroscience .
|
networks have attracted a burst of attention in the last decade ( useful reviews include refs . ) , with applications to natural , social , and technological networks . within biology , networks are prevalent , including : neural networks , where synapses link neurons ; metabolic networks , describing metabolic processes in the cell , linking chemical reactions and the regulatory processes that control them ; protein interaction networks , representing physical interactions between an organism s proteins ; transcription networks , describing regulatory interactions between different genes ; food webs , using links to characterize who eats whom ; and networks of sexual relations and infections , including aids models .taking a broader view , networks seem to be everywhere !there are : electrical power grids , whose stability relates to the network structure ; airline networks , with service efficiency tied to properties of the network ; the world wide web , with search engines using the network links to locate pages ; networks in linguistics , with words linked by co - occurrence ; social networks of all sorts ; collaboration networks , describing joint works amongst actors , authors , research labs , ; and many more . of great current interest is the identification of community groups , or modules , within networks .stated informally , a community group is a portion of the network whose members are more tightly linked to one another than to other members of the network .a variety of approaches have been taken to explore this concept ; see refs . for useful reviews .detecting community groups allows quantitative investigation of relevant subnetworks .properties of the subnetworks may differ from the aggregate properties of the network as a whole , e.g. , modules in the world wide web are sets of topically related web pages .methods for identifying community groups can be specialized to distinct classes of networks , such as bipartite networks .the nodes in a bipartite network can be partitioned into two disjoint sets such that no two nodes within the same set are adjacent .bipartite networks thus feature two distinct types of nodes , providing a natural representation for many affiliation or interaction networks , with one type of node representing actors and the other representing relations .examples of actor - relation pairs include people attending events , court justices making decisions , scientists jointly publishing articles , organizations collaborating in projects , and legislators serving on committees . arguably , bipartite networks are the empirically standard case for social networks and other interaction networks , with unipartite networks appearing often implicitly as projections .we formally describe networks using the language of graph theory .let be a set of vertices and be a set of vertex pairs or edges from .the pair is called a graph . in a simple graph , allpairs are distinct and , i.e. , there are no double lines or loops . 
given a partition where no edges exist between pairs of points within , or , then is said to be bipartite .we shall consider simple graphs on a ( large ) finite set : the number of edges or degree of vertex is defined by the number of edges is is often called the volume of the graph .graph structure is encoded in the adjacency matrix with detecting community groups allows for the identification and quantitative investigation of relevant subnetworks .local properties of the community groups may differ from the global properties of the complete network .for example , topically related web pages in the world wide web are typically interlinked , so that the contents of pages in distinct community groups should reveal distinct themes .thus , identification of community groups within a network is a first step towards understanding the heterogeneous substructures of the network . to identify communities ,we take as our starting point the modularity , introduced by .modularity makes intuitive notions of community groups precise by comparing network edges to those of a null model . as noted by : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a good division of a network into communities is not merely one in which there are few edges between communities ; it is one in which there are fewer than expected edges between communities . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the modularity is up to a normalization constant the number of edges within communities minus those for a null model : along with , it is necessary to provide a null model , defining .the standard choice for the null model constrains the degree distribution for the vertices to match the degree distribution in the actual network .random graph models of this sort are obtained by putting an edge between vertices and at random , with the constraint that on average the degree of any vertex is .this constrains the expected adjacency matrix such that denote by and assume further that factorizes into leading to a consequence of the null model choice is that when all vertices are in the same community .the goal now is to find a division of the vertices into communities such that the modularity is maximal .an exhaustive search for a decomposition is out of the question : even for moderately large graphs there are far too many ways to decompose them into communities. fast approximate algorithms do exist ( see , for example , refs . ) .specific classes of networks have additional constraints that can be reflected in the null model . 
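before specializing the null model , the short sketch below makes the definition concrete by evaluating the modularity of a given partition with the configuration - model null term described above , in the standard newman - girvan form . the toy graph and the partitions are invented for the example ; the single - community case reproduces the property , noted in the text , that the modularity vanishes when all vertices sit in one community .

```python
# a minimal sketch of the modularity computation (toy graph; not from the paper)
import numpy as np

def modularity(adj, communities):
    """q = (1/2m) * sum_ij [ a_ij - k_i*k_j/(2m) ] * delta(c_i, c_j)."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)                      # vertex degrees
    two_m = k.sum()                          # twice the number of edges
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]          # delta(c_i, c_j)
    return float(((adj - np.outer(k, k) / two_m) * same).sum() / two_m)

# two triangles joined by a single edge (vertices 0-2 and 3-5)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(modularity(A, [0, 0, 0, 1, 1, 1]))     # natural split  -> about 0.357
print(modularity(A, [0, 0, 0, 0, 0, 0]))     # one community  -> 0.0
```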
for bipartite graphs ,the null model should be modified to reproduce the characteristic form of bipartite adjacency matrices } { \begin{array}{cc } { { \mathbf{o } } } & { { \mathbf{m}}}\\ { { { \mathbf{m}}}^\mathrm{t } } & { { \mathbf{o}}}\end{array } } } { { \quad}.}\label{eq : bipartsubstructure}\ ] ] recently , specialized modularity measures and search algorithms have been proposed for finding communities in bipartite networks .we make use of the algorithm called brim : bipartite , recursively induced modules . starting from a ( more or less ) ad hoc partition of the vertices of type 1 ,it is straightforward to optimize a corresponding decomposition of the vertices of type 2 .from there , optimize the decomposition of vertices of type 1 , and iterate .in this fashion , modularity increases until a ( local ) maximum is reached . however , the question remains : is the maximum a `` good '' one ? at this level then a random search is called for , varying the composition and number of communities , with the goal of reaching a better maximum after a new round of hill climbing using the brimalgorithm .in the ongoing research project nemo , networks of research and development collaborations under eu framework programs fp1 , fp2 , , fp5 are studied . the collaborations in the framework programs give rise to bipartite graphs , with edges existing between projects and the organizations which take part in them . with this construction ,participating organizations are linked only through joint projects . in the various framework programs ,the number of organizations ranges from 2,000 to 20,000 , the number of projects ranges from 3,000 to 15,000 , and the number of links between them ( project participations ) ranges from 10,000 to 250,000 ( see for precise values ) .a popular approach in social network analysis where networks are often small , consisting of a few dozen nodes is to visualize the networks and identify community groups by eye .however , the framework program networks are much larger : can we `` see '' the community groups in these networks ?structural differences or similarities of such networks are not obvious at a glance . for a graphical representation of the organizations and/or projects by dots on an a4 sheet of paper , we would have to put these dots at a distance of about from each other , and we then still would not have drawn the links ( collaborations ) which connect them .previous studies used a list of coarse graining recipes to compact the networks into a form which would lend itself to a graphical representation .as an alternative we have attempted to detect communities just using brim , i.e. , purely on the basis of relational network structure , and blind with respect to any additional information about the nature of agents . 
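a rough sketch of the alternating search behind brim is given below . it uses barber 's bipartite modularity , with null term k_i d_j / m for the two vertex types , which is our understanding of the specialized measure referred to above , and performs one round of the two - sided update : with the assignments of one vertex type held fixed , every vertex of the other type is moved to the community that maximizes its contribution , and the roles are then swapped . the toy incidence matrix , the fixed number of communities , and the initial assignment are assumptions of the illustration and are unrelated to the nemo data .

```python
# a minimal sketch of one brim-style round for bipartite modularity (toy data)
import numpy as np

def bipartite_modularity(B, red_c, blue_c):
    """q_b = (1/m) * sum_ij [ b_ij - k_i*d_j/m ] * delta(red_c[i], blue_c[j])."""
    B = np.asarray(B, float)
    k, d = B.sum(axis=1), B.sum(axis=0)           # degrees of the two vertex types
    m = B.sum()
    same = np.asarray(red_c)[:, None] == np.asarray(blue_c)[None, :]
    return float(((B - np.outer(k, d) / m) * same).sum() / m)

def brim_round(B, red_c, n_comm):
    """with red assignments fixed, give every blue vertex its best community,
    then repeat for the red vertices with blue fixed (one alternating round)."""
    B = np.asarray(B, float)
    k, d = B.sum(axis=1), B.sum(axis=0)
    m = B.sum()
    red_c = np.asarray(red_c).copy()
    blue_c = np.empty(B.shape[1], dtype=int)
    for j in range(B.shape[1]):                   # blue (e.g. project) step
        gain = [B[red_c == c, j].sum() - d[j] * k[red_c == c].sum() / m
                for c in range(n_comm)]
        blue_c[j] = int(np.argmax(gain))
    for i in range(B.shape[0]):                   # red (e.g. organization) step
        gain = [B[i, blue_c == c].sum() - k[i] * d[blue_c == c].sum() / m
                for c in range(n_comm)]
        red_c[i] = int(np.argmax(gain))
    return red_c, blue_c

# toy incidence matrix: 4 organizations x 4 projects with two obvious blocks
B = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]])
red, blue = brim_round(B, red_c=[0, 0, 1, 1], n_comm=2)
print(red, blue, round(bipartite_modularity(B, red, blue), 3))
```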
in , we show a community structure for fp3 found using the brimalgorithm , with a modularity of for 14 community groups .the communities are shown as vertices in a network , with the vertex positions determined using spectral methods .the area of each vertex is proportional to the number of edges from the original network within the corresponding community .the width of each edge in the community network is proportional to the number of edges in the original network connecting community members from the two linked groups .the vertices and edges are shaded to provide additional information about their topical structure , as described in the next section .each community is labeled with the most frequently occurring subject index .we denote by the frequency of occurrence of the subject index in the network , with similarly we consider the projects within one community and the frequency of any subject index appearing in the projects only of that community .we call the topical profile of community to be compared with that of the network as a whole. topical differentiation of communities can be measured by comparing their profiles , among each other or with respect to the overall network .this can be done in a variety of ways , such as by the kullback `` distance '' a true metric is given by ranging from zero to two .topical differentiation is illustrated in . in the figure ,example profiles are shown , taken from the network in .the community - specific profile corresponds to the community labeled ` 11 .food' in . based on the most frequently occurring subject indices_agriculture _ , _ food _ , and _ resources of the seas ,fisheries_the community consists of projects and organizations focussed on r&drelated to food products .the topical differentiation is for the community shown . for a specific community ( dark bars ) and the overall profile for the network as a whole ( light bars ) .the community - specific profile shown is for the community labeled `` 11 .food '' in .the community has . ] for further analysis , we have also looked for communities in networks of projects and subject indices . here, the projects and subject indices constitute the vertices of a bipartite network , with edges existing between projects and the subject indices assigned to them .this construction disregards the organizations , providing an alternate approach to investigating the topical structure of the framework programs .the later framework programs , such as fp5 , show a fair degree of overlap between the communities , due to the subject indices being freely assigned in project applications .this is in marked contrast to the networks for the first three framework programs , where topics were attributed rigidly within thematic subprograms .the communities for fp13 thus have more clearly segregated community structures .fp1 is particularly extreme , having _ no _ overlaps between the communities .the differences between framework programs point to the need for some care in interpreting community structures : the communities in fp13 reflect policy structures while those found for later framework programs are more representative of interaction patterns .compute the mutual information and normalize where the entropy is with the definitions in , ranges from zero , for uncorrelated decompositions of the set , to one , for perfectly correlated decompositions . using the brimalgorithm , we have partitioned network vertices into community groups by maximizing the modularity . 
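the comparison of different decompositions used next can be pictured with the sketch below , which computes a normalized mutual information between two partitions of the same vertex set . the normalization 2 i / ( h_1 + h_2 ) , running from zero for independent decompositions to one for identical ones , is one common choice in the community - detection literature and may differ from the convention used in the paper ; it and the toy partitions should be read as assumptions of the example .

```python
# a minimal sketch of normalized mutual information between two partitions (toy data)
from collections import Counter
from math import log

def normalized_mi(part_a, part_b):
    n = len(part_a)
    pa, pb = Counter(part_a), Counter(part_b)
    pab = Counter(zip(part_a, part_b))
    h_a = -sum(c / n * log(c / n) for c in pa.values())
    h_b = -sum(c / n * log(c / n) for c in pb.values())
    i_ab = sum(c / n * log((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())
    if h_a + h_b == 0:
        return 1.0                               # both partitions are trivial
    return 2.0 * i_ab / (h_a + h_b)

# two decompositions of ten vertices that differ in a single assignment
d1 = [0, 0, 0, 0, 1, 1, 1, 2, 2, 2]
d2 = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
print(round(normalized_mi(d1, d2), 3))           # well above zero: shared structure
print(round(normalized_mi(d1, [0] * 10), 3))     # against a single block: 0.0
```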
in principle, many dissimilar partitions of the vertices could produce similar modularity values . with the normalized mutual information ( or a similar measure ), we can assess the amount of shared structure between two different partitions of the vertex set .for example , in we have shown a decomposition of the network of projects and subject indices for fp5 . in, we show a second decomposition of the same network .the modularity is nearly identical instead of the seen previously and the decompositions have some visible similarities . however , there are also definite structural differences , most prominently that the second decomposition has only eight communities while the first has nine .are the structural differences significant ?the normalized mutual information is found to be , indicating a strong correlation between the two decompositions and demonstrating that they have relatively minor structural differences .we have successfully identified community groups in networks defined from the framework program projects and the subject indices assigned to them .the full networks defined from the projects and the organizations taking part in them are considerably larger and correspondingly more challenging to investigate . for the full organizations - projects network ,the brimhill climbing algorithm is being supplemented with an aggressive , probabilistic search through community configurations upon which brimacts .this extended search has only just begun , but preliminary results are encouraging .once established , communities will then be investigated with regard to their internal structure , with the goal of identifying correlated properties within communities and contrasting properties across communities .we expect the analysis of internal structure to reveal patterns , themes , and motivations of collaborative research and development in the european union .the authors gratefully acknowledge financial support from the european fp6-nest - adventure programme , under contract number 028875 ; from the austrian research centers , in the _ leitprojekt `` bewertung und gestaltung von wissensnetzwerken '' _ ; from the portuguese fct , under projects pocti / mat/58321/2004/fse - feder and fct / pocti-219/feder ; and from the alexander von humboldt foundation , for travel support . c. christensen , and r. albert , _ international journal of bifurcation and chaos _* 17 * , 22012214 ( 2007 ) , http://arxiv.org/abs/q-bio.ot/0609036 , special issue `` complex networks structure and dynamics '' .s. n. dorogovtsev , and j. f. f. mendes , `` the shortest path to complex networks , '' in _ complex systems and inter - disciplinary science _ , edited by n. johnson , j. efstathiou , and f. reed - tsochas , world scientific , 2004 , http://arxiv.org/abs/cond-mat/0404593 .l. danon , a. daz - guilera , j. duch , and a. arenas , _ j. stat .mech . _ * p09008 * ( 2005 ) , http://www.iop.org / ej / article/1742 - 5468/2005/09/p09008/jstat5% _ 09_p09008.html[http://www.iop.org / ej / article/1742 - 5468/2005/09/p09008/jstat5% _ 09_p09008.html ] .l. freeman , `` finding social groups : a meta - analysis of the southern women data , '' in _ dynamic social network modeling and analysis _ , edited by r. breiger , k. carley , and p. pattison , the national academies press , washington , dc , 2003 , http://moreno.ss.uci.edu/85.pdf .a. j. seary , and w. d. 
richards , `` spectral methods for analyzing and visualizing networks : an introduction , '' in _ dynamic social network modeling and analysis : workshop summary and papers _ , edited by r. breiger , k. carley , and p. pattison , the national academies press , washington , d.c . , 2003 , pp .209228 , http://www.sfu.ca/~richards/pages/nas.ajs-wdr.pdf .
|
bipartite networks are a useful tool for representing and investigating interaction networks . we consider methods for identifying communities in bipartite networks . intuitive notions of network community groups are made explicit using newman 's modularity measure . a specialized version of the modularity , adapted to bipartite networks , is presented , and a corresponding algorithm is described for identifying community groups by maximizing this measure . the algorithm is applied to networks derived from the eu framework programs on research and technological development . the community groups identified are compared using information - theoretic methods .
|
the gps timing and control ( gtc ) system is one of the major subsystems in the high altitude water cherenkov ( hawc ) gamma ray observatory .hawc is a very high energy ( vhe ) gamma ray observatory being built on the flank of the volcano sierra negra in mexico at latitude north , longitude west , and altitude 4100 m. it is important for hawc to maintain a low dead time , as an all - sky survey instrument . in order to maintain a low dead time ,the hawc main data acquisition system ( daq ) was designed as a distributed daq , providing continuous read out .the first level of the daq consists of 11 caen vx1190a time to digital converters ( tdcs ) and 11 ge xvb602 intel corei7 single board computers ( sbcs ) to read out tdcs , where each tdc - sbc pair reads a fragment of an event with 128 channels .these event fragments are combined in the online reconstruction farm .the distributed design of the daq system makes synchronous operation of the system critical .the gtc system , which is a custom fpga based system , is designed to perform this task . + as its name suggests , the gtc system has two sub systems : the gps timing system and the control system .the primary task of the gps timing system is to provide a timestamp for each recorded event , which is the absolute time of the trigger .the primary task of the control system is to provide the clock and the control signals to the tdcs , and provide trigger and detector status information to the scaler system .the gtc system is implemented using three different types of custom cards : clock type hclock card , control type hclock card and cb_fan card , as well as a commercial gps receiver navsync cw46s.pb.pdf ] +the completed hawc detector will consist of 300 steel water tanks of 7.3 m in diameter and 4.5 m in height instrumented with 4 pmts in each tank .each of these tanks contain a light - tight bladder filled with purified water and 4 pmts pointed upwards are placed near the bottom of the bladder .construction of hawc is scheduled in stages ; continuous operation of the first phase with 30 tanks ( hawc 30 ) with a fully functional gtc system started in november 2012 , and the final phase with 300 tanks will be completed in 2014 . + the hawc detector is designed to observe cosmic gamma rays by detecting the component of extensive air showers ( eas ) which reaches ground level .eas are generated from the interactions between the earth s atmosphere and cosmic gamma rays .when the relativistic charged particles in an eas move through the water tanks , they create cherenkov light that can be detected by the pmts .the main daq measures the arrival time and time over threshold ( tot ) of the pmt pulses , with an accuracy of 100 ps , using caen vx1190a time to digital converters ( tdcs ) .this information is used to determine the species of the primary particle initiating the eas ( gamma ray or proton ) , its energy , and the celestial coordinates of the primary particle .caen vx1190a tdcs are designed to record tot measurements within a given time window , around a trigger signal .each of these tdcs is equipped with 128 data channels and an output buffer to store data until read out .the control of the tdcs is done using three signals of the tdc control bus : trg , clr , and crst .the trg is the trigger signal input to the tdc . in hawc ,the trigger signal is a periodic signal that is provided by the gtc system . 
in a typical datarun the periodic trigger frequency is 40 khz ( period = 25 ) and the tdcs record the data in a 25.2 window around each trigger .the data saved in a given time window is called an event " in this paper .the analysis software searches these events for individual eas arrivals .the overlapping of the `` event '' time windows and the ability of the tdcs to read out while acquiring new data provides read out with no intrinsic daq dead time .clr is the clear command , which clears the data in the output buffer , resets the event counter , bunch counter , and performs a tdc global reset .crst is the reset command , which resets the extended trigger time tag and bunch counter .+ when hawc is completed , it will need 1200 data channels , which is 9 full tdcs and 48 data channels from a 10 tdc .besides the 10 tdcs used to record pmt signals , an tdc will be used to record 32 signals coming from the gps timing system and calibration signals .these signals are similar to the tot signals but they are encoded with the current gps time , which is the timestamp of that event .figure [ overalltiming ] shows a simplified timing diagram of the pmt signals and the timestamp signals . in this timing diagram , channels 1 through 128 of tdc 1 through ( n-1 ) record pmt signals and channels 1 through 32 of the n tdc record timestamps .+ tdc record timestamps .two timestamps per each trigger window are guaranteed , when the clock system is configured to send a timestamp in every 10 . ]while tdc buffers are filling with data , a ge xvb602 intel core i7 based vme single board computer ( sbc ) reads each tdc and delivers the data to the online reconstruction farm .however , sbcs can not perform the read out process at exactly the same rate for every tdc .therefore , the online reconstruction farm receives different fragments of a single event at different times .the hawc online reconstruction software identifies the event fragments belonging to a given trigger using the event identification number ( event i d ) , which is a 12 bit number in the event header .the event i d becomes zero after a tdc power cycle and then increases by one for each trigger .the gtc system also can reset the event i d to zero by sending a clr signal through the tdc control bus . 
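the fragment matching described above can be pictured as simple bookkeeping keyed on the 12 bit event i d , as in the toy sketch below . this is only an illustration of the idea : the number of tdcs , the treatment of the counter wrap at 4096 , and the data layout are assumptions of the sketch , and it is not the actual hawc online reconstruction software .

```python
# a toy sketch of event building keyed on the 12-bit event id (not hawc software)
from collections import defaultdict

WRAP = 4096                     # the event id is a 12-bit counter

def build_events(fragment_stream, n_tdcs):
    """fragment_stream yields (tdc_index, event_id, payload) in arrival order;
    an event is emitted once fragments from all n_tdcs have been collected."""
    pending = defaultdict(dict)
    for tdc, ev_id, payload in fragment_stream:
        key = ev_id % WRAP      # a real builder must also disambiguate wrap-arounds
        pending[key][tdc] = payload
        if len(pending[key]) == n_tdcs:
            yield key, pending.pop(key)

# tiny demo with 2 tdcs: fragments of events 7 and 8 arrive out of order
frags = [(0, 7, "pmt fragment"), (1, 8, "timestamp fragment"),
         (1, 7, "timestamp fragment"), (0, 8, "pmt fragment")]
for ev_id, parts in build_events(frags, n_tdcs=2):
    print(ev_id, sorted(parts))
```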
+ after identifying the event fragments of a single event , the online reconstruction process combines the fragments into a single event and decodes the timestamp .this event build is possible only if all the tdcs are working synchronously and maintain a unique event i d for a given trigger .the main objective of the control system is to keep the tdcs in synch .the synchronization between tdcs is achieved by distributing a global clock signal to all the tdcs , and clearing and resetting all the tdcs simultaneously at the beginning of each run .the gps timing system provides two services to hawc : 1 ) produce a periodic timestamp and 2 ) derive a low jitter 40 mhz signal to use as the global clock signal for hawc .as shown in figure [ clockcardschem ] , the gps timing system is made from two components : a custom board called the clock type hclock card and a navsync cw46s gps receiver .figure [ hclockcard ] shows a photograph of a fully assembled hclock card .it is a 2 slots wide 6u vme-64x module that is equipped with a phase lock loop ( pll ) , ten 17-pair ( 34 pins ) lvds general purpose input output ( gpio ) ports , a 16 pin connector to the gps receiver , a a24d16 vme interface and a virtex ii fpga .+ each of these gpio ports has 16 lvds gpio signals to / from the fpga and the pair carries a 40 mhz clock signal , which is also an lvds signal .the direction of the gpio ports is switchable by changing the io driver chips .clock type hclock cards are made with two input ports and eight output ports .+ the fpga is mounted in a mezzanine card ( labeled as mez-456 in the picture ) . since the performance and the resources of the virtex ii family fpgas are adequate for the requirements of hawc , a virtex ii xc2v1000 - 4fg456 fpgas in our shop s spare stock was used .however , if a future upgrade needs to change the fpga , it can easily be done by simply designing a new mezzanine card . 
+ the gps receiver is used to obtain the gps time and a 10 mhz sine wave signal .the internal pll of the clock type hclock card uses this 10 mhz sine wave signal to derive a low jitter 40 mhz digital clock signal and makes several exact copies that are delivered to the control type hclock card , to the fpga and to the signal pair of all the gpio connectors .this 40 mhz signal is used as the global clock signal of hawc .other than this sine wave , the gps receiver transmits a one pulse per second ( 1pps ) pulse stream and a set of data strings via the rs232 protocol .the rising edges of these 1pps pulses mark the top of each second .the firmware running inside the fpga uses this 1pps signal and the data strings to replicate the current gps time .+ a simplified functional block diagram of the clock firmware is shown in figure [ clockfirmwareblockdiagram ] .this is a sequential logic design with several state machines implemented using vhdl .+ the gps receiver installed at the hawc site is configured to send three data strings followed by a 1pps pulse .these three strings ( polyt , gpgsa , and polyp ) are standard nmea 0183 strings , which carry the current gps time , gps receiver operating mode , number of visible satellites , and dilution of precision ( dop ) values .the first module of the firmware reads these serial strings and extracts the current gps time and the health information such as the gps fix status and dilution of precision .then the gps time and health information goes to the internal clock module , which is a continuously running 8 digit binary coded decimal ( bcd ) clock using the 40 mhz clock signal as the reference frequency . in this 8 digit clockthe least significant digit is microseconds and the most significant digit is tens of seconds .this clock module also receives the 1pps signal , which is used to identify the top of each second . at the top of each second, the internal clock module compares its clock time with the gps clock time , and overwrites the internal clock if the times do not match and the gps receiver is in good health .this allows the timing system to have an internal clock that runs synchronously with the gps clock .the final stage of the firmware is to make a tdc readable timestamp in every interval , where can be configured to or .other than these major modules , the clock firmware has a vme module that handles an a24d16 vme interface and several fifos that are filled with gps health monitoring information . the first 28 bits of the gtc timestamp are a 7 digit bcd value that carries the time in the format : 10s of s , 1s of s , 100s of milliseconds , 10s of milliseconds , 1s of milliseconds , 100s of microseconds and 10s of microseconds ( ss : mmm : uu ) .the remaining 4 bits are used to encode various errors that are encountered in the process of acquiring and encoding the timestamp .these error codes are defined in table 1 .the encoding of this timestamp to a tdc readable format is done using a simple algorithm .each bit is denoted by a pulse ; if a pulse is 1 wide it denotes a logic zero bit , if a pulse is 2 wide it denotes a logic one bit . 
as an example , the timing diagram shown in figure [ exampletimestamp ] is the encoding for the time 12.34567 seconds with no errors .an encoding scheme of this type with pulses must be used because the tdcs are only sensitive to edges but not to logic levels .that is one can not just send the 28 raw binary bits with logic levels to the tdcs because most of the time , most of the lines will not make a logic transition during a trigger window ( ) .s wide pulse is used to indicate logic 0 , and a 2 wide pulse is used to indicate logic 1 . ].the errors corresponds to each timestamp is encoded into the 4 most significant bits of the timestamp .the meaning of each error code is shown . [ cols="^,<",options="header " , ] every hawc event has a timestamp associated with it .this timestamp is constructed by combining three components : tdc timestamp , ntp timestamp , and trigger derived timestamp .tdc timestamp is the encoded timestamp that comes from the gtc system .these timestamps are sent to a tdc when the microseconds digit of the absolute clock time is 0 , for example when the absolute time is * * .****00 sec , * * .****10 sec , * * .****20 sec , etc .the tdc that records this encoded timestamp is also read out in the same way as the other tdcs , and the time stamp becomes a part of the main data stream .the tdc timestamp ( ss : mmm : uu ) rolls over every minute .hence , the low - resolution timestamp ( ` yyyy : mm : dd : hh : mm : ss ` ) needs to be combined with the raw hawc tdc data within one minute of the trigger time .this is done by the single board computers that read tdcs , which record the system time each time tdc readout is completed for a block of events .the computer system clock of the single board computers is synchronized via ntp time service to a local ntp time server . with the local ntp time server ,the absolute accuracy of the system clocks is in the millisecond range .the tdc timestamp , and the ntp timestamp of each event get combined together in the online reconstruction farm .when they are added , the tens of seconds and the seconds digits are coming from both the tdc timestamp , and the ntp timestamp .these two digits must be equal if both timestamps , tdc timestamp , and ntp timestamp , are accurate .therefore , we use this property as a sanity check to measure the accuracy of the timestamps . as discussed before , the latency between an event trigger and the daq computer adding its time stamp to that event s data plus the uncertainty of the computer s system clock should together be less than one minute when the daq system is running normally . in the current configuration ,the sbcs initiate readout approximately every 6 milliseconds .the finest resolution timestamp is derived from the raw hawc tdc data itself .the raw hawc tdc data have a field called ` extended trigger time tag ' which contains the trigger arrival time ( ) relative to the last bunch counter reset ( crst ) with a precision of 800 ns . 
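the encoding and the stitching to the ntp time described above can be illustrated with the sketch below : pulse widths are thresholded into bits , the first 28 bits are read as seven bcd digits ( ss : mmm : uu ) with the last four bits taken as the error code , and the decoded value replaces the sub - minute part of the coarse ntp time after the seconds - digit sanity check . the bit ordering ( most significant digit first ) , the microsecond units , the 1.5 threshold and the helper names are all assumptions of the illustration , not the hawc reconstruction code ; the finer correction from the extended trigger time tag , described next , would then be added on top .

```python
# a toy sketch of timestamp decoding and ntp stitching (assumed bit order and units)
from datetime import datetime

def decode_bits(pulse_widths_us):
    """one width per tdc channel: a ~1 us pulse encodes 0, a ~2 us pulse encodes 1."""
    return [1 if w > 1.5 else 0 for w in pulse_widths_us]

def decode_timestamp(bits):
    """bits[0:28]: seven bcd digits, most significant first
    (10s of s, 1s of s, 100/10/1 ms, 100/10 us); bits[28:32]: error code."""
    digits = [int("".join(map(str, bits[4 * i:4 * i + 4])), 2) for i in range(7)]
    error = int("".join(map(str, bits[28:32])), 2)
    seconds = digits[0] * 10 + digits[1]
    micros = (digits[2] * 100 + digits[3] * 10 + digits[4]) * 1000 \
             + digits[5] * 100 + digits[6] * 10
    return seconds, micros, error

def combine(ntp_time, tdc_seconds, tdc_micros):
    """replace the sub-minute part of the coarse ntp time with the tdc timestamp,
    after the seconds-digit sanity check."""
    if ntp_time.second != tdc_seconds:
        raise ValueError("ntp and tdc seconds disagree - timestamp unreliable")
    return ntp_time.replace(microsecond=tdc_micros)

# the 12.34567 s example discussed above, with error code 0
bits = [0,0,0,1, 0,0,1,0, 0,0,1,1, 0,1,0,0, 0,1,0,1, 0,1,1,0, 0,1,1,1, 0,0,0,0]
widths = [1.0 + b for b in bits]             # re-encode as pulse widths, then decode
s, us, err = decode_timestamp(decode_bits(widths))
print(s, us, err)                            # -> 12 345670 0
print(combine(datetime(2013, 3, 1, 14, 25, 12), s, us))  # 2013-03-01 14:25:12.345670
```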
for each of the 28 tdc channels which corresponds to the tdc timestamp, there is a rising edge time measurement , again relative to the last tdc reset , lets call it where .now each of the 28 channels will provide a delta time measurement ( ) from the most recent rising edge of its input signal until the arrival of the trigger signal .+ + thus the finest resolution time is given by , + + the measured in this method has an accuracy of 0.1 ns(100 ps ) .in the final step , we construct the gps timestamp of the trigger by combining these three components : gps time = ntp time + tdc time + .we thus have an accurate timestamp for the arrival time of the trigger signal .the absolute accuracy of this time stamp is 1 , which is the absolute accuracy of the tdc time , note that following a similar procedure we can measure all of the pmt signal edges with respect to the arrival of the trigger signal .the control system provides several services to hawc : 1 ) keep all tdcs working synchronously , 2 ) issue a synchronous trigger signal to the tdcs , 3 ) issue a scaler daq trigger signal called load next event ( lne ) and send the status of the detector to the scaler system .other than these major services , the control system also has a general purpose level shifter to shift signals from lvds to ecl and vice versa .the control system is made from two custom vme boards : a control type hclock card and a cb_fan card .the control type hclock card is a version of the hclock card with 6 input ports and 4 output ports .the control type hclock card also gets the 40 mhz global clock from the clock card .the cb_fan card is a 2 slots wide 6u vme-64x module , that is designed to provide appropriate level conversions and fan - outs for the control type hclock card - tdc interface .the cb_fan card does not perform any logic .a photograph of a fully assembled cb_fan card is shown in figure [ cbfancard ] . +a schematic diagram of the connections between the clock type hclock card , control type hclock card , cb_fan card and the scaler system is shown in figure [ controlcardschem ] . 
the input signal coming from the clock card is the 40 mhz global clock signal .the control type hclock card makes several copies of this 40 mhz clock signal and distributes it to the signal pair of all the gpio connectors and to the fpga .the interface between the control type hclock card and the scaler system consists of three outputs and one input : 10 mhz reference , pause pulses , busy pulses , lne ( the trigger signal for the scaler system ) and lne enable input .the 10 mhz reference is a continuous 10 mhz square wave signal output .the pause pulses produces a 10 mhz signal in - phase with the 10 mhz reference when the control system is in the pause state .the scaler system counts both of these signals .the ratio of the number of pause pulses to the number of 10 mhz reference pulses gives the fractional dead time of the detector enforced by the experiment control system .the functionality of the busy pulse output is similar to the pause pulses except busy pulses produce a 10 mhz signal when at least one tdc is filled to the almost full level .the load next event ( lne ) signal , a 100 hz clock , acts as a readout - start trigger for the scaler system .+ the control type hclock card interface to a cb_fan card consists of four output signals , 40 mhz , clr , crst and trig , and 16 input signals , 16 almost full .the four output signals are the tdc control - bus signals .the control card makes four identical copies of all output signals and can accept up to 64 inputs .therefore , one control type hclock card can be connected with four cb_fan cards .but the tdcs can not directly connect to the control type hclock card , because these i / os are lvds signals but the tdc control bus is compatible with only ecl signals .the cb_fan card is designed to provide level shifting between the hclock card lvds signals and tdc ecl signals .apart from the level shifting , the cb_fan card makes 6 identical copies of the 40 mhz , clr , crst and trig signals .the active edges of the trg , clr and crst signals are placed midway between the active clock edges .control firmware , control type hclock card hardware and cb_fan card hardware is designed obtain these signals synchronized to better than 12.5 ns .therefore , one cb_fan card can be used to control up to 6 tdcs .since one control type hclock card can interface with 4 cb_fan cards , the gtc system is capable of controlling up to 24 tdcs .+ a simplified functional block diagram of the control firmware is shown in figure [ fig : controlfirmware ] .similar to the clock firmware , this firmware is also a sequential logic design implemented in vhdl .however , unlike the clock firmware individual modules of the control firmware are not connected in series .coordination of these modules is done through the vme module .+ the first module shown in figure [ fig : controlfirmware ] is the trigger module , which coordinates the trigger signals that go to the tdcs .the trigger module can work in three modes : pause , periodic trigger , and external trigger . in the pause mode ,the trigger module does not issue any triggers . in the periodic trigger mode ,the trigger module issues a periodic trigger signal with a known frequency set by the vme module . in the external trigger mode ,the trigger module issues a trigger signal upon a request coming from the external trigger .this trigger mode is not currently used in hawc ; the potential usage of this functionality is discussed in section [ sec : pottentialusage ] . 
in a typical data taking run of hawc, the trigger module runs in the periodic trigger mode with a trigger frequency of 40 khz . at the end of each runthe hawc experiment control system sends a request via the gtc control software to the vme module to switch the trigger module to the pause mode .the 40 khz periodic trigger frequency was chosen because it is the optimum trigger frequency for the hawc daq system . however , this periodic trigger frequency can be changed by a request to the vme module from the hawc run control system .the clr and crst modules issue the clear and reset signals to the tdcs upon a request coming from the vme module .these requests originate in the run control system at the beginning of each run .+ the next three modules in figure [ fig : controlfirmware ] provide the signals to the scaler system .the 10 mhz reference module is a 10 mhz square wave signal generator that generates a reference pulse stream to the scaler system .the functionality of the pause pulse module is equivalent to a multiplexer with two inputs and one output : logic lo input , 10 mhz square wave input and pause pulse output .when the trigger module is in the pause state , the pause pulse output switches to the 10 mhz square wave .when the trigger module is not in the pause state , the output switches to the logic lo level .the busy pulse module has a similar functionality , except that the selection between logic lo and 10 mhz square is done using the or of the almost full signals .if any of the almost full inputs are logic hi , the busy pulse output gets connected with the 10 mhz square wave , otherwise the output stays in the logic lo level .therefore , one can calculate the fraction of the time that hawc stays in the busy state using the ratio between busy pulses per run and 10 mhz square wave pulses .apart from the main features of the gtc system described above it is also able to support several other functionalities : external triggers , sbc readout signal and lvds control busses . at the present , hawc runs using a periodic trigger signalhowever , the gtc system is designed to support both the periodic trigger mode and the external trigger mode .+ the sbc read out signal is another currently unused feature of the gtc system . similar to the other signals this signal also comes from the control card and goes to the cb_fan card .the cb_fan card converts this signal to a single ended 3.3v logic level 110 ohm back terminated signal and makes 8 copies of them .the intention of this signal is to issue a read out request to the sbcs .one of the potential uses of this signal is to issue an sbc read out request when at least one tdc becomes almost full .+ besides the ecl signal tdc control buses , each cb_fan card also fans - out three copies of lvds control buses .this control bus signaling could drive the read out of additional devices such as pmt digitizers or external programed trigger modules .in order to check reliability of the gtc system , we made a test setup with two independent gtc systems and a tdc with an accuracy of 200 ps to record timestamps coming from both tdcs .if both gtc systems are reliable we expect them to produce identical timestamps for a given trigger .we collected data during 24 hours and did nt print any event with unequal timestamps .+ the health of the gtc system was continuously monitored from early 2013 , and we found that the 1pps signal had a jitter , less than 50 ns , with respect to the 10 mhz output . 
each time the 1pps moved more than 25 ns it caused an overwrite of the clock firmware s internal clock from the gps clock .therefore , this jitter produced error flags in the timestamps .the average rate of this error flag is 11 per hour .this jitter introduced an upper limit of 25 ns accuracy to the gtc generated timestamp .however , the 25 ns accuracy is well below the required accuracy of 1 for hawc .navsynch builds a new gps module with an internal phase lock to lock the 1pps and 10 mhz outputs .however , the present gps module is sufficient for hawc s requirements .the hawc gamma ray observatory equipped with a fully functional gtc system started its first phase , with 30 tanks , in november of 2012 .the pmt signals were digitized using a caen vx1190a tdc .apart from the pmt signals the clock system generates a 32 bit timestamp encoded in a 32 channel pulse pattern , which is similar to the tot signals of the pmt output signals after the febs .these 32 signals were digitized using another caen vx1190a tdc .both tdcs were read out by their own sbcs via a vme back plane and the data is transferred to the online reconstruction farm via an ethernet connection . in the online reconstruction farm the timestamp andthe pmt data that correspond to the same event ids are combined to form a single event . after combining these two parts ,the online reconstruction software decodes the timestamp . + in order to make tdcs work synchronously ,the control system delivers two identical copies of a 40 mhz clock signal and the trigger signal to tdcs . since the online reconstruction process uses the event i d to combine the pmt data with the timestamp , it is a must to maintain a unique event i d to event fragments that correspond to a given trigger .therefore , at the beginning of each run the gtc system issues a clear ( clr ) signal to reset the event i d counters .the gtc system also issues a reset ( crst ) signal at the beginning of each run to reset all the other counters in tdcs .+ the health monitoring of the gtc system was continuously done from early 2013 and it reveals that the accuracy of the timestamps produced by the gtc system has an upper limit of 25 ns . however , the 25 ns accuracy is well below the required accuracy of 1 for hawc .+ when hawc is completed in 2015 , it will have 300 tanks instrumented with 4 pmts per each tank .this will increase the number of tdcs required to record pmt signals up to 10 , and another tdc will be used to record timestamps . therefore , the completed hawc will be instrumented with 11 tdcs .the gtc system with two cb_fan cards will be able to match this requirement .we would like to give our special thanks to everyone in the hawc collaboration who helped us to design and build the gtc system .funding for the gtc system construction was provided by the nsf hawc construction grant , phy 1002546 , via a subcontract with the university of maryland , and nsf hawc grants phy 0901973 and phy 1002432 .
|
the design and performance of the gps timing and control ( gtc ) system of the high altitude water cherenkov ( hawc ) gamma ray observatory are described . the gtc system provides a gps - synchronized absolute timestamp , with an accuracy better than 1 , for each recorded event in hawc . in order to avoid any slack between the recorded data and the timestamp , timestamps are injected into the main data acquisition ( daq ) system after the front - end electronic boards ( febs ) . when hawc is completed , the hawc main daq will use 10 time to digital converters ( tdcs ) . in order to keep all the tdcs in sync , the gtc system provides a synchronized clock signal , a coordinated trigger signal , and control signals to all tdcs . gps timestamp , gamma - ray astrophysics , water cherenkov detector , time to digital converter , tev astronomy
|
the classical approach of mathematical finance is to consider that the prices of basic products ( future , stock , )are observed on the market . in particular , their valuesare used in order to price complex derivatives .since options traders typically rebalance their portfolio once or a few times a day , such derivatives pricing problems typically occur at the daily scale .when working at the ultra high frequency scale , even pricing a basic product , that is assigning a price to it , becomes a challenging issue .indeed , one has access to trades and quotes in the order book so that at a given time , many different notions of price can be defined for the same asset : last traded price , best bid price , best ask price , mid price , volume weighted average price, this multiplicity of prices is problematic for many market participants .for example , market making strategies or brokers optimal execution algorithms often require single prices of plain assets as inputs .choosing one definition or another for the price can sometimes lead to very significantly different outcomes for the strategies .this is for example the case when the tick value ( the minimum price increment allowed on the market ) is rather large .indeed , this implies that the prices mentioned above differ in a non negligible way . in practice, high frequency market participants are not looking for the fair " economic value of the asset .what they need is rather a price whose value at some given time summarizes in a suitable way the opinions of market participants at this time .this price is called _efficient price_. hence , this paper aims at providing a statistical procedure in order to estimate this efficient price . in this paper , we focus on the case of large tick assets . we define them as assets for which the bid - ask spread is almost always equal to one tick .our goal is then to infer an efficient price for this type of asset .naturally , it is reasonable to assume that the efficient price essentially lies inside the bid - ask spread but we wish to say more . in order to retrieve the efficient price ,the classical approach is to consider the imbalance of the order book , that is the difference between the available volumes at the best bid and best ask levels , see for example . indeed , it is often said by market participants that the price is where the volume is not " .here we consider a dynamic version of this idea through the information available in the _ order flow_. more precisely , we assume that the intensity of arrival of the limit order flow at the best bid or the best ask level depends on the distance between the efficient price and the considered level : if this distance is large , the intensity should be high and conversely .thus , we assume the intensity can be written as an increasing deterministic function of this distance .this function is called the _ order flow response function_. in our approach , a crucial step is to estimate the response function in a non parametric way . then, this functional estimator is used in order to retrieve the efficient price .note that it is also possible to use the buy or sell market order flow . in that case, the intensity of the flow should be high when the distance is small . 
indeed ,in this situation , market takers are not loosing too much money ( with respect to the efficient price ) when crossing the spread .the paper is organized as follows .the model and the assumptions are described in section [ mod ] .particular properties of the efficient price are given in section [ effpr ] and the main statistical procedure is explained in section [ stat ] .the theorems about the response function can be found in section [ thh ] and the limiting behavior of the estimator of the efficient price is given in section [ theffpr ] .one numerical illustration can be found in section [ num ] and a conclusion is given in section [ conclu ] .finally the proofs are relegated to section [ proofs ] .we assume the tick size is equal to one , meaning that the asset can only take integer values .moreover , the efficient price is given by ,\ ] ] where is a brownian motion on some filtered probability space and is a -measurable random variable , independent of and uniformly distributed on , with .note that such a simple dynamics for the efficient price is probably still reasonable at our high frequency scale .let be the fractional part of , denoted by , that is : to fix ideas and without loss of generality , we focus in the rest of the paper on the limit order flow at the best bid level . we assume that when a limit order is posted at time at the best bid level , its price is given by .therefore at time , the efficient price is we denote by the total number of limit orders posted over ] . to get asymptotic properties , we let tend to infinity .it will be also necessary to assume that depends on .more precisely , we have the following assumption : [ [ assumption - h2-asymptotic - setting ] ] assumption h2 : asymptotic setting + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for some , as tends to infinity , note that up to scaling modifications , we could also consider a setting where is fixed and the tick size is not constant equal to but tends to zero .the intensity of the order flow is .this process has several nice properties .first , recall that if is uniformly distributed on ] .thus , since is uniformly distributed on ] and =5/(3\sigma^4) ] can be computed as multiple integrals on ^{2} ] an explicit expression for the bivariate laplace transform of is available ,see formula ( i.3.18.1 ) in .using differentiation , from this expression , one can easily derive ] which enables to deduce and ] .however , we do not have access to the values of the . nevertheless , we know that they are uniformly distributed on ] ) , for the space of cdlg functions from ( resp . ] , , for the skorohod topology .we have the following result for : [ thhinv ] under h1 and h2 , as tends to infinity , we have in , where is a continuous centered gaussian process with covariance function defined by . notethat although there are terms in the sum defining in , the rate of convergence in theorem [ thhinv ] is which is slower than .this is due to the strong dependence between the within each cycle and the fact that the number of cycles is of order .the same type of result holds also for : [ tcl]under h1 and h2 , as tends to infinity , we have in . 
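as a purely illustrative companion to the model above , the following python sketch simulates a toy version of it : a brownian efficient price with unit tick size , its fractional part as the distance to the best bid , and limit orders obtained by thinning a fine time grid according to an intensity of the assumed form , i.e. an increasing function of that fractional part ; all parameter values and the particular choice of the response function are hypothetical .

....
import numpy as np

rng = np.random.default_rng(0)

def simulate_order_flow(T=1000.0, dt=0.001, sigma=0.05, mu=10.0,
                        h=lambda y: y ** 2):
    """Euler/thinning simulation: efficient price X_t = X_0 + sigma*W_t with
    unit tick, Y_t = fractional part of X_t, and limit orders at the best bid
    arriving as a Cox process with intensity mu*h(Y_t), h increasing on [0,1].
    All parameter values and the choice of h are illustrative only."""
    n = int(T / dt)
    increments = rng.normal(0.0, np.sqrt(dt), n)
    x = rng.uniform(0.0, 1.0) + sigma * np.cumsum(increments)   # efficient price path
    y = x - np.floor(x)                        # distance to the best bid level
    accept = rng.random(n) < mu * h(y) * dt    # thinning of the fine time grid
    order_times = np.flatnonzero(accept) * dt
    return x, y, order_times
....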
note that both in theorem [ thhinv ] and theorem [ tcl ] , the covariance functions of the limiting processes can be computed explicitly , see section [ reg ] .let ] , with ,\\ a_2&=\mathbb{e}\big [ \mathbb{i}_{\ { v_{j , k_{t},t}=\pm 1\ } } \mathbb{i}_{\ { t / k_{t}}^{jt / k_{t}}h ( y_{t } ) dt| > \varepsilon _ { t}\ } } \big].\end{aligned}\ ] ] we have now remark that from markov inequality and the fact that is uniformly distributed , we easily get that for any , , : }{\text{sup}}|y_t - y_s|\geq \lambda\big]\leq c_p(\delta^{p/2}\lambda^{-p}+\delta^{1/2}).\ ] ] therefore , for , \leq c\frac{(t / k_t)^{p/2}}{\varepsilon_t^p}+c(t / k_t)^{1/2}.\ ] ] thus +c\frac{(t / k_t)^{p/2}}{\varepsilon_t^p}+c(t / k_t)^{1/2}.\ ] ] using the stationary distribution of we obtain that for small enough , we now turn to . remarking that implies we get .\ ] ] recall that conditional on the path of , , where is a poisson random variable with parameter thus we obtain using that \leq \frac{1}{k_{t}}\sum_{j=1}^{k_{t}}\mathbb{e}[|v_{j, k_{t},t}|],\]]we finally get \leq c\varepsilon_t\sqrt{t}+c\frac{t^{(p+1)/2}}{\varepsilon_t^pk_t^{p/2}}+c\frac{k_{t}}{\mu_t\sqrt{t}\varepsilon_t^2}+c\frac{t}{\sqrt{k_t}}.\ ] ] we take , with tending to zero . then \leq c\zeta_t+c\frac{t^{p+1/2}}{\zeta_t^pk_t^{p/2}}+c\frac{k_{t}\sqrt{t}}{\mu_t\zeta_t^2}+c\frac{t}{\sqrt{k_t}}.\ ] ] thanks to the assumptions on , we can find a sequence tending to such that ] goes to .eventually , from , we get that converges in law towards a centered gaussian random variable with variance $ ] .we now give the second lemma needed to prove proposition [ propconvhemoinsun ] .[ lemma2]let sequence is tight in .recall first the following classical c - tightness criterion , see theorem 15.6 in : if for , and all \leq c\big(|t _ { 1}-t _ { 2}|^{p_{1}}\big),\]]then is tight in we now use the sequence of stopping time defined by .let from theorem ii.5.1 and theorem ii.5.2 in , we have \leq ct\ ] ] and \leq ct.\ ] ] we now define remark that for , the are centered ( see section [ reg ] ) and iid .moreover , using the occupation formula together with a taylor expansion , we get \leq t^{-1}{\mathbb{e}}\big[\big(\int_{\nu _ { i-1}}^{\nu _ { i}}\mathbb{i}_{\ { t _ { 1}<h ( y_{t } ) \leq t _ { 2}\ } } dt\big)^2\big ] \leq ct^{-1}|t_2-t_1|^2{\mathbb{e}}[(l^*)^2],\ ] ] with }{\text{sup}}\big(l_{-1/\sigma,1/\sigma}(u)\big).\ ] ] from the ray - knight version of burkhlder - davis - gundy inequality , see , we know that all polynomial moments of are finite and thus \leq ct^{-1}|t_2-t_1|^2.\ ] ] we show in the same way that +\mathbb{e}\big[\big(\tilde y_{n_t+1 } ( t _ { 1},t _ { 2})\big)^2\big]\leq ct^{-1}|t_2-t_1|^2.\ ] ] we now use the preceding inequalities in order to show that the tightness criterion holds .we have with thus \leq c\sum_{i=1}^4{\mathbb{e}}[|b_i|^2].\ ] ] by theorem i.5.1 in , \leq c\mathbb{e}[n_{t } ] \mathbb{e}\big[\big(y_{2 } ( t _ { 1},t _ { 2})\big ) ^{2}\big].\ ] ] moreover , using , we obtain \mathbb{e}\big[|y_{2 } ( t _ { 1},t _ { 2})| ^{2}\big]\leq c|t _ { 1}-t _ { 2}|^{2}.\ ] ] finally , we easily get from that \leq c|t _ { 1}-t _ { 2}|^{2}.\ ] ] and therefore the tightness criterion is satisfied . 
in order to prove theorem [ thhinv ] and theorem [ tcl ] ,we start with the following corollary of proposition [ propconvhemoinsun ] : [ cortemp ] as , converges in law towards in ( for the product topology ) .the result follows directly from proposition [ propconvhemoinsun ] together with theorem 13.7.2 in and the continuous mapping theorem .we now give the proof of theorem [ thhinv ] .we write with from proposition [ propconvhemoinsun ] together with the continuity of the composition map , see theorem 13.2.2 in , we have in .now recall that thus using corollary [ cortemp ] together with theorem 13.3.3 in , we get in .the convergences of and taking place jointly , this ends the proof of theorem [ thhinv ] .from theorem [ thhinv ] , theorem [ tcl ] follows from theorem 13.7.2 in .let be the oracle estimate of defined the same way as but with instead of .now we use that and the fact that the cycles on which is defined are negligible in the estimation of and .then , from proposition [ propconvhemoinsun ] , we have in , with independent of . using theorem 13.2.1 in andthe fact that we derive now recall that and that conditional on the path of , , where is a poisson random variable with parameter therefore , using a taylor expansion and the fact that }{\text{sup}}|y_u - y_t|\geq 1\big]\leq c(t / k_{t})^{1/2},\ ] ] we get that \ ] ] is smaller than }{\text{sup}}|y_u - y_t|\big]+c\big(k_t/(t\mu_{t})\big)^{1/2}\leq c(t / k_t)^{1/2}+c\big(k_t/(t\mu_{t})\big)^{1/2}.\ ] ] eventually , since and tend to zero , we get \leq ct/\sqrt{k_t}+c(k_t/\mu_{t})^{1/2}\rightarrow 0,\ ] ] which concludes the proof .we are grateful to renaud drappier and sebouh takvorian from bnp paribas for their very interesting comments .we also thank the referee for his helpful remarks .
|
at the ultra high frequency level , the notion of price of an asset is very ambiguous . indeed , many different prices can be defined ( last traded price , best bid price , mid price , ... ) . thus , in practice , market participants face the problem of choosing a price when implementing their strategies . in this work , we propose a notion of efficient price which seems relevant in practice . furthermore , we provide a statistical methodology enabling us to estimate this price from the order flow . * key words : * efficient price , order flow , response function , market microstructure , cox processes , fractional part of brownian motion , non parametric estimation , functional limit theorems .
|
the idea motivating the importance of identifying core genes is to understand the shared functionality of a given set of species .we introduced in a previous work two methods for discovering core and pan genes of chloroplastic genomes using both sequence similarity and alignment based approaches . to determine these core and pan genomes for a large set of dna sequences , we propose in this work to improve the alignment based approach by considering a novel sequence quality control test .more precisely , we focus on the following questions considering a collection of 99 chloroplasts : how can we identify the best core genome ( an artificially designed set of coding sequences as close as possible to the real biological one ) and how to deduce scenarii regarding their gene loss .the term chloroplast comes from the combination of plastid and chloro , meaning that it is an organelle found in plant and eukaryotic algae cells which contains chlorophyll .chloroplasts may have evolved from _ cyanobacteria _ through endosymbiosis and since their main objective is to conduct photosynthesis , these fundamental tiny energy factories are present in many organisms .this key role explains why chloroplasts are at the basis of most trophic pyramids and thus responsible for evolution and speciation .moreover , as photosynthetic organisms release atmospheric oxygen when converting light energy into chemical energy and simultaneously produce organic molecules from carbon dioxide , they originated the breathable air and represent a mid to long term carbon storage medium .consequently , exploring the evolutionary history of chloroplasts is of great interest and therefore further phylogenetic studies are needed .an early study of finding the common genes in chloroplasts was realized in 1998 by _ stoebe et al . _they established the distribution of 190 identified genes and 66 hypothetical protein - coding genes ( _ ysf _ ) in all nine photosynthetic algal plastid genomes available ( excluding non photosynthetic _ astasia tonga _ ) from the last update of plastid genes nomenclature and distribution .the distribution reveals a set of approximately 50 core protein - coding genes retained in all taxa . in 2003 , _ grzebyk et al . _ , have studied the core genes among 24 chloroplastic sequences extracted from public databases , 10 of them being algae plastid genomes .they broadly clustered the 50 genes from _stoebe et al ._ into three major functional domains : ( 1 ) genes encoded for atp synthesis ( _ atp _ genes ) ; ( 2 ) genes encoded for photosynthetic processes ( _ psa _ and _ psb _ genes ) ; and ( 3 ) housekeeping genes that include the plastid ribosomal proteins ( _ rpl _ and _ rps _ genes ) .the study shows that all plastid genomes were rich in housekeeping genes with one _ rbclg _ gene involved in photosynthesis . to determine core chloroplast genomes for a given set of photosynthetic organisms , bioinformatics investigations using sequence annotation and comparison toolsare required , and therefore various choices are possible .the purpose of our research work is precisely to study the impact of these choices on the obtained results .a general presentation of the approaches we propose is provided in section [ sec : general ] . 
a closer examination of the approaches is given in section [ sec : extraction ] .section [ sec : simil ] will present coding sequences clustering method based on sequence similarity , while section [ sec : mixed ] will describe quality test method based on quality genes .the paper ends with a discussion based on biological aspects regarding the evolutionary history of the considered genomes , leading to our methodology proposal for core and pan genomes discovery of chloroplasts , followed by a conclusion section summarizing our investigations .instead of considering only gene sequences taken from ncbi or dogma , an improved quality test process now takes place as shown in figure [ fig1 ] .it works with gene names and sequences , to produce what we call `` quality genes '' .remark that such a simple general idea is not so easy to realize , and that it is not sufficient to only consider gene names provided by such tools . providing good annotations is an important stage for extracting gene features .indeed , gene features here could be considered as : gene names , gene sequences , protein sequences , and so on .we will subsequently propose methods that use gene names and sequences for extracting core genes and producing chloroplast evolutionary tree .real genomes were used in this study , which cover eleven types of chloroplast families ( see for more details ) .furthermore , two kinds of annotations will be considered in this document , namely the ones provided by ncbi on the one hand , and the ones by dogma on the other hand .to make this document self contained , we recall the same definition with a fast revision of similarity based method .basically , this method starts with annotated genomes either from ncbi or dogma and uses a distance ] and a similarity measure , the method builds the _ similarity _ undirected graph where vertices are alleles and s.t .there is an edge between and if we have .each connected component ( cc ) of this graph defines a class of the dna sequences and is abusively called a `` gene '' , whereas all its nodes ( dna sequences ) are the `` alleles '' of this gene .let the function that maps each sequence into its representative gene .each genome is thus mapped into the set where duplicated genes are removed .consequently , the core genome ( resp . , the pan genome ) of two genomes and is defined as the intersection ( resp ., as the union ) of their projected genomes . the intersection ( resp .the union ) of all the projected genomes constitutes the core genome ( resp . the pan genome ) of the whole species .let us now consider the 99 chloroplastic genomes introduced earlier .we use in this case study either the coding sequences downloaded from ncbi website or the sequences predicted by dogma . each genome is thus constituted by a list of coding sequences . 
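a minimal python sketch of this clustering step is given below ; the helper names are ours , the similarity callback stands for the needleman - wunsch score mentioned above , and the all - pairs comparison is only meant for illustration ( it scales quadratically with the number of sequences ) .

....
import itertools
import networkx as nx

def core_and_pan(genomes, similarity, threshold):
    """genomes maps a genome id to its list of coding sequences;
    similarity(a, b) returns a score in [0, 100], e.g. the score of a
    Needleman-Wunsch global alignment."""
    # flatten all sequences, remembering which genome each one came from
    owner, sequences = [], []
    for name, seqs in genomes.items():
        for s in seqs:
            owner.append(name)
            sequences.append(s)
    g = nx.Graph()
    g.add_nodes_from(range(len(sequences)))
    for i, j in itertools.combinations(range(len(sequences)), 2):
        if similarity(sequences[i], sequences[j]) >= threshold:
            g.add_edge(i, j)
    # each connected component of the similarity graph is one "gene"
    projected = {name: set() for name in genomes}
    for gene_id, component in enumerate(nx.connected_components(g)):
        for node in component:
            projected[owner[node]].add(gene_id)   # duplicated genes collapse here
    core = set.intersection(*projected.values())
    pan = set.union(*projected.values())
    return core, pan
....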
in this illustration study, we have evaluated the similarity between two sequences by using a global alignment .more precisely , the measure introduced in the first approach is the similarity score provided after a needleman - wunch global alignment , by the _ emboss _ package released by embl .the number of genes in the core genome and in the pan genome have been computed .obtained results from various threshold values are represented in table [ fig : sim : core : pan ] .remark that when the threshold is large , the pan genome is large too .no matter the chosen annotation tool , this first approach suffers from producing too small core genomes , for any chosen similarity threshold , compared to what is usually expected by biologists ..size of core and pan genomes w.r.t .the similarity threshold [ cols="^,^,^,^,^,^,^,^ " , ] [ fig : sim : core : pan ] let us present our new approach . in this one , we propose to integrate a similarity distance on gene names into the pipeline .each similarity is computed between a name from dogma and a name from ncbi , as shown in figure [ meth2:gensim ] .the proposed distance is the levenshtein one , which is close to the needleman - wunsch , except that gap opening and extension penalties are equal .the same name is then set to sequences whose ncbi names are close according to this edit distance .the risk is now to merge genes that are different but whose names are similar ( for instance , nd4 and nd4l are two different mitochondrial genes , but with similar names ) . to fix such a flaw , the sequence similarity , for intersected genes in a genome , is compared too in a second stage ( with a needleman - wunsch global alignment ) after selecting a genome accession number , and the genes correspondence is simply ignored if this similarity is below a predefined threshold .we call this operation , which will result in a set of quality genes , a quality test .a result from this quality test process is a set of quality genes .these genes will then constitute the quality genomes .a list of generated quality genomes based on specific threshold will construct the intersection core matrix to generate the core genes , core tree , and phylogenetic tree after choosing an appropriate outgroup .it is important to note that dna sequence annotation raises a problem in the case of dogma : contrary to what happens with gene features in ncbi , genes predicted by dogma annotation may be fragmented in several parts .such genes are stored in the gene - vision file format produced by dogma , as each fragment is in this file with the same gene name .a gene whose name is present at least twice in the file is either a duplicated gene or a fragmented one .obviously , fragmented genes must be defragmented before the dna similarity computation stage ( remark that such a defragmentation has already been realized on ncbi website ) .as the orientation of each fragment is given in the gene - vision output , this defragmentation consists in concatenating all the possible permutations ( in the case of duplication ) , and only keeping the permutation with the best similarity score in comparisons with other sequences having the same gene name , if this score is larger than the given threshold .all algorithms have been implemented using python language version 2.7 , on a personal computer running ubuntu 12.04 32bit with 6 gbyte memory , and a quad - core intel core i5 processor with an operating frequency of 2.5 ghz . 
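a compact python sketch of the two - stage quality test for a single genome could look as follows ; the function names , the name tolerance and the sequence threshold are illustrative choices , and any edit - distance implementation can replace the levenshtein package used here .

....
import Levenshtein   # python-Levenshtein package; any edit-distance helper works

def quality_genes(ncbi_genes, dogma_genes, seq_similarity,
                  name_tol=1, seq_threshold=60.0):
    """ncbi_genes and dogma_genes map gene names to (defragmented) sequences
    for one genome; seq_similarity(a, b) returns a percentage, e.g. from a
    Needleman-Wunsch global alignment.  Parameter values are illustrative."""
    kept = {}
    for n_name, n_seq in ncbi_genes.items():
        # stage 1: find the DOGMA gene whose name is closest to the NCBI name
        d_name = min(dogma_genes,
                     key=lambda d: Levenshtein.distance(n_name.lower(), d.lower()))
        if Levenshtein.distance(n_name.lower(), d_name.lower()) > name_tol:
            continue                    # names too different -> no correspondence
        # stage 2: only keep the pair if the sequences themselves agree
        if seq_similarity(n_seq, dogma_genes[d_name]) >= seq_threshold:
            kept[d_name] = dogma_genes[d_name]   # a "quality gene"
    return kept
....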
to produce a core tree and genomes based on quality control approach , we need to know what are the common genes that share almost the same name and sequence from different annotation tools .figure [ subfig-1:ncbi_vs_dogma ] shows the original amount of genes based on two different annotation tools , their correlation is equal to 0.57 . a two steps quality test routineis then launched to produce `` quality genomes '' and to enlarge the correlation : ( 1 ) select all common genes based on gene names and ( 2 ) check the similarity of sequences , which must be larger than a predefined threshold .figure [ subfig-2:coverage_ncbi_vs_dogma ] presents the genes coverage percentage between ncbi and dogma .remark that , gene differences between such annotation tools can affect the final core genome .more precisely , the number of _ trnas _ and _ rrnas _ genes are very high in the case of dogma annotation , while they are very low in the case of ncbi .there are also some unnamed or badly named _genes in the case of ncbi .these genes may improve the final core genome , if their functionality are well defined .the number of _ core genes _ , illustrated in figure [ subfig-1:core ] , represents the amount of genes in the computed core genome .the main goal is to find the largest number of core genes that is compatible with biological background related to chloroplasts .from the first approach with a threshold of 60% , we have obtained 2 genes for 99 genomes with ncbi and dogma , whereas 4 genes for 98 genomes have been found using the second approach . in the case of second approach, we have ignored one genome for _ micromonas pusilla _ under the accession ( nc_012568.1 ) from our sample , because we have a few amount of quality genes or none that could have been generated from its correspondents . with the second approach ,zero gene in rooted core genome means that we have two or more subtrees of organisms that are completely divergent among each other .unfortunately , for the first approach with ncbi annotation , the core genes within ncbi cores tree did not provide true biologically distribution of the genomes .conversely , in the case of dogma annotation , the distribution of genomes is biologically relevant .the ncbi under performance may be explained by broken subcores due to an artificially low number of genes in some genomes intersection , which could be explained by coding sequence prediction or annotation errors , or by very divergent genomes .more precisely , _micromonas pusilla _ ( accession number nc_012568.1 ) is the only genome who totally destroys the final core genome with ncbi annotations , for both gene features and gene quality methods .according to chloroplast endosymbiotic theory , the primary endosymbiosis has led to three chloroplast lineages among which the two most evolved groups are the chloroplastida and the rhodophyce .these chloroplast groups , which respectively consist of _ land plants _ and _ green algae _ , and _ red algae _, gave rise to secondary plastids when algae cells were engulfed by other heterotrophic eukaryotes through various secondary endosymbioses .thus _ euglens _ come from _ green algae _ while _ red algae _ gave birth to both _brown algae _ and _dinoflagellates_. 
now , if we observe the built core trees , in particular the one gained with quality control approach , we can notice that a primary plastid generated by the first endosymbiosis can be found in a single lineage of the chloroplast genome evolution tree : the chloroplastida group corresponds to a lineage , whereas the rhodophyce group is represented by a second one .the generated core tree is composed by two subtrees , the first one containing the lineages of land plants and green algae and the second one presenting the lineages of brown and green algae . in the tree , some chloroplast lineages such as _ angiosperms _ and _green algae _ have well biological distributions , while other lineages ( _ euglens _ , _ dinoflagellates _ , and _ ferns _ ) are badly distributed when compared to their biological history . indeed ,common quality genes from quality control approach are well covered by most ncbi genomes , while a large number of _ trnas _ and _ rrnas _ from dogma genomes have been lost .in this research work , we studied two methodologies for extracting core genes from a large set of chloroplastic genomes , and we developed python programs to evaluate them in practice . a two stage similarity measure , on names and sequences , is thus proposed for dna sequences clustering in genes , which merges best results provided by ncbi and dogma .results obtained with this `` quality control test '' are deeply compared with our previous research work , on both computational and biological aspects , considering a set of 99 chloroplastic genomes .core trees have finally been generated for each method , to investigate the distribution of chloroplasts and core genomes .the tree from dogma annotation has revealed the best distribution of chloroplasts regarding their evolutionary history . in particular, it appears to us that each endosymbiosis event is well branched in the dogma core tree .b. alkindy , j. f. couchot , c. guyeux , a. mouly , m. salomon , j. m. bahi , `` finding the core - genes of chloroplasts '' , journal of bioscience , biochemistery , and bioinformatics , iacsit press , 4(5):357364 , 2014 .
|
in computational biology and bioinformatics , the manner to understand evolution processes within various related organisms paid a lot of attention these last decades . however , accurate methodologies are still needed to discover genes content evolution . in a previous work , two novel approaches based on sequence similarities and genes features have been proposed . more precisely , we proposed to use genes names , sequence similarities , or both , insured either from ncbi or from dogma annotation tools . dogma has the advantage to be an up - to - date accurate automatic tool specifically designed for chloroplasts , whereas ncbi possesses high quality human curated genes ( together with wrongly annotated ones ) . the key idea of the former proposal was to take the best from these two tools . however , the first proposal was limited by name variations and spelling errors on the ncbi side , leading to core trees of low quality . in this paper , these flaws are fixed by improving the comparison of ncbi and dogma results , and by relaxing constraints on gene names while adding a stage of post - validation on gene sequences . the two stages of similarity measures , on names and sequences , are thus proposed for sequence clustering . this improves results that can be obtained using either ncbi or dogma alone . results obtained with this `` quality control test '' are further investigated and compared with previously released ones , on both computational and biological aspects , considering a set of 99 chloroplastic genomes . chloroplasts , clustering , quality control , methodology , pan genome , core genome , evolution
|
non - linear least squares fitting is an integral part of most astronomical analysis .the process embodies the fundamental process of hypothesis testing for a candidate model which may explain the data .there are several built - in fitting procedures packaged within the interactive data language ( idl ) product . unfortunately , the existing idl procedures are not very desirable from the perspective of astronomical data analysis .the built - in procedures curvefit and lmfit are somewhat unreliable , and do not always take advantage of idl s vectorization capability . because of these limitations , the author undertook to write a robust and functional least squares fitting code for idl .the work was based on translating the highly successful minpack-1 package written in fortran into idl , and building new functionality upon that framework .mpfit is basically a translation and enhancement of the minpack-1 software , originally developed by jose mor collaborators at argonne national laboratories .the code was written in fortran , and is available now from the netlib software repository .minpack-1 has the advantages that it is : * robust designed by numerical analysts with real data in mind * self - contained not dependent on a large external library * general capable of solving most non - linear equations * well - known one of the most - used libraries in optimization problems the original minpack-1 library contains two different versions , lmder and lmdif .both require the user function to compute the residual vector , , but lmder also requires the user to compute the jacobian matrix , , of the residuals as well ; lmdif estimates the jacobian via finite differences .the minpack algorithm solves the problem by linearizing it around the trial parameter set , , and solving for an improved parameter set , , via the least squares equation , the solution is obtained by factorization of , leading to improved numerical accuracy over the normal equations form .the standard levenberg - marquardt technique of replacing the first parenthesized term with , where is the levenberg marquardt parameter and is a diagonal scaling matrix , produces faster convergence .the solution is iterated until user - selected convergence criteria are achieved , based on the sum of squares and residual values .the translation to idl focused on preserving the quality of the original code , optimizing it for speed within idl , and adding functionality within the semantics of idl .the result of the translation is a single fitting engine , mpfit , which provides all of the original minpack-1 capability .this function is not specific to a particular problem , i.e. it can be used on data of arbitrary dimension or weighting .in addition to the generic fitting routines , several convenience routines have been developed that make mpfit useful in several specific problem domains : * mpfitfun , mpfit2dfun optimized for 1-d & 2-d functions ; * mpfitexpr for dynamically - created formulae , e.g. 
on the command line ; * mpcurvefit a drop - in replacement for the standard curvefit idl library routine , for users who need compatibility ; * mpfitpeak , mpfit2dpeak specialized for 1-d & 2-d peak fitting ; * mpfitellipse for fitting elliptical curves to x / y scatter points .the idl version can be found on the author s website ( see resources , sec .[ sec : resources ] ) .beyond the original minpack-1 code , mpfit contains several innovations which enhances its usefulness and convenience to the user , and also take advantage of the capabilities of idl . * private data .* the user can pass any private data safely to the user function as keyword variables via the functargs parameter .this helps to avoid the use of common block variables. * parameter constraints . *the notion of simple parameter boundary constraints is supported via the parinfo parameter .individually settable upper and lower limits are supported via limits . also , as a convenience , parameters can be held fixed , or tied to another parameter value .the total number of degrees of freedom is tracked , as well as the number of parameters pegged at their limits ( via the dof and npegged keywords ) . * jacobian calculations . * the user is free to supply explicit derivatives in their user function , or have mpfit calculate them numerically , depending on the autoderivative and parinfo.mpside settings .the method for calculation of derivatives ( step size and direction ) are settable on a per - parameter basis via the parinfo.step and .relstep settings . for user - calculated derivatives, the user can enable a debugging mode by setting parinfo.mpderiv_debug .* covariance matrix . *the capability to calculate the covariance matrix of the fit parameters is an improvement over the original published minpack-1 version . * hard - to - compute functions . * for functions that are difficult to compute within a single function call , mpfit can be requested to allow ` external ' evaluation .mpfit then returns control temporarily to the caller so that it can compute the function using external information and by whatever means , and then the caller re - calls mpfit to resume fitting .* iteration function .* after each iteration , a user procedure designated by iterproc may be called .the default procedure simply prints the parameter values , but a more advanced version may be used , for example for gui feedback. * error handling . *two error status parameters are provided . upon return ,status is set to a numerical status code suitable for automated response .errmsg is set to a descriptive error string to inform the human user of the problem .mpfit also traps common problems , like user - function errors and numerical over / under - flows .mpfit is provided with extensive documentation .the mpfit source code has reference - style documentation attached to the header of the source module itself .a basic tutorial is provided on the author s web page ( see sec . [sec : resources ] ) , which introduces the user to least squares fitting of a 1-d data set , and graduates to applying parameter constraints . also , a ` faq ' style web page gives users quick answers to common questions , such as which module to use , how to calculate important quantities , and troubleshooting techniques .examples of usage can be found on the author s website , and as a part of the code documentation itself . 
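to make the algorithmic core concrete , the following python fragment sketches a bare - bones damped gauss - newton ( levenberg - marquardt ) loop of the kind described above ; it uses the normal - equations form for brevity , whereas minpack-1 and mpfit solve the linearized system by qr factorization and steer the damping parameter with a trust - region strategy , and all names here are ours rather than mpfit s .

....
import numpy as np

def lm_fit(resid, jac, p0, lam=1e-3, n_iter=50):
    """Bare-bones Levenberg-Marquardt loop: resid(p) returns the residual
    vector r(p), jac(p) its Jacobian dr/dp.  Illustration only, not MPFIT."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(resid(p) ** 2)
    for _ in range(n_iter):
        r, J = resid(p), jac(p)
        JtJ = J.T @ J
        A = JtJ + lam * np.diag(np.diag(JtJ))      # Marquardt's diagonal scaling
        try:
            dp = np.linalg.solve(A, -J.T @ r)
        except np.linalg.LinAlgError:
            lam *= 10.0
            continue
        p_try = p + dp
        c_try = np.sum(resid(p_try) ** 2)
        if c_try < cost:                           # accept the step, relax damping
            p, cost, lam = p_try, c_try, lam / 10.0
        else:                                      # reject the step, increase damping
            lam *= 10.0
    return p
....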
as an example , consider a user that has a data set with independent variable x and dependent variable y ( with gaussian errors ey ) , and wants to fit as a function of f(x , p ) where p is an array of parameters . in this case, mpfitfun should be used to solve for the best fit parameters pbest with the following invocation , .... pbest = mpfitfun('f ' , x , y , ey , pstart , status = st , errmsg = err , $ bestnorm = chi2 , dof = dof , error = perror , covar = covar ) .... where pstart is an initial guess of the parameter values . upon return ,the best fit value and degrees of freedom are returned via the bestnorm and dof parameters .parameter errors and covariance matrix are returned in the error and covar parameters .error conditions are returned in status and errmsg .mpfit has been available for ten years from the author s web site , and as been downloaded several thousand times . during that time, the package has been continuously improved , both in terms of functionality , and in terms of fixing `` bugs . '' by its nature , idl code is `` open source , '' and at least ten users have contributed changes which have been incorporated into the main code base .mpfit is distributed with very liberal licensing constraints .the package has been acknowledged as helpful in a number of published works , including at least 29 refereed publications since 2001 ( including astrophysical journal , monthly notices and pasj ) , and in 102 preprints on the arxiv preprint server .interestingly , mpfit has also been translated into the python language , and is available in the scipy scientific package ( the interesting aspect is that the translation was based on the idl version and not the original fortran ) .the author has also create a c translation of mpfit , which has the benefit of speed and portability , along with many of the idl - based improvements .in addition to being used in scientific analysis , mpfit has also been incorporated into numerous standalone packages , for example pan ( `` peak analysis '' ) for neutron scattering spectroscopy , and pintofale for x - ray spectroscopy .* mpfit idl & c code : * mpfit python version : + * minpack-1 fortran web page : * minpack-1 pure c translation : mor , j. 1977 , `` the levenberg - marquardt algorithm : implementation and theory , '' in numerical analysis , vol . 630 , ed . g. a. watson ( springer - verlag : berlin ) , 105 mor , j. & wright , s. 1993 , optimization software guide , frontiers in applied mathematics , vol .14 , ( philadelphia , pa : siam )
|
mpfit is a port to idl of the non - linear least squares fitting program minpack-1 . mpfit inherits the robustness of the original fortran version of minpack-1 , but is optimized for performance and convenience in idl . in addition to the main fitting engine , mpfit , several specialized functions are provided to fit 1-d curves and 2-d images ; 1-d and 2-d peaks ; and interactive fitting from the idl command line . several constraints can be applied to model parameters , including fixed constraints , simple bounding constraints , and `` tying '' the value to another parameter . several data weighting methods are allowed , and the parameter covariance matrix is computed . extensive diagnostic capabilities are available during the fit , via a call - back subroutine , and after the fit is complete . several different forms of documentation are provided , including a tutorial , reference pages , and frequently asked questions . the package has been translated to c and python as well . the full idl and c packages can be found at .
|
in general , the interaction of a charged particle with a medium can be derived from the treatment of its electromagnetic interaction with that medium , where the interaction is mediated by a corresponding photon .the processes that occur are ionization , bremsstrahlung , cherenkov radiation , and , in case of inhomogeneous media , transition radiation ( tr ) .the latter process had been predicted by ginzburg and frank in 1946 .it was first observed in the optical domain by goldsmith and jelley in 1959 and further studied experimentally with electron beams of tens of kev .the relevance of this phenomenon for particle identification went unnoted until it was realized that , for highly - relativistic charged particles ( ) , the spectrum of the emitted radiation extends into the x - ray domain . while the emission probability for such an x - ray photon is small , its conversion leads to a large energy deposit compared to the average energy deposit via ionization .this led to the application of tr for particle identification at high momenta . since then many studies have been pursued , both at the level of the basic understanding of tr production as well as with regard to the applications in particle detection and identification .consequently , trds have been used and are currently being used or planned in a wide range of accelerator - based experiments , such as ua2 , zeus , na31 , phenix , helios , d , ktev , h1 , wa89 , nomad , hermes , hera - b , atlas , alice , cbm and in astro - particle and cosmic - ray experiments : wizard , heat , macro , ams , pamela , access . in these experimentsthe main purpose of the trd is the discrimination of electrons from hadrons , but pion identification has been performed at fermilab in a 250 gev hadron beam and identification has been achieved in a hyperon beam at cern .the subject of transition radiation and how it can be applied to particle identification has already been comprehensively reviewed in ref .an excellent concise review is given in .therefore , we restrict ourselves to a general description of the phenomenon and how trd is employed in particle identification detectors .we will then concentrate on more recent developments of trds and specific analysis techniques , in particular for the detectors at the cern large hadron collider ( lhc ) .the practical theory of tr production is extensively presented in references .extensions of the theory for non - relativistic particles are covered in . here, we briefly summarize the most important results for relativistic charged particles . the double differential energy spectrum radiated by a charged particle with a lorentz factor traversing an interface between two dielectric media ( with dielectric constants and ) has the following expression : =( - ) ^2 which holds for : . , where is the ( electron ) plasma frequency for the two media and is the fine structure constant ( =1/137 ) .the plasma frequency is a material property and can be calculated as follows : _p = 28.8 where is the electron density of the medium and is the electron mass . 
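for reference , the standard textbook forms of the expressions referred to above , in the usual notation of the tr literature , are reproduced below ; they are quoted from the general literature rather than verbatim from this text .

....
% hedged reconstruction of the standard single-interface expressions
% (usual TR-literature notation; not a verbatim copy of this article):
\frac{d^2 W}{d\omega\, d\Omega} = \frac{\alpha}{\pi^2}
  \left( \frac{\theta}{\gamma^{-2}+\theta^{2}+\xi_1^{2}}
       - \frac{\theta}{\gamma^{-2}+\theta^{2}+\xi_2^{2}} \right)^{2},
\qquad \xi_i \equiv \frac{\omega_{p,i}}{\omega},

\left.\frac{dW}{d\omega}\right|_{\mathrm{interface}} = \frac{\alpha}{\pi}
  \left[ \frac{\xi_1^{2}+\xi_2^{2}+2\gamma^{-2}}{\xi_1^{2}-\xi_2^{2}}
         \ln\frac{\gamma^{-2}+\xi_1^{2}}{\gamma^{-2}+\xi_2^{2}} - 2 \right],

\hbar\omega_{p} \simeq 28.8\,\sqrt{\rho\, Z/A}\ \mathrm{eV}
  \qquad (\rho\ \mathrm{in\ g/cm^{3}}).
....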
in the approximation is the density in and is the average charge to mass ratio of the material .typical values for plasma frequencies are =20.6 ev , =0.7 ev .since the emission angle of the tr is small ( ) one usually integrates over the solid angle to obtain the differential energy spectrum : ( ) _ interface= ( -2 ) a single foil has two interfaces to the surrounding medium at which the index of refraction changes .therefore , one needs to sum up the contributions from both interfaces of the foil to the surrounding medium .this leads to : ( ) _ foil= ( ) _ interface 4 ^ 2(_1/2 ) where is the interference factor .the phase is related to the formation length ( see below ) and the thickness of the respective medium , i.e. . following the arguments in ref . the average amplitude modulation is .the above spectra are shown in fig .[ f : two ] for one interface of a single mylar foil ( 25 ) in air ( using the same parameters as in ref . ) .absorption of tr in the material of the radiator has not been considered in the above .the effective tr yield , measured at the exit of the radiator , is strongly suppressed by absorption for energies below a few kev ( see also ) , see fig .[ f : tr_dep ] below . as shown above the emission probability for a tr photon in the plateau regionis of order per interface . for this to lead to a significant particle discrimination one needs to realize many of theses interfaces in a single radiator . for a stack of foils of thickness , separated by a medium ( usually a gas ) of thickness ,the double differential energy spectrum is : ( ) _ stack= ( ) _ foil ( ) where is the phase retardation , with , and is the absorption cross section for the radiator materials ( foil + gas ) . due to the large absorption cross section below a few kev , low - energy tr photonsare mostly absorbed by the radiator itself .the tr produced by a multi - foil radiator can be characterized by the following qualitative features : * one can define the so - called `` formation zone '' z_i=. this can be interpreted as the distance beyond which the electromagnetic field of the charged particle has readjusted and the emitted photon is separated from the field of the parent particle . the formation zone depends on the charged particle s , on the tr photon energy and is of the order of a few tens of microns for the foil and a few hundreds of microns for air .the yield is suppressed if , which is referred to as the _ formation zone effect_. + for constructive interference one gets : ( ) _ foil=2()_interface ; ( ) _ stack = n_f()_foil . *the tr spectrum has its most relevant maximum at _[ eq : omega_max ] , which can be used to `` tune '' the trd to the most relevant absorption cross section of the detector by varying the material and thickness of the radiator foils . * for the tr spectrumis mainly determined by the single foil interference . *the multiple foil interference governs the saturation at high , above a value of _s=. 25 m ) ( right ) . ] in general , tr generated by irregular radiators can be calculated following prescriptions discussed in ref .however , for all practical purposes this procedure is limited to the treatment of irregularities in the materials and tolerances from the fabrication of otherwise regularly spaced radiators . for materials like foam or fibers ( used e.g. by hermes , atlas in the central barrel , alice and ams ) as shown in fig .[ f : rads ] this procedure is impractical . 
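a short python implementation of the practical formula above ( eqs . [ tr1]-[tr2 ] ) , useful for reproducing qualitative trends such as those in fig . [ f : tr_dep ] , is sketched below ; the default radiator parameters follow the baseline configuration used later in the text , but the overall normalisation and the treatment of absorption are our reading of the formula and should be checked against the original references before any quantitative use .

....
import numpy as np

ALPHA = 1.0 / 137.036           # fine structure constant
HBARC = 1.9732697e-5            # eV*cm, so omega/c [1/cm] = E[eV] / HBARC

def tr_spectrum_stack(E_keV, gamma, l1=15e-4, l2=300e-4, Nf=100,
                      wp1=20.6, wp2=0.7, mu1=None, mu2=None, n_max=200):
    """dW/domega at photon energy E_keV for a regular stack of Nf foils.
    l1, l2: foil thickness and gap in cm; wp1, wp2: plasma energies in eV
    (defaults: polymer foil / air gap).  Absorption is ignored unless the
    linear attenuation coefficients mu1, mu2 [1/cm] are supplied."""
    E = np.atleast_1d(np.asarray(E_keV, dtype=float)) * 1e3     # eV
    xi1, xi2 = wp1 / E, wp2 / E
    k = E / HBARC                                               # omega/c in 1/cm
    rho1 = 0.5 * k * l1 * (gamma ** -2 + xi1 ** 2)
    rho2 = 0.5 * k * l1 * (gamma ** -2 + xi2 ** 2)
    tau = l2 / l1
    spectrum = np.zeros_like(E)
    for n in range(1, n_max + 1):                               # interference maxima
        theta_n = np.maximum((2 * np.pi * n - (rho1 + tau * rho2)) / (1 + tau), 0.0)
        spectrum += theta_n * (1 / (rho1 + theta_n) - 1 / (rho2 + theta_n)) ** 2 \
                    * (1 - np.cos(rho1 + theta_n))
    if mu1 is None:                                             # no-absorption limit
        prefactor = 4 * ALPHA * Nf / (1 + tau)
    else:
        sigma = mu1 * l1 + mu2 * l2                             # optical depth per period
        prefactor = 4 * ALPHA / (sigma * (1 + tau)) * (1 - np.exp(-Nf * sigma))
    return prefactor * spectrum
....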
here , the measured response is simulated in terms of a regularly spaced radiator with comparable performance or by applying an overall efficiency factor .( upper panel , corresponding to an electron momentum of 0.2 , 0.5 , 1 and 2 gev / c ) , foil thickness ( middle panel ) and foil spacing ( lower panel ) . ] in ref . tr has been studied for different ( foil ) radiator configurations .the interference pattern discussed above has been demonstrated by cherry et al . and by fabjan and struczinkski , who also verified the expected dependence of the highest energy interference maximum on the foil thickness , eq .[ eq : omega_max ] . in ref . a slightly simpler expression for the tr production has been proposed : = ( 1-(-n_f ) ) _n_n(- ) ^2 [ 1-(_1+_n ) ] [ tr1 ] where : _i = l_1/2c(^-2+_i^2 ) , = l_2/l_1 , _ n=>0 [ tr2 ] in the following we utilize this formula to show how the tr yield and spectrum ( at the exit of the radiator ) depend on the lorentz factor of the incident charged particle , as well as on the foil thickness ( ) and spacing ( ) for a regular radiator of =100 foils .these basic features of tr production are illustrated in fig .[ f : tr_dep ] .the threshold - like behavior of tr production as a function of is evident , with the onset of tr production around .the yield saturates quickly with ( formation length for ch is about 7 m ) , the average tr energy is proportional to , eq .[ eq : omega_max ] .taking into account absorption of tr photons in the foils leads to an optimum of foil thickness in the range 15 - 20 m ( dependent also on the thickness of the detector ) .the tr yield is proportional to for gap values of a few hundred m , saturating slowly with , as the formation length for air is about 700 m ; the spectrum is slightly harder for larger gap values . due tothe dependence of tr on , it is evident that there is a wide momentum range ( 1100 gev / c ) where electrons ( resp .positrons ) are the only particles producing transition radiation .kaons can also be separated from pions on the basis of tr in a certain momentum range ( roughly 200700 gev / c ) and identification in a hyperon beam has been done as well .having introduced the main features of tr production above , we shall now focus on its usage for particle identification in high - energy nuclear and ( astro-)particle experiments .we outline the main characteristics , design considerations and optimization for a trd , based on simulations .an obvious choice to detect transition radiation is a gaseous detector .a proposal to use silicon detectors in a trd has also been put forward and tr detection with crystals has been proposed too , see below for more details .affordability for large - area coverage , usually needed in ( accelerator ) experiments , is a major criterion .in addition , a lightweight construction make gaseous detectors a widespread solution for trds .most of the trd implementations are based on multiwire drift chambers , but straw tubes have been used too , for example in the nomad , hera - b atlas , pamela and ams detectors .we will describe the detector realization in the examples covered in section [ sect : modern_trds ] .see e.g. for all the important details concerning drift chambers principles and operation . for gaseous detectors we present the absorption length vs. 
tr energy in fig .[ f : tr_gas ] for ar , kr and xe .obviously , the best detection efficiency is reached using the heaviest gas , xe , which has an absorption length around 10 mm for `` typical '' tr photon energies in the range of 3 - 15 kev ( produced by a radiator of typical characteristics , =10 - 20 m , =100 - 300 m , see fig . [f : tr_dep ] ) .the electron identification is further enhanced by the `` favorable '' ionization energy loss , in xe , which has the highest value of the fermi plateau of all noble gases .( upper panel ) , and total tr energy , ( middle panel ) , as a function of electron momentum . in the lower panelwe show for comparison the average ionization energy deposit , , for pions and electrons . ] in fig .[ f : tr_det ] we consider a detector with a gas volume of 1 cm thickness and show its tr detection capability as a function of momentum . on average ,about 2/3 of the number of produced tr photons ( employing a radiator with =15 m , =300 m , =100 , which will be our baseline choice in the following ) are detected in such a detector , filled with a mixture xe - co [ 85 - 15 ] .about half the total produced tr energy , which is the sum over all detected tr photons for an electron of a given momentum , is detected . for the chosen configuration , on average the signal from tr is comparable to the ionization energy deposit , , also shown in fig .[ f : tr_det ] .it is important to emphasize that , due to the very small tr emission angle , the tr signal generated in a detector is overlapping with the ionization due to the specific energy loss and a knowledge ( and proper simulation ) of de / dx ( see also section [ sect : alice ] ) is a necessity for the ultimate understanding and modeling of any trd .the energy deposit spectra of pions and electrons in a xe - based detector are presented in fig .[ f : like ] ( left panel ) . for pions it representsthe energy loss in the gas and is close to a landau distribution . for electrons , it is the sum of the ionization energy loss and the signal produced by the absorption of the tr photons .[ cols="<,^ , > " , ] using the above position and angular resolutions the stand - alone tracking resolution of the trd was estimated in simulations for different momenta as a function of multiplicity density . for momenta below 2 gev/ c the stand - alone momentum resolution of the trd is around with little dependence on the multiplicity . through the inclusion of the trd into the tracking in the central barrelan overall momentum resolution around 3% can be obtained up to momenta of about 90 gev / c .a great variety of trds were employed for fixed - target high - energy experiments .we discuss here , briefly , the trd of the hermes experiment at hera and that of the proposed cbm experiment at the future fair facility . the trd of the hermes experiment employed random fiber radiators of 6.35 cm thickness ( corresponding on average to 267 dielectric layers ) and proportional wire chambers of 2.54 cm thickness , filled with xe - ch .the trd consisted of two arms , each with 6 radiator - detector layers flushed with co in between .as a consequence of a rather thick radiator , the pion rejection factor achieved with a truncated mean method was 130 for a momentum of 5 gev / c and 150 averaged over all measured momenta , for an electron efficiency of 90% . 
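schematically , the likelihood method quoted above combines the single - layer energy deposits of a track into one electron likelihood and then fixes the cut from the desired electron efficiency ; a python sketch , with names and interfaces of our choosing , is :

....
import numpy as np

def electron_likelihood(dE_layers, pdf_e, pdf_pi):
    """Combined likelihood that a track is an electron, given the energy
    deposits measured in the individual TRD layers.  pdf_e / pdf_pi are the
    normalised single-layer deposit distributions for electrons and pions,
    e.g. taken from test-beam reference spectra."""
    pe = np.prod([pdf_e(d) for d in dE_layers])
    ppi = np.prod([pdf_pi(d) for d in dE_layers])
    return pe / (pe + ppi)

def pion_efficiency(L_e, L_pi, electron_eff=0.90):
    """Fraction of pions passing the cut that keeps `electron_eff` of electrons;
    the pion rejection factor is its inverse."""
    cut = np.quantile(np.asarray(L_e), 1.0 - electron_eff)
    return float(np.mean(np.asarray(L_pi) >= cut))
....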
using a likelihood method ,the pion rejection factor averaged over all measured momenta was determines to be 1460 , decreasing to 489 for an electron efficiency of 95% .the trd of the cbm ( compressed baryonic matter ) experiment at the planned fair accelerator facility at gsi is aimed to provide electron identification and charged particle tracking .the required pion suppression is a factor of about 100 and the position resolution has to be of the order of 200 - 300 m . in order to fulfill these tasks , in the context of the high rates and high particle multiplicities in cbm, a careful optimization of the detector is required .currently , the whole detector is envisaged to be subdivided into three stations , positioned at distances of 4 , 6 and 8 m from the target , each one of them composed of at least three layers .because of the high rate environment expected in the cbm experiment ( interaction rates of up to 10 mhz ) , a fast readout detector has to be used . to ensure the speed and also to minimize possible space charge effects expected at high rates , it is clear that the detector has to have a thickness of less than 1 cm .two solutions exist for such a detector : a multiwire proportional chamber ( mwpc ) with pad readout or straw tubes .while both had been investigated at the earlier stage of the detector design , the mwpc solution is currently favored .a novel concept of a `` double - sided '' mwpc had been tested in prototypes and is a strong candidate for the inner part of the detector .this detector design provides twice the thickness of the gas volume , while keeping the charge collection time to that of a single mwpc . for the radiatorboth possibilities , regular and irregular , are under consideration .the final choice of the radiator type for the cbm trd will be established after the completion of prototypes tests .measurements with prototypes , both in beam and with x - ray sources demonstrate that the detector can handle the design rates .the main characteristics of the trd are : i ) cell sizes : 1 - 10 ( depending on the polar angle , tuned for the occupancy to remain below 10% ) ; ii ) material budget : - 20% ; iii ) rates : up to 100 khz/ ; iv ) doses ( charged particles ) : up to 16 krad / year , corresponding to 26 - 40 mc / cm / year charge on the anode wires . for a classical mwpc - type trd with the envisaged 9 - 12 layers ,the total area of detectors is in the range 485 - 646 m .the total number of electronic channels is projected to between 562 and 749 thousand .a recent review of trds for astro - particle instruments is given in .in general , both balloon and space experiments lead to compact design requirements . 
for short - term balloon flights , like the wizard/ ts93 and heat experiments , the main challenge is the rather strong variation of temperature and pressure during the flight , which require significant corrections of the measured detector signals .the requirements imposed by the long - term operation of a trd in space as envisaged for the ams experiment , lead to challenging aspects of its operation without maintenance .the mechanical requirements arising from vibrations during the launch demand special design and laboratory qualifications .the trd of the wizard / ts93 experiment weighs about 240 kg and covers an active area of 76 .ten layers of carbon fiber radiators of 5 cm thickness and 1.6 cm - thick proportional wire chambers filled with xe - ch give a total of 2560 electronics readout channels .a pion contamination at the sub - percent level has been achieved in testbeam measurements .the trd of the heat experiment is composed of six layers of polyethylene fiber radiators ( 12.7 cm thickness ) and 2 cm - thick proportional wire chambers operated with xe - ch .proton rejection factors around 100 were achieved for 90% electron efficiency for 10 gev / c momentum .the trd designed for the pamela experiment is composed of a total of 1024 straw - tube detectors of 28 cm length and 4 mm diameter , filled with xe - co mixture and arranged in 9 layers interleaved with radiators of carbon fibers .pion rejection factors around 20 for 90% electron efficiency were measured in testbeams at momenta of few gev / c .the trd of the ams experiment , which was recently installed on the international space station ( iss ) , has an envisaged operational duration of about three years .the trd will contribute to the ams required proton rejection factor of about 10 , necessary for the study of positron spectra planned with ams .the detection elements are 5248 straw tubes of 6 mm diameter , arranged in modules of 16 straws each , with a length of up to 2 m. 
the straws , with 30 m gold - plated tungsten anode wires , are operated at 1350 v , corresponding to a gas gain of 3000 .the radiator is a 2 cm thick polypropylene fleece .special cleaning of the radiator material is required to meet the outgassing limits imposed by nasa .special tightness requirement for the straw tubes are imposed by the limited supply of detector gas ( the ams trd has a gas volume of 230 liters ) .the spectra of energy deposition in a single straw of the ams trd for protons , pions , muons and electrons obtained in beamtest measurements are shown in fig .[ f : ams1 ] .a very good description of the measurements has been achieved with modified geant3 simulations .the excellent proton rejection performance achieved in testbeams with a full 20 layer prototype for the ams trd is shown in fig .[ f : ams2 ] .a neural network method has been used , delivering a proton rejection factor ( at 90% electron efficiency ) between 1000 and 100 for momenta between 15 and 250 gev / c .the trd technique offers a unique opportunity for electron separation with respect to hadrons in a wide momentum range from 1 to 100 gev / c .the separation between pions , kaons and protons ( or heavier hadrons ) is possible in well defined windows of momenta .we have presented a survey of the transition radiation detectors employed in accelerator and space experiments , with special emphasis on the two large detectors presently operated in the lhc experiments , the atlas trt and the alice trd .building on a long series of dedicated measurements and on various implementation of trds in complex experimental particle physics setups , these two particular trd systems are challenging in their scale and required performance , both for tracking and electron identification .the alice trd provides in addition fast triggering capability .they also illustrate two complementary approaches , dictated by their respective requirements : the atlas trt being a very fast detector , with moderate granularity , perfectly suited for operation in high - rate pp collisions , while the alice trd is a slower detector with very good granularity , optimized for pb+pb collisions . with data taking at the lhc now in full swing, the evaluation of the performance of these two systems , which is already well underway , will serve as a solid basis for the design of trds for future high - energy ( astro-)particle and nuclear physics experiments .ginzburg and i.m .frank , zh . eksp .teor . fiz .* 16 * , 15 ( 1946 ) .p. goldsmith and j.v .jelley , phil . mag . * 4 * , 836 ( 1959 ) .h. boersch , c. radeloff , and g. sauerbrey , phys .* 7 * , 52 ( 1961 ) ; + a.l .frank , e.t .arakawa , r.d .birkhoff , phys .126 * , 1947 ( 1962 ) .garibian , l.a .gevorgyan , c. yang , sov .jetp * 39 * , 265 ( 1974 ) ; + g.m .garibian , m.l .cherry , d. m " uller , t.a .ter - mikaelian , high energy electromagnetic processes in condensed media , wiley - interscience , new york , 1972 .cherry et al . , phys .d * 10 * , 3594 ( 1974 ) .x. artru et al .d * 12 * , 1289 ( 1975 ) .l. durand , phys .d * 11 * , 89 ( 1975 ) .a. hirose , rad .chem . * 64 * , 261 ( 2002 ) .fabjan , w. struczinkski , phys .b * 57 * , 483 ( 1975 ) .c. camps et al .prince et al . ,j. cobb et al .fabjan et al . , m.l .cherry et al . , phys .d * 17 * , 2245 ( 1978 ) c.w .fabjan et al . , a. b " ungener et al . , r. ansari et al . , r.d .appuhn et al . , g.d .barr et al . , e. obrien et al . , e. obrien et al . , nucl .a * 566 * , 615 ( 1993 ) b. dolgoshein , j .- f .detoeuf et al ., h. 
piekarz , g.e .graham et al . ,h. gr " assler et al . , g.a .beck et al . ,w. br " uckner et al . , g. bassompierre et al . ,k. ackerstaff et al . , v. saveliev , g. aad et al .( atlas collaboration ) , journal of instrumentation , jinst * 3 * , s08003 ( 2008 ) , see the trt sections on pages 68 .+ t. akesson et al . , r. belotti et al ., astropart . phys .* 7 * , 219 ( 1997 ) .barwick et al . , m. ambrosio et al . , infn / ae-97/04 ( 1997 ) p. v.doetinchem et al . , m. ambriola et al . ,case , p.p .altice , m.l .cherry , j. isbert , d. patterson , and j.w .mitchell , m.l .cherry , g.l .case , d. errede et al ., h .- j .butt et al ., y. watase et al ., m. holder and h. suhr , r.d .appuhn et al . , g. bassompierre et al . , a. andronic et al .( alice collaboration ) , u. egede , phd thesis , university of lund , 1998 , isbn 91 - 628 - 2804 - 5 .b. dolgoshein et al . , w. blum , w. riegler and l. rolandi , particle detection with drift chambers , springer - verlag , 2008 .a. andronic et al .( alice collaboration ) , t. akesson et al .( atlas collaboration ) , atl - indet-2000 - 021 a. andronic ( alice collaboration ) , r. belotti , m. castellano , c. de marzo , g. pasquariello , g. satalino , and p. spinelli , c. adler et al .( alice collaboration ) , alice collaboration , j. phys .phys . * 32 * , 1295 ( 2006 ) ; see the trd chapter on page 1401 .g. aad et al . , jinst * 3 * , s08003 ( 2008 ) .e. abat et al ., jinst * 3 * , p02014 ( 2008 ) .e. abat et al ., jinst * 3 * , p10003 ( 2008 ) .e. abat et al ., jinst * 3 * , p02013 ( 2008 ) .e. abat et al ., jinst * 3 * , p06007 ( 2008 ) . t. akesson et al . , ieee nucl .conf . rec .* 1 * , 549 ( 2002 ) .alice transition radiation detector technical design report , alice tdr 9 , cern / lhcc 2001 - 021 .c. adler et al . , fair , http://www.gsi.de/fair/ m. petrovici et al ., cbm collaboration , progress report 2006 , p. 33 - 38 , http://www.gsi.de/documents/doc-2007-mar-137-1.pdf a. andronic , c. garabatos , d. gonzalez - diaz , a. kalweit , f. uhlig , jinst * 4 * , p10014 ( 2009 ) [ arxiv:0909.0242 ] .kirn et al . , talks presented at the 4th workshop on advanced transition radiation detectors , 14 - 16 september , bari ( italy ) , available at http://agenda.infn.it/conferencedisplay.py?confid=3468 + see also , e. hines ( atlas ) , arxiv:1109.5925 .
|
we review the basic features of transition radiation and how they are used for the design of modern transition radiation detectors (trd). the discussion includes the various realizations of radiators as well as the detection media and aspects of detector construction. with regard to particle identification, we assess the different methods for efficient discrimination between particle species and outline how this performance is quantified. since a number of comprehensive reviews already exist, we predominantly focus on the detectors currently operated at the lhc. to a lesser extent we also cover some other trds, which are planned or are currently being operated in balloon or space-borne astro-particle physics experiments.
|
set reconciliation occurs naturally . for example , routers may need to reconcile their routing tables and files on mobile devices may need to be synchronized with those in the cloud . the reconciliation problem is to find the set differences between two distributed sets . here, the set difference for a host is defined as the set of elements that the host has but the other host does not .once two hosts can find their respective set differences , each can use the information to solve the reconciliation problem by adding its difference set to the other or removing it from its own set to reconcile the two sets to their union or intersection , respectively . in this paper , for presentation simplicity , we consider a simpler case that a host just reconcile its set to the same as the set that the other host currently possesses .we describe the problem we wish to solve in mathematical notation .suppose that there are two hosts , and , which possess two sets , and , respectively .the elements of and are from a set .the difference sets for and are and , respectively .for example , if has and b has , then we have and .we denote the size of a set by . to ease the presentation , we assume throughout the paper that , and for some positive integer .the method proposed in this paper can be naturally extended to the case of by simply increasing the space allocation from to ( described in sec .[ sec : cs - iblt ] ) . in the reconciliation problem ,the two hosts wish to reconcile their sets , by making them identical .for example , can update by adding elements in to and removing elements in from .this means , in the above example , once knows and , performs the operation of .consequently , the reconciliation is accomplished . in solving the reconciliation problem ,we are mainly concerned with the communication cost , the number of elements required to be transmitted between the two hosts .a straightforward method of solving the reconciliation problem is that host sends his entire set to host .after that , can check and identify the set differences between and .obviously , the communication cost for this method is .a more efficient but probabilistic method is to utilize bloom filter .more specifically , host constructs a bloom filter by inserting the elements in to the bloom filter and then sending the bloom filter to . with the received bloom filter, can check if the elements in is in the filter and thus can identify with some probability that not all these elements are identified due to hash table collisions in the bloom filter .similar queries made for the remaining elements in can be used to identify with some probability that extra elements are identified due to hash table collisions in the bloom filter . to lower false identifications , the size of bloom filter needs to be proportional to .therefore , the communication cost of this bloom filter approach is still asymptotically the same as the straightforward method .minsky _ et al_. developed a characteristic polynomial method . in this method , sends several evaluated values of the characteristic polynomial to , where is defined as with s being elements in .host does similar evaluation based on its own characteristic polynomial . by _rational interpolation _ , can derive and thus recover the set differences based on s and s evaluated values . here ,given pairs of , rational interpolation is to find a satisfying for each pair , where the polynomials and are of degrees and , respectively . observe that . 
sends evaluated values of to , and calculates the value of at each predetermined evaluation point . once can be recovered from the evaluated values of , the set differences can be obtained by finding the roots of and . a concrete example in shows how this characteristic polynomial method works .suppose that , , the prior knowledge about is available , the evaluation points have been predetermined , and a proper finite field has been chosen .under such conditions , and can be formulated as and , respectively .the evaluations of and at four evaluation points are , but we omit the detail in this paper . ] and over , respectively .the values of are therefore . from rational interpolations perspective , the value corresponds to the size set differences and corresponds to of size .the interpolated , where the roots of numerator are and and the root of denominator is , can be used to derive the set differences between and .an issue in this reconciliation case is that only the size of set differences , instead of the individual and , is known and so rational interpolation can not be applied directly .nevertheless , a formula is given in to the estimates of and based only on the size of set differences . despite its algebraic computation over finite fields , a notable feature of this method is that the communication cost is only dependent on , instead of , due to the use of interpolation .very recently , goodrich and mitzenmacher developed a data structure , called invertible bloom lookup table ( iblt ) , to address the reconciliation problem .iblt can be thought of as a variant of counting bloom filter with the property that the elements inserted to bloom filter can be extracted even under collision . with the use of iblt, the reconciliation problem can be solved in approximately communication cost under the assumption that is known in advance .the aforementioned straightforward method and bloom filter approach incur a large amount of communication cost when is of large size . on the other hand , characteristic polynomial method and ibltare efficient only when prior knowledge about is available . without this prior knowledge, the computation overhead of the characteristic polynomial method can be as large as .iblt need to be repeatedly applied with progressively increasing , incurring a wasted communication cost which can be as large as .we propose an algorithm , called cs - iblt , which is a novel combination of compressed sensing ( cs ) and iblt , enabling the reconciliation problem to be solved with communication cost even without prior knowledge about .a distinguished feature of cs - iblt is that the number of transmitted messages changes with adapt to the value of , instead of the conventional wisdom that the correct must be estimated first .notably , this adaptive feature is attributed to the use of cs .first , we briefly review compressed sensing ( cs ) and invertible bloom lookup table ( iblt ) in sec . [sec : compressed sensing ] and sec .[ sec : invertible bloom lookup table ] , respectively .then , we describe our proposed cs - iblt algorithm in sec . 
[sec : cs - iblt ] .we provide analysis and comparison between iblt and cs - iblt in secs .[ sec : analysis ] and [ sec : comparison ] .suppose that is a -sparse vector of length with .that is , only nonzero components can be found in .a standard compressed sensing ( cs ) formulation is , where and , with , are called measurement vector and measurement matrix , respectively .cs states that if is a random matrix satisfying the restricted isometry property and is greater than for some constant , then can be reconstructed based on with high probability .the vector can be reconstructed by -minimization as follows : an invertible bloom lookup table ( iblt ) is composed of a array , , with hash functions , , , .it supports three operations , insert , delete , and list - entries .suppose that is a numeric value . to insert an element with the insert operation , ] is increased by , for all .the deletion of an element with the delete operation is operated by decreasing ] by .the second column of iblt can be treated as a counting bloom filter .list - entries is used to dump all elements currently stored in iblt .it works by searching for the position where =1 ] is listed and operation delete( ] . by cs recovery on ^t ] separatively . because the entries in and are assumed to be integers , quantization is applied to the recovered result .suppose that obtains a recovery result after -minimization is applied to ^t$ ] . then proceeds to the list - entries operation on and checks whether the list - entries operation succeeds or not .if the list - entries operation succeeds , sends a positive acknowledgment meaning `` stop sending more measurements '' to , and host b reconciles with , with the and extracted from .if the list - entries operation fails , waits for the next measurement and again performs the above operations on through .the above setting and procedures remain the same in the case of except that and of length at most are needed instead .note that corresponds to the extreme case of .figure [ fig : cs - iblt ] illustrates how cs - iblt works . hosts and possess and , respectively . in the following , we omit the second column of iblt in our cs - iblt algorithm for representation simplicity .that is , we omit the counting bloom filter part .observe that , , and .note that because of , iblts are of length .this corresponds to the requirement in sec .[ sec : cs - iblt ] that iblts of length need to be allocated .suppose that hash functions are used in the iblt in cs - iblt . and are derived according to the hash positions and then is calculated . with cs - iblt, only needs to send the first entries in to .that is , only six entries of are sufficient for to exactly recover the . from the recovered , , we can extract and according to the iblt principles in sec .[ sec : invertible bloom lookup table ] . based on the rule described in sec .[ sec : analysis ] , knows that , .the following is the key relationship behind our proposed cs - iblt algorithm is : the cs recovery based on can generate an approximation of .when the number of measurements is sufficient in the cs recovery , is nearly identical to .based on the principles of iblt construction , can be thought of as an iblt with elements in and in , where is defined as the set .thus , first lists all the elements in .those positive elements are categorized as and those negative ones are categorized as . 
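before turning to the case where too few measurements have been received, a minimal python sketch of the iblt operations used above may be helpful. it keeps only a count field and a key-sum field per cell and omits the key-check field of the original proposal, so it illustrates the principle (insert, delete, and peeling-based list-entries, with positive counts corresponding to one host's difference set and negative counts to the other's) rather than reproducing the authors' data structure; the hash construction and the toy sets are likewise assumptions.

import hashlib

class TinyIBLT:
    """illustrative iblt with count and key-sum fields only (no key-check field)."""
    def __init__(self, m, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.keysum = [0] * m

    def _cells(self, x):
        # k toy hash positions derived from sha-256; any k independent hashes would do
        return [int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def insert(self, x):
        for c in self._cells(x):
            self.count[c] += 1
            self.keysum[c] += x

    def delete(self, x):
        for c in self._cells(x):
            self.count[c] -= 1
            self.keysum[c] -= x

    def list_entries(self):
        """peel cells with count +/-1; returns (success, only-in-a, only-in-b)."""
        added, removed = [], []
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] == 1:
                    x = self.keysum[c]
                    added.append(x)
                    self.delete(x)      # remove it so that further cells become pure
                    progress = True
                elif self.count[c] == -1:
                    x = -self.keysum[c]
                    removed.append(x)
                    self.insert(x)      # cancel the negative contribution
                    progress = True
        success = all(v == 0 for v in self.count)
        return success, added, removed

# host a inserts its set and deletes host b's set: only the differences survive
t = TinyIBLT(m=40)
set_a = {1, 2, 3, 4, 10, 11}
set_b = {3, 4, 10, 11, 42}
for x in set_a:
    t.insert(x)
for x in set_b:
    t.delete(x)
print(t.list_entries())   # expected (with high probability): (True, [1, 2], [42])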
on the other hand , when the number of measurements is insufficient for the exact recovery of .that is , is significantly deviated from , will be aware of this failed recovery because after the list - entries operation is applied to such , the list - entries operation fails with high probability . note that the reconstructed array behaves like a random one when an insufficient number of measurements is used .the list - entries operation is unlikely to be successful on a random array .therefore , the decoding procedure will proceed with high probability until is achieved .the number of measurements required to recover determines the communication cost of cs - iblt . recall that we are interested in recovering from , and the theory of cs states that the number of required measurements can be as small as , where is the number of nonzero entries in the vector to be recovered .observe that the iblt , , is constructed by adding elements in and removing elements in .based on the iblt principles in sec .[ sec : invertible bloom lookup table ] , the elements commonly shared between and , which are the elements in , will be eliminated and only the elements in the set difference remain in . recall that measurements are needed for accurate cs recovery , where is the number of nonzero elements .thus , as the vector to be recovered is with at most nonzero entries , measurements are sufficient for the cs recovery , where and denote the number of hash functions used in iblt and the inherent size of set differences , respectively .as reported in , the length of iblt with elements should be at least to ensure the successful execution of the list - entries operation in the case of .however , the value of is estimated based on an inherent assumption that the inserted elements are all positive .based on the iblt principles in sec .[ sec : invertible bloom lookup table ] , can be regarded as an iblt with elements of and .since there could be some negative elements in and , we suggest to use , rather than , according to our empirical experience . in the case that prior knowledge about is unavailable, the use of iblt incurs a large amount of wasted communication . in particular , a reasonably first guess is , and host sends iblt of size to . if the real is smaller then , can obtain and successfully .essentially , communications are sufficient for finding the set differences and this means that we incur unnecessary communication cost which can be as large as .this extreme case occurs when .if the real is greater than , then the list - entries operation will be failed , and keeps waiting for the subsequent measurements from .this time , adopts a binary search - like approach to progressively have next .afterwards , hosts and repeat the above procedures until can empty . in the extreme case of , communication costis required .this performance is even worse than that of straightforward method in which is sent to directly . on the other hand , in the case of ,if cs - iblt is used , since the array is very sparse ( approximately only nonzero entries ) , only a very small number of measurements are needed . in the case of , measurements are sufficient for the cs recovery in cs - iblt .such communication cost occurs when all of the rows of are transmitted .in this section we demonstrate and compare the performance of iblt and cs - iblt via numerical experiments . figure [ fig : communication cost ]compares the performance of both methods under the assumption that prior knowledge about is not available . 
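the decoding step behind these comparisons can be sketched in a few lines: the difference array stored in the iblt is sparse, so it can be recovered from a small number of random projections by l1 minimization, posed here as a linear program. the sizes, the gaussian measurement matrix, the use of scipy's linprog and the final rounding to integers are illustrative choices consistent with the description above, not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, s = 60, 4                       # array length and number of nonzero entries
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.integers(1, 50, s)   # sparse, integer-valued

m = 6 * s                          # a handful of measurements suffices for a sparse x
phi = rng.normal(size=(m, n))      # gaussian measurement matrix
y = phi @ x                        # the measurements sent one by one in cs-iblt

# min ||x||_1 subject to phi x = y, written with x = u - v, u >= 0, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([phi, -phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = np.round(res.x[:n] - res.x[n:])   # quantize to integers, as described above

print("exact recovery:", np.array_equal(x_hat, x))   # true with high probability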
in these experiments , hash functions are used in both iblt and cs - iblt . in cs - iblt ,the random measurement matrix is gaussian distributed . in figure[ fig : k2n200 ] , and is varied from to .one can see in figure [ fig : k2n200 ] that communication cost of cs - iblt increases as increases due to the fact that the larger implies more nonzero entries in .in essence , the procedures in cs - iblt here are roughly like applying cs measurement matrix to a -sparse array and then deriving the cs recovered array . on the other hand , in iblt , because no prior knowledge about can be used , the guessed , , is used initially .this choice of enables to decode the received iblt , resulting in a flat curve from to .similar observations can be made in figure [ fig : k2n1000 ] .cs - iblt shows its main advantage when is relatively small and large . in the case of small , the overestimated incurs unnecessary communication but different measurements are adaptively transmitted one by one in cs - iblt .the sending stops immediately after the successful recovery of . in the case of large , several underestimated in iblt incurs useless communication but because of its adaptive property , even in the worst case , measurements can enable the successful recovery of .cs - iblt is inferior to iblt only in the case of moderate , which means that the initially guessed , , is pretty close to the real .the rationale behind this is that the communication cost of cs - iblt is still limited by the theory of cs .that is , it is still dependent on .however , if , we can think that iblt with prior knowledge about is utilized , resulting in only communication .hence , in such cases , cs - iblt is less efficient than iblt in terms of communication cost .we present a novel algorithm , cs - iblt , to address the reconciliation problem . according to our theoretical analysis and numerical experiments ,cs - iblt is superior to the previous methods in terms of communication cost in most cases under the assumption that no prior information is available .e. j. cands , j. k. romberg , and t. tao .robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information . _ ieee transactions on infomation theory _ , 52(2):489 - 509 , 2006 .
|
we consider a reconciliation problem, where two hosts wish to synchronize their respective sets. efficient solutions for minimizing the communication cost between the two hosts have been previously proposed in the literature. however, they rely on prior knowledge about the size of the set differences between the two sets to be reconciled. in this paper, we propose a method which can achieve comparable efficiency without assuming this prior knowledge. our method uses compressive sensing techniques which can leverage the expected sparsity in set differences. we study the performance of the method via theoretical analysis and numerical simulations.
|
the purpose of this paper is to investigate the calibration performance of interest rate models based on the wiener chaos expansion .chaotic models were introduced in as an axiomatic framework for interest rates satisfying both no arbitrage and positivity conditions , following the line of research initiated by , where zero coupon bond prices are modeled as for a parametrized family of positive martingales .it was then shown in and that one can focus instead on modeling a supermartingale which is related to the martingales through the chaotic approach is derived from the observation that can itself be written as the conditional variance of a terminal random variable .this square integrable random variable then has a unique wiener chaos decomposition , and studying the way its expansion coefficients affect the corresponding interest rates model becomes the subject of the theory .notice the successive simplifications in the class of objects that one needs to model : from an entire family of martingales in the flesaker hughston framework , to the process in the potential approach , and finally the single random variable in chaotic models .arguably , each step of the way reduces the arbitrariness in the modeling exercise , with presumed advantages for calibration to real data . in particular, the chaos expansion allows one to successively introduce randomness in the model in a way that can , in principle , be made to correspond to the increased complexity of financial instruments under consideration .such are the types of statements that we propose to put to test in this paper .models in the flesaker hughston framework have been implemented and calibrated , for example , in and , whereas implementations of the potential approach can be found in and . to our knowledge , we present here the first practical implementation and calibration to market data of chaotic interest rate models .we adopt the day - by - day calibration methodology used in , , and .the motivation for this is to capture the prices of liquid interest rate derivatives such as caps and swaptions by a model as parsimonious as possible , which can then be used for pricing and hedging of exotic options , such as the chooser flexible cap and bermudan swaption . after reviewing the chaotic approach in section [ approach ], we implement to two separate calibration exercises . in section [ term_calibration ]we consider the calibration of chaotic models to the observed term structure of interest rates . for comparison, we use the nelson siegel and svensson models for forward rates as benchmarks .these so called descriptive models are examples of a general exponential polynomial class of models analyzed in , which we use as motivation for the parametric form we adopt for the chaos coefficient functions .we find in this section that chaotic models perform comparably to descriptive models with the same number of parameters , with the advantage of avoiding problems with positivity and consistency .we then move to a full calibration to yields and option prices in section [ option ] .we recall known expressions for option prices in a second chaos model , derive the corresponding formulas in a third chaos model , and calibrate them to two separate data sets in three different ways . 
for comparison, we also calibrate the hull and white model , the rational lognormal proposed in and a lognormal libor market model .we find that chaos models generally perform much better than the hull and white and rational lognormal models and have fitting errors comparable to those obtained with a libor model .when we apply an information criterion that takes into account the different number of parameters in the models , we find that one of our third chaos models consistently outperforms the libor market model .section [ conclusion ] concludes the paper by summarizing the results and pointing to future research in the area .we review the framework proposed in , whereby a general positive interest rate model with no arbitrage opportunities is associated with a square integrable terminal random variable , which can then be modeled using a wiener chaos expansion .let be a probability space equipped with the standard augmented filtration generated by a brownian motion and suppose that is a random variable with the property that is not for any finite value of and =0 ] as .it is then well known ( see , for example , ) that can be used as a state price density for an arbitrage free interest rate model in which the price of a zero coupon bond with maturity is given by }{v_t}=\frac{z_{tt}}{z_{tt } } , \qquad 0\leq t \leq t < \infty,\ ] ] where , \qquad 0\leq t \leq t < \infty.\ ] ] it follows from that for each fixed the processes , and consequently the bond prices , are decreasing functions of the maturity , which in turn implies that all instantaneous forward rates are automatically positive in this framework .furthermore , the short rate and the market price of risk vector are determined by the dynamics of as follows : for later use , we introduced the family of positive martingales , \qquad 0\leq t \leq s < \infty,\ ] ] so that can be written as in and bond prices as in .returning to the squared integrable random variable , the wiener chaos decomposition ( see and ) says that it can be written as the sum for deterministic square integrable functions called the chaos coefficients of .we say that an interest rate model is an -th chaos model if the decomposition for the random variable can be completely determined by its first coefficient functions .the basic insight of the chaotic approach is that the decomposition above provides a way to add complexity to an interest rate model in a controlled manner .for example , as we will see in the next section , the first order chaos correspond to deterministic interest rate models , whereas the second order chaos give rise to stochastic interest rate models with randomness governed by a parametric family of gaussian processes .more importantly for calibration purposes , the increased complexity in the interest models should be related to the instruments that are available in the market . in what follows ,we propose a systematic way to calibrate chaotic interest models starting with bond prices alone and then gradually increasing the complexity of market instruments included in the calibration .our general strategy will consist of choosing general parametric forms for the deterministic functions , and then fit the parameters to market data according to the bond and option pricing formulas emerging from the models .we now describe how the prices of the most common interest rate options can be written in terms of the quantities defined in section [ definitions ] . 
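several of the displayed equations in the framework review above were damaged in extraction. before the option prices are developed, the key relations are restated here in hedged form, following the standard notation of the chaotic-approach literature (the symbols for the terminal random variable, the state price density, the conditional-variance kernel and the chaos coefficients are assumed names, not taken verbatim from the damaged formulas):

\[
X_t=\mathbb{E}\left[X_\infty\mid\mathcal{F}_t\right],\qquad
V_t=\mathbb{E}\!\left[(X_\infty-X_t)^2\mid\mathcal{F}_t\right],
\]
\[
Z_{tT}=\mathbb{E}\!\left[(X_\infty-X_T)^2\mid\mathcal{F}_t\right],\qquad
P_{tT}=\frac{\mathbb{E}\left[V_T\mid\mathcal{F}_t\right]}{V_t}=\frac{Z_{tT}}{Z_{tt}},
\qquad 0\le t\le T<\infty,
\]
\[
X_\infty=\sum_{n\ge 1}\int_{0}^{\infty}\!\int_{0}^{s_1}\!\cdots\!\int_{0}^{s_{n-1}}
\phi_n(s_1,\dots,s_n)\,\mathrm{d}W_{s_n}\cdots\,\mathrm{d}W_{s_1}.
\]

the first identity is the conditional-variance representation referred to in the introduction, and the last is the wiener chaos expansion whose coefficients \(\phi_n\) are the objects being calibrated in the remainder of the paper.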
since is a state price density , it follows that the price at time of a derivative with payoff is given by }{v_t}.\ ] ] we see that expression for bond prices is simply a special case of with .we apply this general expression to european put options , caplets and swaptions , since these will be the only ones needed in our calibration section , although similarly results also hold for other interest rate derivatives , notably call options and floors ( see ) .we follow the standard definitions and notations in .a put option with maturity and strike price written on a bond with maturity correspond to the following payoff using , we see that the price of the a bond option at time is given by }{v_s}=\frac{e[(kz_{tt}-z_{tt})^+|{\cal f}_{s}]}{v_s}.\ ] ] a caplet is a call option with strike and maturity written on a spot libor rate for the time interval ] , a direct calculation shows that where the coefficients are using again the fact that , it follows that the state price density is therefore bond prices in a one variable third chaos model are given by the following ratio of forth degree polynomials in : in particular , the initial term structure is given by inspired by the parametric forms used in descriptive models , we propose to model the function itself in exponential polynomial form .observe that setting where and to guaranteed integrability , leads to a first chaos model in which forward rates are ratios of functions in exponential polynomial form .we regard this _ rational exponential polynomial _ family as a natural extension of the exponential polynomial class considered in , with added flexibility and potentially better calibration performance . in particular , this class allows us to reproduce all empirical shapes commonly observed for forward rate curves , such as increasing , decreasing and hump - shaped functions .having made this choice for the function , equation and its generalization naturally lead us to consider functions and all belonging to the exponential polynomial class as well .we describe the details of the term structure calibration in appendix [ term_procedure ] .we calibrate 14 different chaos models using two distinct data set from the uk bond market : observations of bonds of different maturities at every other business day from january 1998 to january 1999 ( a volatile market often exhibiting an inverted yield curve ) and weekly observations from december 2002 to december 2005 ( a more moderate market ) . 
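before reporting the calibration results, a small numerical sketch may make the parametrization concrete. in the one-variable first chaos model the conditional-variance representation gives the initial discount curve as \(P_{0T}=\int_T^\infty\phi(s)^2\,ds\,/\int_0^\infty\phi(s)^2\,ds\), and the python lines below evaluate this for an exponential-polynomial coefficient; the particular functional form and parameter values are assumptions for illustration, and the improper integrals are truncated numerically rather than computed in closed form as in the paper.

import numpy as np
from scipy.integrate import quad

# assumed chaos coefficient in exponential-polynomial form (illustrative values)
alpha1, alpha2, beta = 0.05, 0.01, 0.15
phi = lambda s: (alpha1 + alpha2 * s) * np.exp(-beta * s)

def bond_price(T, horizon=200.0):
    """first-chaos discount bond: int_T^inf phi^2 over int_0^inf phi^2 (truncated)."""
    num, _ = quad(lambda s: phi(s) ** 2, T, horizon)
    den, _ = quad(lambda s: phi(s) ** 2, 0.0, horizon)
    return num / den

maturities = np.array([1, 2, 5, 10, 20], dtype=float)
prices = np.array([bond_price(T) for T in maturities])
yields = -np.log(prices) / maturities
for T, p, y in zip(maturities, prices, yields):
    print(f"T={T:4.0f}y  P={p:.4f}  yield={100*y:.2f}%")

because the numerator is decreasing in the maturity, the resulting forward rates are automatically positive, which is the feature of the chaotic approach emphasized above.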
for comparison , we also calibrate two of the descriptive models specified in .we summarize our calibration results for the two data sets in tables [ table:1 ] and [ table:2 ] .the first column in each table labels the models , starting with the descriptive nelson siegel and svensson models defined by the first two expression in , followed by the models in appendix [ term_procedure ] .the second column characterizes the type of model , whereas the third one gives the number of calibrated parameters .the remaining columns show the average values for the negative log likelihood function , the root - mean - squared percentage error and the diebold - mariano statistics with respect to the svensson model , as described in appendix [ term_procedure ] ..term structure calibration for 1998 - 1999 ( volatile market ) [ cols="^,<,^,>,^,>",options="header " , ] [ aic2 ]we proposed and implemented a systematic way to calibrate chaotic models for interest rates to available term structure and option data .the calibration performance to initial term structures is comparable to that of traditional descriptive models for forward rates with the same number of parameters , with the advantages of guaranteed positivity and consistency with a fully stochastic model for the time evolution of interest rates .when we include option data in the form of at - the - money caplets and swaptions , we see that chaos models perform significantly better than the benchmark hull and white and rational lognormal models , and comparably to a libor market model with a higher number of parameters .if we take the number of parameters into account , a conservative information criterion shows that one of our chaos models consistently outperforms the benchmark libor market model .the underlying reason for the superior performance of libor market models when compared , for example , to markovian short rate models is the rich correlation structure they provide for libor forward rates at different dates .similarly , in chaos models the resulting short rate is in general not markovian , and our calibration results show that an equally rich correlation structure can be achieved without having to model forward rates individually under each corresponding forward measure . the next step in our program is to consider options that are not at - the - money .the current industry standard for this is to use stochastic volatility models for either forward or swap rates . in a paper in preparationwe show that chaos models naturally give rise to rates with stochastic volatility , and explore this fact to calibrate them to the smile observe in interest rate data .expectedly , much work remains to be done , in particular on the interpretation of the financial interpretation of the chaos coefficients , where techniques such as principal components analysis might shed some additional light on the meaning of the most relevant calibrated parameters .extensions to other classes of financial products , notably exchange rate derivatives , are also possible .we believe the analysis presented in this paper demonstrates the viability of chaos models as serious contenders for practical use in the financial industry and will stimulate further work in the area .we are grateful for helpful comments by m. avellaneda , d. brody , d. crisan , j - p .fouque , l. hughston , t. hurd , j. teichmann , m. 
yor and the participants at the crfms seminar , santa barbara , may 2009 , the mathematical finance and related topics in engineering and economics workshop , kyoto , august 2009 , the research in options conference , buzios , november 2009 , the sixth world congress of the bachelier finance society , toronto , june 2010 , and the imperial college finance seminar , october 2010 , where this work was presented . 10 , _ interest rate dynamics and consistent forward rate curves _ , mathematical finance , 9 ( 1999 ) , pp .323348 . , _ interest rate models - theory and practice _ , springer , 2nd ed . , 2006 ., _ model selection and multi model inference : a practical information - theoretic approach _ , springer , 2nd ed . , 2002 ., _ descriptive bond - yield and forward - rate models for the british government securities market _ , british actuarial journal , 4 ( 1998 ) , pp .265321 . , _ a family of term - structure models for long - term risk management and derivative pricing _ , mathematical finance , 14 ( 2004 ) , pp .415444 . ,_ interest rate models : an introduction _ , princeton university press , 2004 . , _ stability of descriptive models for the term structure of interest rates with application to german market data _ , british actuarial journal , 7 ( 2001 ) , pp. 467507 . , _ comparing predictive accuracy _ , journal of business and economic statistics , 13 ( 1995 ) , pp .253263 . , _ interest rate , term structure and valuation modeling _ , john wiley and sons , inc . , 2002 . ,_ exponential - polynomial families and the term structure of interest rates _ , bernoulli , 6 ( 2000 ) , pp .. 10811107 . , _ positive interest _ , risk , 9 ( 1996 ) , pp . 4649 . , _ the libor market model in practice _ , john wiley and sons , inc . ,. , _ changes of numraire , changes of probability measure and option pricing _ , journal of applied probability , 32 ( 1995 ) , pp .. 443458 . ,_ volatility of the short rate in the rational lognormal model _ , finance and stochastics , 2 ( 1998 ) , pp .199211 . ,_ bond pricing and the term structure of interest rates : a new methodology for contingent claims valuation _ , econometrica , 60 ( 1992 ) , pp .. 77105 . , _ a chaotic approach to interest rate modelling _ , finance and stochastics , 9 ( 2005 ) , pp .4365 . ,_ pricing interest - rate - derivative securities _ ,review of financial studies , 3 ( 1990 ) , pp .57392 . ,_ interest rate caps `` smile '' too ! but can the libor market models capture the smile ? _ , the journal of finance , 62 ( 2007 ) , pp . 345382 . , _ the potential approach in practice_. preprint , cambridge university , 2008 . , _ interest rate , currency and equity derivatives valuation using the potential approach _ , international review of finance , 1 ( 2000 ) , pp . 269294 ., _ parsimonious modeling of yield curves _ , journal of business , 60 ( 1987 ) , pp .47389 . , _ the malliavin calculus and related topics _ , probability and its applications , springer - verlag , new york , 1995 . , _implementation and performance of various stochastic models for interest rate derivatives _, applied stochastic models in business and industry , 17 ( 2001 ) , pp .109120 ., _ the potential approach to the term structure of interest rates and foreign exchange rates _finance , 7 ( 1997 ) , pp .157176 . , _ fitting potential models to interest rates and foreign exchange rates ._ , in vasicek and beyond , l. p. hughston , ed . , risk publications , 1997 , pp . 327342 . 
, _ a note on the flesaker - hughston model of the term structure of interest rates _ , applied mathematical finance , 4 ( 1997 ) , pp151163 . , _ systematic generation of parametric correlation structures for the libor market model ._ , international journal of theoretical & applied finance , 6 ( 2003 ) , p. 507 ., _ estimating forward interest rates with the extended nelson and siegel method _ , sveriges riksbank quarterly review , 3 ( 1995 ) , pp .1326 . , _ a general stochastic volatility model for the pricing of interest rate derivatives _ , review of financial studies , 22 ( 2009 ) , pp .20072057 . , _ calibration of the chaotic interest rate model _ , phd thesis , university of st andrews , 2010 . , _ the homogeneous chaos _ , american journal of mathematics , 60 ( 1938 ) , pp . 897936 .we begin by listing our choices of chaos coefficients of different orders according to the general parametric form in .we start with first chaos models directly inspired by the nelson siegel and svensson parametric forms in , namely : we now describe in detail the steps we take to calibrate these chaos models to observed yield curves , which we obtain from clean prices of treasury coupon strips in the uk bond market from the united kingdom debt management office ( dmo ) according to using an actual / actual day - count convention .we consider the following two data sets : 1 .bond prices at dates ( every other business day ) from january to january , with around 49 to 62 maturities for each date .bond prices at dates ( every friday ) from december to december , with around 100 to 130 maturities for each date .note that the first dataset contains a volatile market including the period of the long - term capital management ( ltcm ) crisis , whereas the second dataset corresponds to more moderate market conditions .we then apply the maximum likelihood estimation ( mle ) method suggested by cairns in to each of the models and each starting date in the data sets above .this is done as follows : given a model with parameter vector and a starting date with available maturities , , we denote the theoretical prices by and the corresponding observed bond prices by .we then assume that where is the macaulay duration for the bond and }{\sigma^2_0(p)d^2b(p)+1 } , \quad b(p)=\frac{\sigma_d^2}{\sigma_0 ^ 2(p)[\sigma_{\infty}-\sigma_0 ^ 2(p)]},\ ] ] with the error parameters adopted in for the uk bond market data between january and november . as explained in , is based on the assumption that the published bond prices have rounding error of around per nominal price , whereas corresponds to the assumption that the difference between actual and expected yields have independent errors of the order of five basis points . finally , places a limit on the magnitude of price errors for long dated bonds .this leads to the log - likelihood function +\frac{(\log p_{0t_i}(\theta)-\log\overline{p}_{0t_i})^2}{\nu^2(p_{0t_i}(\theta),d_i)}\big].\ ] ] we then use a global search procedure to find the global maximum for the log likelihood function , that is , to avoid finding a local maximum , we repeat the procedure using 1000 different random starting points and select the best maximization result . having estimated the parameter vector , we denote by the fitted yield for maturity and by the corresponding observed yield .we then define the fitting root - mean - squared percentage error ( rmspe ) as ^ 2}.\ ] ] we then apply the diebold - mariano ( dm ) statistics based on rmspe to compare fitting performances as is done in and . 
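as a small illustration of this comparison, the python sketch below computes the fitting rmspe of two competing fitted curves and a plain diebold-mariano statistic on their squared percentage errors. the synthetic yields and the naive variance estimator (lag zero, rather than the long-run variance with the lag order used by the stata routine mentioned next) are assumptions made only to keep the example short.

import numpy as np

rng = np.random.default_rng(2)

# synthetic observed yields and two competing fitted curves (illustrative)
obs = 0.04 + 0.01 * rng.random(50)
fit_a = obs * (1 + 0.002 * rng.standard_normal(50))
fit_b = obs * (1 + 0.004 * rng.standard_normal(50))

def rmspe(fit, obs):
    """root-mean-squared percentage error, in percent."""
    return 100.0 * np.sqrt(np.mean(((fit - obs) / obs) ** 2))

# loss differential of squared percentage errors, model a minus model b
d = ((fit_a - obs) / obs) ** 2 - ((fit_b - obs) / obs) ** 2
dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))   # naive dm statistic, lag 0

print("rmspe a:", rmspe(fit_a, obs), " rmspe b:", rmspe(fit_b, obs))
print("naive dm statistic:", dm)   # |dm| > 1.96 rejects equal accuracy at the 5% level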
here for the computation we use the program dmariano in the statistics package stata , with a lag order of thirteen in both of our two data sets .the null hypothesis , which is that two models have the same fitting errors , may be rejected at level if the absolute value of the dm statistics is greater than .we compare the calibration performance of the chaos models with a descriptive model for forward rates in svensson form : the higher the dm statistics , the more a chaotic model outperforms the svensson model .for option calibration , we consider one variable second chaos models with as well as each of the three benchmark models described in section [ benchmark ] . regarding the data , zero - coupon yields are bootstrapped from the libor , future and swap rates ( see for the detail of the bootstrapping technique ) and interest rate option prices from icap ( garban intercapital - london ) and ttkl ( tullett & tokyo liberty - london ) via the bloomberg database .we consider the following two data sets from the uk interest rate market : * data between september and august at dates ( every friday closing mid price ) consisting of * * zero - coupon yields with 17 maturities ranging from one month to 20 years , * * implied volatilities for atm caplets with 37 maturities ranging from one to 10 years , * * implied volatilities for atm swaptions with 7 maturities ranging from one month to 5 years and 6 tenors ranging from one to 10 years .* data between may and may at dates ( every friday closing mid price ) consisting of * * zero - coupon yields with 22 maturities ranging from one month to 20 years . * * implied volatilities for atm caplets with 77 maturities ranging from one to 20 years , * * implied volatilities for atm swaptions with 7 maturities ranging from one month to 5 years and 6 tenors ranging from one to 10 years .note here that the option data corresponds to a part of the data in , where data was analyzed between august and january .we obtain caplet implied volatilities by bootstrapping atm cap implied volatilities observed in the market using the technique described in , where the atm caplets implied volatilities maturing at six months and nine months are obtained by constant extrapolation .the extrapolation is necessary to bootstrap the other atm caplet implied volatilities , but when we calibrate the data , the extrapolated prices give us great errors .hence , although we follow and implement the extrapolation , we do not use those two short maturities for the calibration . moreover , we observed some obvious outliers and corrected them accordingly .for each of the models and data sets above , we perform three distinct calibrations : first to yields and caplets , then to yields and swaptions , and finally to yields , caplets and swaptions . to define the objective function for each to these calibrations , denote the observed yields and prices of atm caplets and atm swaptions by , and , their theoretical counterparties by , and , and the corresponding mean square percentage errors by ^ 2},\ ] ] ^ 2},\ ] ] and ^ 2}.\ ] ] for the calibration to yields and caplets , we then minimize the objective function similarly , for the calibration to yields and swaptions , we minimize the objective function finally , for the calibration to yields , caplets and swaptions we minimize the objective function for each of these calibrations , we test the pricing performance of the calibrated models . 
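the displayed objective functions above did not survive extraction. the sketch below records one natural reading, an equally weighted sum of the mean square percentage errors defined earlier, and should be taken as an assumption rather than the authors' exact weighting; the dictionary of pricing functions and the dummy market quotes are placeholders introduced only for the example.

import numpy as np

def msqpe(model, market):
    """mean square percentage error between model and market quantities."""
    model, market = np.asarray(model, float), np.asarray(market, float)
    return np.mean(((model - market) / market) ** 2)

def objective(theta, price_fns, market):
    """assumed form: equally weighted sum of squared percentage errors; drop a term
    for the two-instrument calibrations (yields+caplets or yields+swaptions)."""
    err_y = msqpe(price_fns["yields"](theta), market["yields"])
    err_c = msqpe(price_fns["caplets"](theta), market["caplets"])
    err_s = msqpe(price_fns["swaptions"](theta), market["swaptions"])
    return err_y + err_c + err_s

# tiny smoke test with dummy pricing functions (placeholders, not a real model)
dummy = {"yields": lambda th: [0.040, 0.050],
         "caplets": lambda th: [1.00],
         "swaptions": lambda th: [2.00]}
mkt = {"yields": [0.041, 0.049], "caplets": [1.02], "swaptions": [1.95]}
print(objective(None, dummy, mkt))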
for example , after calibrating to yields and atm swaptions , we use the model to price the atm caplets and compute the pricing error from market atm caplet prices .whenever possible , for example in the hull - white model , we calibrate initial yield curves and options separately by minimizing their respective square errors , since these involve distinct sets of parameters . moreover ,for the lfm model , we minimize the square error in implied volatilities rather than actual prices . to access the relative performance between the models , we use the akaike information criterion ( aic ) and the model selection relative frequency described in .the aic is formed with the maximized value of the likelihood function for an estimated model and the number of parameters in the following way : for the least squares method under the assumed normality of residuals , this reduces to . where rss is the fitted residual sum of squares .since is a constant , we ignore this term and conclude that to define the model selection relative frequency , let us suppose we have two models and calibration sets ( for example the different calibration dates in our data ) . for a data set we compute aic denoted for one model and for the other .suppose the aic of the first model is smaller than the aic of the other model times .then the model selection relative frequencies ( msrf ) for the first model and the second model are computed respectively by
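the displayed formulas for the least-squares aic and for the model selection relative frequencies were lost in extraction. the sketch below gives one standard reading, the aic written up to an additive constant as n log(rss/n) + 2k and the msrf as the fraction of calibration dates on which each model attains the smaller aic; both should be read as hedged reconstructions, and the residual sums of squares, sample sizes and parameter counts used here are purely hypothetical.

import numpy as np

def aic_ls(rss, n, k):
    """least-squares aic up to an additive constant (a standard reading)."""
    return n * np.log(rss / n) + 2 * k

def msrf(aic_model1, aic_model2):
    """fraction of calibration dates on which each model has the lower aic."""
    a1, a2 = np.asarray(aic_model1), np.asarray(aic_model2)
    f1 = float(np.mean(a1 < a2))
    return f1, 1.0 - f1

# illustrative comparison over 10 hypothetical calibration dates
rng = np.random.default_rng(3)
rss_small = rng.uniform(0.8, 1.2, 10)                 # parsimonious model, k = 6
rss_big = rss_small * rng.uniform(0.85, 1.0, 10)      # larger model fits slightly better, k = 19
a_small = aic_ls(rss_small, n=120, k=6)
a_big = aic_ls(rss_big, n=120, k=19)
print("msrf (small model, large model):", msrf(a_small, a_big))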
|
in this paper we calibrate chaotic models for interest rates to market data using a polynomial exponential parametrization for the chaos coefficients . we identify a subclass of one variable models that allow us to introduce complexity from higher order chaos in a controlled way while retaining considerable analytic tractability . in particular we derive explicit expressions for bond and option prices in a one variable third chaos model in terms of elementary combinations of normal density and cumulative distribution functions . we then compare the calibration performance of chaos models with that of well known benchmark models . for term structure calibration we find that chaos models are comparable to the svensson model , with the advantage of guaranteed positivity and consistency with a dynamic stochastic evolution of interest rates . for calibration to option data , chaos models outperform the hull and white and rational lognormal models and are comparable to libor market models . positive interest rate models , wiener chaos , model calibration e43 91g30 , 91g20 , 91g70
|
pattern recognition ( or classification or discrimination ) is about predicting the unknown nature of an observation : an observation is a collection of numerical measurements , represented by a vector belonging to some measurable space .the unknown nature of the observation is denoted by belonging to a measurable space . in pattern recognition ,the goal is to create a measurable map ; which represents one s prediction of given .the error of a prediction when the true value is is measured by , where the loss function . for simplicity , we suppose . in a probabilisticsetting , the distribution of the random variable describes the probability of encountering a particular pair in practice .the performance of , that is how the predictor can predict future data , is measured by the risk . in practice ,we have access to independent , identically distributed ( ) random pairs sharing the same distribution as called the learning sample and denoted . a learning algorithm is trained on the basis of .thus , is a measurable map from to . is predicted by the performance of is measured by the conditional risk called the generalization error denoted by ] is equal to , where .in the sequel , we suppose that the training sample and the test sample are disjoint and that the number of observations in the training sample and in the test sample are respectively and .moreover , we suppose also that the is an empirical risk minimizer on a sample with finite vc - dimension and a loss function bounded by . we also suppose that the predictors are symmetric according to the training sample , i.e. the predictor does not depend on the order of the observations in . eventually , the cross - validation are symmetric i.e. does not depend on , this excludes the hold - out cross - validation .* we denote these hypotheses by .* we will show upper bounds of the kind with .the term is a vapnik - chernovenkis - type bound whereas the term is a hoeffding - like term controlled by the size of the test sample .this bound gives can be interpreted as a quantitative answer to the bias - variance trade - off question . as the percentage of observations in the test sample increases ,the term decreases but the term increases .notice that this bound is worse than the vapnik - chernovenkis - type bound and thus can be called a sanity - check bound in the spirit of .even though these bounds are valid for almost all the cross - validation procedures , their relevance depends highly on the percentage of elements in the test sample ; this is why we first classify them according to . at last ,notice that our bounds can be refined using chaining arguments .however , this is not the purpose of this paper . the first result deals with large test samples , i.e. the bounds are all the better if is large .note that this result excludes the hold - out cross - validation because it does not make a symmetric use of the data .[ largets1]suppose that holds .then , we have for all , with * * first , we begin with a useful lemma ( for the proof , see appendices ) [ ch1:lemme1 ] under the assumption of proposition [ largets1 ] , we have for all symmetrically * proof of proposition [ largets1 ] .* recall that is based on empirical risk minimization .moreover , for simplicity , we have supposed the infimum is attained _i.e. _ .define .we have by splitting according to : notice that .intuitively , corresponds to the variance term and is controlled in some way by the resampling plan . 
on the contrary , in the general setting , , and the bias term and measures the discrepancy between the error rate of size and of size the first term can be bounded via hoeffding s inequality , as follows then , by jensen s inequality , we have , for fixed vectors , we have by linearity of expectation and the i.i.d assumption finally , by lemma 1 in since and the conditional independence : the second term may be treated by introducing the optimal error which should be close to , using the supremum and the fact that is an empirical risk minimizer , we obtain : then , since and by definition of , we deduce thus , by lemma [ ch1:lemme1 ] , we get recall the following result ( see e.g. ) , we finally obtain next , we obtain [ largets]suppose that holds. then , we have , for all , * proof * first , the following lemma holds ( for the proof , see appendices ) , [ lemme1 ] suppose that holds , then we have thus , using the two previous results , we have a concentration inequality for the absolute error , suppose that holds .then , we have , for all , with * * with the previous concentration inequality , we can bound from above the expectation of : [ cor : l1-largets]suppose that holds . then , we have , * proof . *this is a direct consequence of the following lemma : [ ch1:lem esperance ] let be a nonnegative random variable .let nonnegative real such that .suppose that for all .then : the previous bound is not relevant for all small test samples ( typically leave - one - out cross - validation ) since we are not assured that the variance term converges to ( in leave - one - out cross - validation , ) .however , under , cross - validation with small test samples works also , as stated in the next proposition .[ smallts]suppose that holds .then , we have , for all , with * * for small test samples , we get the same conclusion but the rate of convergence for the term is slower than for large test samples : typically against * proof .* now , we get by splitting according to : first , from the proof of proposition [ largets ] , we have secondly , notice that . to control , we will need the following lemma ( for the proof see appendices ) which says that if a bounded random variable is centered and is nonpositive with small probability then it is nonnegative with also small probability .[ ch1:monlemme ] if and .then for all we get moreover , we have since by lemma [ lemme1 ] using lemma [ ch1:lemme1 ] , it follows : applying lemmas [ ch1:monlemme ] and inequality [ eq : shatter ] allows to conclude . we have the following complementary but not symmetrical result : [ smallts]suppose that holds .then , we have for all , * proof .* we have since : from this result , we deduce that , suppose that holds .then , we have for all , * * eventually , we get [ largets]suppose that holds . then , we have : we just need lemma [ ch1:lem esperance ] and the following simple lemma let a nonnegative random variable bounded by , a real such that , for all . then , eventually , collecting the previous results , we can summarize the previous results for upper bounds in probability with the following theorem : [ thm : sym ] suppose that holds .then , we have for all , with * * an interesting consequence of this proposition is that the size of the test is not required to grow to infinity for the consistency of the cross - validation procedure in terms of convergence in probability . 
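to make the trade-off in the split explicit, the short sketch below evaluates a generic vc-type estimation term on the training part and a hoeffding-type term on the test part for several test fractions. the constant, the vc dimension, the confidence level and the exact logarithmic factors are placeholders, since the precise constants of the theorems above are not reproduced here; the point is only the opposite monotonicity of the two terms.

import numpy as np

n, V, delta, C = 10_000, 10, 0.05, 1.0   # sample size, vc dimension, level, constant: all assumed

def vc_term(n_train):
    # generic vc-type term for empirical risk minimization on the training part
    return C * np.sqrt(V * np.log(n_train + 1) / n_train)

def hoeffding_term(n_test):
    # hoeffding-type term controlled by the size of the test part
    return np.sqrt(np.log(2 / delta) / (2 * n_test))

for frac in (0.05, 0.10, 0.25, 0.50, 0.75, 0.90):
    n_test = int(frac * n)
    n_train = n - n_test
    print(f"test fraction {frac:4.2f}:  vc term {vc_term(n_train):.4f}"
          f"  hoeffding term {hoeffding_term(n_test):.4f}")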
for -fold cross - validation, we can simply use the previous bounds together .thus , we get [ k - fold ] suppose that holds .then , we have for all , with * * since , notice the previous bound can itself be bounded by in fact , the bound for the variance term can be improved by averaging the training errors .this step emphasizes the interest of -fold cross - validation against simpler cross - validation .[ k - fold ] suppose that holds .then , in the case of the -fold cross - validation procedure , we have for all : thus , averaging the observed errors to form the -fold estimate improves the term from .this result is important since it shows why intensive use of the data can be very fruitful to improve the estimation rate .another interesting consequence of this proposition is that , for a fixed precision , the size of the test is not required to grow to infinity for the exponential convergence of the cross - validation procedure . for this , it is sufficient that the size of the test sample is larger than a fixed number . * proof . * recall that the size of the training sample is , and the size of the test sample is then . for this proposition , we have we are interested in the behaviour of ) which is a sum of terms in the case of the -fold cross - validation .the difficulty is that these terms are neither independent , nor even exchangeable .we have in mind to apply the results about the sum of independent random variables .for this , we need a way to introduce independence in our samples . in the same time, we do not want to lose too much information .for this , we will introduce independence by using by using the supremum .we have , now , we have a sum of i.i.d terms : , with . however , we have an extra piece of information : an upper bound for the tail probability of these variables , using the concentration inequality due to . with ) and . in fact , summing independent bounded variables with exponentially small tail probability gives us a better concentration inequality than the simple sum of independent bounded variables . to show this, we proceed in three steps : 1 .the -hlder norms of each variable is uniformly bounded by , 2 .the laplace transform of is smaller than the laplace transform of some particular normal variable , 3 . using chernoff s method, we obtain a sharp concentration inequality . 1 . first step ( for the proof , see appendices ) , we prove + [ lem : subgaussian ] + let a random variable ( bounded by with subgaussian tail probability for all with and . then , there exists a constant such that , for every integer , .second step ( see exercise 4 in ) , we have + [ lem : holder ] + if there exists a constant , such that for every integer we have 3 . third step , we have the result using chernoff s method .+ [ lem : chernoff ] + if , for some , , we have : if are i.i.d ., we have: putting lemma [ lem : subgaussian ] [ lem : holder ] lem : chernoff together , we eventually get : symmetrically , we obtain : suppose that holds .then , in the case of the -fold cross - validation procedure , we have for all eventually , we have a control on the absolute deviation [ ch1:thm kfold ] suppose that holds . then , in the case of the -fold cross - validation procedure , we have for all , with * * for hold - out cross - validation , the symmetric condition that for all , is independent of is no longer valid . indeed , in the hold - out cross - validation ( or split sample ), there is no crossing again . 
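The gain from averaging over the folds can also be seen in a small simulation. The sketch below (an illustrative learner and distribution, not the paper's setting) compares, over repeated draws of the data, the spread of a single fold's error estimate with the spread of the K-fold average.

```python
# Small simulation: spread of a single-fold (hold-out style) estimate vs the K-fold
# average, same total sample size. Learner = nearest-class-mean rule; constants are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=2.0 * y - 1.0, scale=2.0, size=n)   # overlapping classes
    return x, y

def fit_predict(x_tr, y_tr, x_te):
    m0, m1 = x_tr[y_tr == 0].mean(), x_tr[y_tr == 1].mean()
    return (np.abs(x_te - m1) < np.abs(x_te - m0)).astype(int)

def fold_errors(x, y, K):
    folds = np.array_split(rng.permutation(len(x)), K)
    errs = []
    for k in range(K):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(K) if j != k])
        errs.append(np.mean(fit_predict(x[tr], y[tr], x[te]) != y[te]))
    return errs

n, K, reps = 200, 10, 500
single, averaged = [], []
for _ in range(reps):
    x, y = sample(n)
    errs = fold_errors(x, y, K)
    single.append(errs[0])            # one fold only
    averaged.append(np.mean(errs))    # K-fold average
print("std of single-fold estimate:", round(float(np.std(single)), 4))
print("std of K-fold average      :", round(float(np.std(averaged)), 4))
```

The averaged estimate has a visibly smaller spread even though the K fold estimates are dependent, which is exactly the point of the improved variance term above.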
in the next proposition, we suppose that the training sample and the test sample are disjoint and that the number of observations in the learning sample and in the test sample are still respectively and .moreover , we suppose also that the predictors are empirical risk minimizers on a class with finite -dimension and a loss function bounded by . * we denote these hypotheses by . * we get the following result [ holdout]suppose that holds . then , we have for all , with * * * proof .* we just have to follow the same steps as in proposition [ thm : sym ] .but in the case of hold - out cross - validation , notice that moreover , the lemma [ ch1:monlemme ] is no longer valid , since . we base the next discussion on upperbounds , so the following heuristic arguments are questionable if the bounds are loose .one can wonder : what is the use of averaging again over the different folds of the -fold cross - validation , which is time consuming ? as far as the expected errors are concerned , the upper bounds are the same for crossing cross - validation procedures and for hold - out cross - validation . but suppose we are given a level of precision , and we want to find an interval of length with maximal confidence .then notice that .thus if is constant , : the term will be much greater for hold - out based on large learning size . on the contrary , if the learning size is small , then the term is smaller for non crossing procedure for a given .this might due to the absence of resampling .regarding the variance term , we need the size of the test sample to grow to infinity for the consistency of the hold - out cross - validation . on the contrary , for crossing cross - validation, the term converges to whatever the size of the test is . if we consider the error , the upper bounds are the same for crossing cross - validation procedures and for other cross - validation procedures .but if we look for the interval of length with maximal confidence , then notice that ( with defined respectively in theorems [ ch1:thm kfold ] , [ thm : sym ] ) if the number of elements in the training sample is constant and large enough .thus , if the learning size is large enough , the term is much smaller for the -fold cross - validation , thanks to the crossing .the expression of the variance term depends on the percentage of observations in the test sample and on the type of cross - validation procedure .we have thus a control of the variance term depending on we can define the estimation curve ( in probability or in norm ) which gives for each cross - validation procedure and for each the estimation error .let : and defined in theorem [ thm : sym ] .this can be done with the expectation of the absolute of deviation or with the probability upper bound if the level of precision is . with and defined as in proposition cor : l1-largets .we say that the estimation curve in probability experiences a phase transition when the convergence rate changes .the estimation curve experiences at least one transition phase .the transition phases just depend on the class of predictors and on the sample size . 
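To make the trade-off between the two terms concrete, the following schematic evaluates a VC-type training term plus a Hoeffding-type test term as a function of the test fraction p and locates the minimizer for a few VC dimensions. The functional forms and constants are generic stand-ins for the bounds above, not the paper's exact expressions.

```python
# Schematic "estimation curve": generic VC-type term driven by the training size
# n_tr = (1-p)*n plus a Hoeffding-type term driven by the test size n_te = p*n,
# minimised over the test fraction p. All constants are illustrative.
import numpy as np

def bound(p, n, d, delta=0.05):
    n_tr, n_te = (1.0 - p) * n, p * n
    vc_term = np.sqrt(d * (np.log(2.0 * n_tr / d) + 1.0) / n_tr)   # generic VC shape
    hoeffding_term = np.sqrt(np.log(2.0 / delta) / (2.0 * n_te))   # Hoeffding shape
    return vc_term + hoeffding_term

n = 2000
ps = np.linspace(0.01, 0.99, 99)
for d in (1, 5, 20):
    values = bound(ps, n, d)
    p_star = ps[int(np.argmin(values))]
    print(f"VC-dim {d:2d}: best test fraction ~ {p_star:.2f}, bound ~ {values.min():.3f}")
```

With these stand-in forms the minimizing test fraction decreases as the VC dimension grows, in line with the rule discussed below that a larger VC dimension calls for a larger training sample.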
on the contrary of the learning curve , the transition phases of the estimation curve are independent of the underlying distribution .the different transition phases define three different regions in the values of the percentage of observations in the test sample .this three regions emphasize the different roles played by small test sample cross - validation , large test samples cross - validation and -fold cross - validation .the estimation curve gives a hint for this simple but important question : how should one choose the cross - validation procedure in order to get the best estimation rate ?how should one choose in the -fold cross - validation ?the quantitative answer of theses questions is the of the estimation curve .that is in probability or in norm : as far as the norm is concerned , we can derive a simple expression for the choice of .indeed , if we use chaining arguments in the proof of proposition [ ch1:lemme1 ] , that is : there exists a universal constant such that ( for the proof , see e.g. ) .the proposition cor : l1-largets thus becomes : suppose that holds. then , there exists a universal constant such that : we can then minimize the last expression in .after derivation , we obtain .thus , the larger the vc - dimension is , the larger the training sample should be . since it may be difficult to find an explicit constant , one may try to solve : .we obtain then a computable rule another interesting issue is : knowing the number of observations and the class of predictors , we can now derive an optimal minimal -confidence interval , together with the cross - validation procedure .we look at the values such that the upperbound is below the threshold .then , we select the couple among those values for which is minimal . on figure [ fig : splitting ] , we fix a choice of .we observe that , for values of between and and for small vc - dimension , a choice of , i.e. the ten - fold cross - validation , seems to be a reasonable choice .van der laan et al.(2004 ) allen , d. m. ( 1968 ) the relationship between variable selection and data augmentation and a method for prediction ._ technometrics _ , 16 , 125 - 127 .arlot , s. ( 2007 ) .model selection by resampling penalization ._ submitted to colt_. bengio , y. and grandvalet , y. ( 2004 ) .no unbiased estimator of the variance of k - fold cross - validation ._ journal of machine learning research _ 5 , 1089 - 1105 .biswas , s. markatou , m. , tian , h. , and hripcsak , g. ( 2005 ) .analysis of variance of cross - validation estimators of the generalization error ._ journal of machine learning research _ , vol .6 , 1127 - 1168 .breiman , l. , friedman , j.h . , olshen , r. and stone , c.j .classification and regression trees .the wadsworth statistics probability series_. wadsworth international group .breiman , l. and spector , p. ( 1992 ) .submodel selection and evaluation in regression : the x - random case _ international statistical review _ , 60 , 291 - 319 .blum , a. , kalai , a. , and langford , j. ( 1999 ) . beating the hold - out : bounds for k - fold and progressive cross - validation ._ proceedings of the international conference on computational learning theory_. bousquet , o. and elisseef , a. ( 2001 ) .algorithmic stability and generalization performance _ in advances in neural information processing systems _ 13 : proc . nips2000 .bousquet , o. and elisseef , a. ( 2002 ) .stability and generalization ._ journal of machine learning research _ , 2:499 - 526 .burman , p. ( 1989 ) . 
a comparative study of ordinary cross - validation , v - fold cross - validation and the repeated learning - testing methods . _ biometrika _ , 76:503 514 .devroye , l. , gyorfi , l. and lugosi , g. ( 1996 ) . a probabilistic theory of pattern recognition .number 31 in _ applications of mathematics_. springer .devroye , l. and wagner , t. ( 1979 ) .distribution - free performance bounds for potential function rules ._ ieee trans .inform . theory _ , vol.25 , pp .601 - 604 .devroye , l. and wagner , t. ( 1979 ) .distribution - free inequalities for the deleted and holdout error estimates ._ ieee transactions on information theory _ , vol.25(5 ) , pp .601 - 604 .dudoit , s. and van der laan , m.j .asymptotics of cross - validated risk estimation in model selection and performance assessment ._ technical report _ 126 , division of biostatistics , university of california , berkeley .dudoit , s. , van der laan , m.j . , keles , s. , molinaro , a.m. , sinisi , s.e . and teng , s.l .loss - based estimation with cross - validation : applications to microarray data analysis ._ sigkdd explorations , microarray data mining special issue_. van der laan , m.j . , dudoit , s. and van der vaart , a. ( 2004),the cross - validated adaptive epsilon - net estimator , _ statistics and decisions _ , 24 373 - 395 .geisser , s. ( 1975 ) .the predictive sample reuse method with applications ._ journal of the american statistical association _ , 70:320328 .gyrfi , l .kohler , m. and krzyzak , m. and walk , h. ( 2002a ) . _ a distribution - free theory of nonparametric regression_. springer - verlag , new york .hastie , t. , tibshirani , r. and friedman , j.h .the elements of statistical learning : data mining , inference , and prediction .springer - verlag .hoeffding , w. ( 1963 ) .probability inequalities for sums of bounded random variables. _ journal of the american statistical association _ , 58 , 13?30 .holden , s.b .cross - validation and the pac learning model ._ research note _ rn/96/64 , dept . of cs , univ .college , london .holden , s.b .pac - like upper bounds for the sample complexity of leave - one - out cross validation . _ in proceedings of the ninth annual acm workshop on computational learning theory _ , pages 41 50 .kearns , m. and ron , d. ( 1999 ) .algorithmic stability and sanity - check bounds for leave - one - out cross - validation ._ neural computation _ , 11:1427 1453 .kearns , m. ( 1995 ) .a bound on the error of cross validation , with consequences for the training - test split . _ in advances in neural information processing systems 8_. the mit press .kearns , m. j. , mansour , y. , ng , a. and ron , d. ( 1995 ) .an experimental and theoretical comparison of model selection methods ._ in proceedings of the eighth annual acm workshop on computational learning theory _ , pages 21 30 . to appear in machine learning , colt95 special issuekutin , s. ( 2002 ) .extensions to mcdiarmid s inequality when differences are bounded with high probability . _ technical report _, department of computer science , the university of chicago . in preparation .kutin , s. and niyogi , p. ( 2002).almost - everywhere algorithmic stability and generalization error ._ uncertainty in artificial intelligence ( uai ) _ , august 2002 , edmonton , canada .lachenbruch , p.a . andmickey , m. ( 1968 ) .estimation of error rates in discriminant analysis. 
_ technometricslm68 _ estimation of error rates in discriminant analysis ._ technometrics _ , 10 , 1 - 11 .li , k - c .asymptotic optimality for cp , cl , cross - validation and generalized cross - validation : discrete index sample ._ annals of statistics _, 15:958975 .lugosi , g. ( 2003 ) .concentration - of - measure inequalities presented at _ the machine learning summer school 2003 _ , australian national university , canberra , mccarthy , p. j. ( 1976 ) . the use of balanced half - sample replication in crossvalidation studies ._ journal of the american statistical association _ , 71 : 596604 .mcdiarmid , c. ( 1989 ) . on the method of bounded differences ._ in surveys in combinatorics_ , 1989 ( norwich , 1989 ) , pages 148 188 .cambridge univ . press ,cambridge .mcdiarmid , c. ( 1998 ) . concentration . in probabilistic methods for algorithmic discrete mathematics , pages 195 248 .springer , berlin .picard , r.r . andcook , r.d .. (1984 ) .cross - validation of regression models ._ journal of the american statistical association _ , 79:575583 .ripley , b. d. ( 1996 ) .pattern recognition and neural networks . _ cambridge university press _ , cambridge , new york .shao , j. ( 1993 ) .linear model selection by cross - validation ._ journal of the american statistical association _ , 88:486494 .stone , m. ( 1974 ) .cross - validatory choice and assessment of statistical predictions ._ journal of the royal statistical society b _ , 36 , 111?147 .stone , m. ( 1977).asymptotics for and against cross - validation ._ biometrika _ , 64 , 29?35 .vapnik , v. and chervonenkis , a. ( 1971 ) . on the uniform convergence of relative frequencies of events to their probabilities . _theory of probability and its applications _, 16 , 264?280 .van der vaart , a. w. and wellner , j. ( 19936 . weak convergence and empirical _ processes_. springer - verlag , new york .vapnik , v. n. and chervonenkis , a. y. ( 1971 ) . on the uniform convergence of relative frequencies of events to their probabilities . _theory of probability and its applications _ , 16(2):264280 .vapnik , v. ( 1982 ) .estimation of dependences based on empirical data .springer - verlag .vapnik , v. ( 1995 ) .the nature of statistical learning theory .springer .vapnik , v. ( 1998 ) . statistical learning theory .john wiley and sons inc ., new york . a wiley - interscience publication .yang , y. ( 2007 ) .consistency of cross validation for comparing regression procedures ._ accepted by annals of statistics_. zhang , p. ( 1993 ) .model selection via multifold cross - validation ._ annals of statistics _ , 21:299313 .zhang , t. ( 2001 ) . a leave - one - out cross validation bound for kernel methods with applications in learning ._ 14th annual conference on computational learning theory _ - springer .we recall three very useful results . the first one , due to , bounds the difference between the empirical mean and the expected value .the second one , due to , bounds the supremum over the class of predictors of the difference between the training error and the generalization error . the last one is called the bounded differences inequality . 
Let $\mathcal{C}$ be a class of predictors with finite VC-dimension and a bounded loss function. The precise statements of the three results are standard (see Hoeffding 1963; Vapnik and Chervonenkis 1971; McDiarmid 1989 in the references above). For the bounded differences inequality, the relevant condition is that replacing a single observation $x_i$ by $x_i^{\prime}$ changes $\mathbb{E}_{V_n^{tr}}\sup_{\phi\in\mathcal{C}}\bigl(\widehat{R}_{V_n^{tr}}(\phi)-R(\phi)\bigr)$ by at most $1/n$ (primes denote quantities computed on the modified sample):
\begin{aligned}
\sup_{\substack{x_{1},\ldots,x_{i},\ldots,x_{n}\\ x_{i}^{\prime}}}
\left|\,\mathbb{E}_{V_{n}^{tr}}\sup_{\phi\in\mathcal{C}}\bigl(\widehat{R}_{V_{n}^{tr}}(\phi)-R(\phi)\bigr)
-\mathbb{E}_{V_{n}^{tr}}\sup_{\phi\in\mathcal{C}}\bigl(\widehat{R}_{V_{n}^{tr}}^{\,\prime}(\phi)-R(\phi)\bigr)\right|
&\leq \sup_{\substack{x_{1},\ldots,x_{i},\ldots,x_{n}\\ x_{i}^{\prime}}}
\mathbb{E}_{V_{n}^{tr}}\left|\sup_{\phi\in\mathcal{C}}\bigl(\widehat{R}_{V_{n}^{tr}}(\phi)-R(\phi)\bigr)
-\sup_{\phi\in\mathcal{C}}\bigl(\widehat{R}_{V_{n}^{tr}}^{\,\prime}(\phi)-R(\phi)\bigr)\right|
&&\text{by Jensen's inequality}\\
&\leq \sup_{\substack{x_{1},\ldots,x_{i},\ldots,x_{n}\\ x_{i}^{\prime}}}
\mathbb{E}_{V_{n}^{tr}}\sup_{\phi\in\mathcal{C}}
\bigl|\widehat{R}_{V_{n}^{tr}}(\phi)-\widehat{R}_{V_{n}^{tr}}^{\,\prime}(\phi)\bigr|
&&\text{since } |\sup f-\sup g|\leq\sup|f-g|\\
&\leq \frac{1}{n}.
\end{aligned}
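A quick numerical sanity check of the last chain of inequalities: for a finite set of thresholds standing in for the VC class, replacing any single observation changes the supremum of the centered empirical risks by at most 1/n. The "true risk" curve used below is an arbitrary placeholder, since it cancels in the difference anyway.

```python
# Numerical check of the bounded-differences property: swapping one observation
# changes sup_phi (R_hat(phi) - R(phi)) by at most 1/n. Class = thresholds on a grid.
import numpy as np

rng = np.random.default_rng(2)
n = 50
ts = np.linspace(-1.0, 1.0, 41)                  # candidate thresholds (finite class)
x = rng.uniform(-1.0, 1.0, n)
y = (x > 0.0).astype(int)
R = 0.5 * np.abs(ts)                             # placeholder "true risk" curve

def sup_gap(x, y):
    emp = np.array([np.mean((x > t).astype(int) != y) for t in ts])
    return float(np.max(emp - R))

base = sup_gap(x, y)
worst = 0.0
for i in range(n):                               # replace observation i by a fresh draw
    x2, y2 = x.copy(), y.copy()
    x2[i], y2[i] = rng.uniform(-1.0, 1.0), int(rng.integers(0, 2))
    worst = max(worst, abs(sup_gap(x2, y2) - base))
print(f"max observed change = {worst:.4f}  <=  1/n = {1.0 / n:.4f}")
```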
|
In this article, we derive concentration inequalities for the cross-validation estimate of the generalization error of empirical risk minimizers. In the general setting, we prove sanity-check bounds in the spirit of _bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate_. General loss functions and classes of predictors with finite VC-dimension are considered. We closely follow the formalism introduced by to cover a large variety of cross-validation procedures, including leave-one-out cross-validation, -fold cross-validation, hold-out cross-validation (or split sample), and the leave--out cross-validation. In particular, we focus on proving the consistency of the various cross-validation procedures. We point out the interest of each cross-validation procedure in terms of its rate of convergence. An estimation curve with transition phases, depending on the cross-validation procedure and not only on the percentage of observations in the test sample, gives a simple rule for choosing the cross-validation procedure. An interesting consequence is that the size of the test sample is not required to grow to infinity for the consistency of the cross-validation procedure. Keywords: cross-validation, generalization error, concentration inequality, optimal splitting, resampling.
|
how does a stellar object become a supernova remnant ( snr ) after the supernova event ?some hints come from observations of supernovae in other galaxies ( e.g * ? ? ?* ) but since they are far away they are not quite informative for understanding of how young snrs obtain their look , how properties of the explosion and ambient medium affect their evolution .different time and length scales need to be treated in order to model the transfiguration of sn to snr , that creates difficulties for numerical simulations . in the last yearshowever a number of studies has been performed in order to understand the involved processes .they adopt either one - dimensional simulations ( e.g. * ? ? ?* ; * ? ? ?quite recently three - dimensional models . in order to relate an snr model to observations, one has to simulate emission .radiation of the highly energetic particles is an important component of a model .the particle spectrum has to be known in order to simulate their emission .the _ non - stationary _ solution of the diffusion - convection equation has to be used in order to describe the distribution function of these particles in young snrs because the acceleration is not in the steady - state regime yet .there are evidences from numerical simulations that the particle spectrum could not be stationary even in the rather old snrs .there is well known approach to derive the time - dependent solution and expression for the acceleration time .the original formulation has been developed i ) for the spatially constant flow velocities and diffusion coefficients before and after the shock , ii ) for the momentum dependence of the diffusion coefficient of the form with the constant index , iii ) for the impulsive or the constant particle injection , iv ) for the monoenergetic injection of particles at the shock front and v ) for the case when the acceleration time upstream is much larger than that downstream . was the first to consider the time - dependent acceleration and has given a solution for and the diffusion coefficient independent of the particle momentum . has presented a way to generalize his own solution to include also the spatial dependence of the flow velocity and the diffusion coefficient . have found a generalization of the solution ( and momentum independent ) which allows one to consider different and .they have also obtained the expression for the acceleration time if there are the free - escape boundaries upstream and downstream of the shock . have generalized the solution to the time evolution of the pre - existing seed cosmic rays , i.e. the authors have generalized the treatment to the impulsive ( at time ) injection of particles residing in the half - space before the shock and being distributed with some spectrum . the approach to treat the time - dependent non - linear accelerationis developed by who have not obtained the solution but made an important progress in derivation of the acceleration time for the case when the particle back - reaction on the flow is important . 
in the present paper , the drury s test - particle approachis extended to more general situations .namely , few different representations for are obtained ; a way to avoid the limitation in deriving the distribution function at the shock is presented ; a solution is written in a way to allow for any time variation of the injection efficiency ; a possibility for the diffusion coefficient to have other than the power - law dependence on momentum is considered .the structure of the paper is as follows .the task and main assumptions are stated in sect .[ kineq2:kineqbase ] .the three different approaches to solve the non - stationary equation are presented in sections [ kineq2:kineqi ] , [ kineq2:kineqii ] and [ kineq2:kineqiii ] respectively .then , in sect .[ kineq2:discussion ] , we demonstrate when and to which extent our generalized solution differs from the original drury s formulation ( sect .[ kineq2:pmax00 ] ) and discuss implications of the time - dependent injection efficiency on the particle spectrum ( sects .[ kineq2:injteffect ] , [ kineq2:pinter ] ) . sect. [ kineq2:conclusions ] concludes .some mathematical identities used in the present paper are listed in the appendix [ kineq2:app1 ] .we consider the parallel shock and ( without loss of generality ) the coordinate axis to be parallel to the shock normal .the shock front is at .the flow moves from to .the one - dimensional equation for the isotropic non - stationary distribution function is : +\frac{1}{3}\frac{du}{dx}p{\frac{\partial{f}}{\partial{p}}}+q \label{kineq : kineq}\ ] ] where is the time , the spatial coordinate , the momentum , the diffusion coefficient , the source ( injection ) term .the equation is written in the reference frame of the shock front .the velocities of the scattering centers are assumed to be much smaller than the flow velocity .the injection term is considered as a product of terms representing temporal , spatial and momentum dependence in particular , it is and , where is the heaviside step function , in . in the present paper ,the injection is assumed to be isotropic and monoenergetic with the initial momentum ( e.g. * ? ? ?* ) : where the parameter is the injection efficiency ; it gives the fraction of particles which are accelerated .the particles are injected at the shock front : .different representations of the term are considered ; for example , it is for the constant injection and for the impulsive injection .a number of other assumptions are typically used in order to solve the equation .the distribution function is the distribution is continuous at the shock : where the index ` o ' represents values at the front ( ) , the index ` 1 ' denotes the point right before the shock front ( ) and the index ` 2 ' marks the point right after the shock ( ) .the distribution function is uniform downstream of the shock : there is no seed energetic particles far upstream : the flow velocity is spatially constant before and behind the shock : where both and are positive and constant , .the ratio is the shock compression factor .( [ kineq : umova7a])-([kineq : umova7b ] ) are related to the ` test - particle ' regime when the accelerated particles do not modify the flow structure . 
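For reference, the transport equation solved in the following sections (Eq. [kineq:kineq]) is, in standard notation, the one-dimensional diffusion--convection equation of test-particle theory (cf. Drury 1983):
\begin{equation}
  \frac{\partial f}{\partial t} + u\,\frac{\partial f}{\partial x}
  = \frac{\partial}{\partial x}\!\left[ D(x,p)\,\frac{\partial f}{\partial x} \right]
  + \frac{1}{3}\frac{du}{dx}\, p\,\frac{\partial f}{\partial p} + Q(t,x,p),
\end{equation}
with f(t,x,p) the isotropic distribution function, u the flow speed, D the diffusion coefficient and Q the injection term, written in the reference frame of the shock; this standard form is assumed here to match the notation above.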
in this case , the derivative is in the present paper , we consider the diffusion coefficients and spatially constant in their domains .in this section , the approach to a solution is reviewed and generalized .the solution for the distribution function at the shock was derived initially under a limiting assumption that the particle acceleration time in the upstream medium is much larger than the acceleration time downstream . in the present section ,we show how a more general expression may be obtained .we describe also three ways to write down expressions for the distribution function outside the shock .our generalization of the solution allows for any ( integrable ) dependence of the injection efficiency on time .the treatment of the equation ( [ kineq : kineq ] ) consists in applying the laplace transform to the equation that leads to an equation for the laplace transform of the distribution function : +\frac{1}{3}\frac{du}{dx}p{\frac{\partial{\overline{f}}}{\partial{p}}}+\overline{q{_\mathrm{t}}}(s)q{_\mathrm{p}}(p)\delta(x ) , \label{kineq2:kineqg}\ ] ] note that ( continuous steady - state injection after ) was adopted in the original formulation ; then .hereafter , the over - line marks the laplace transform .the function of interest is given by the inverse laplace transform of the solution of the eq .( [ kineq2:kineqg ] ) . before the shock ( ) and after the shock ( ) , the equation ( [ kineq2:kineqg ] ) simplifies to .\label{kineq2:eqgx}\ ] ] we shall look for the solution in the form where the diffusion coefficients are uniform and is the value of the function in the point .substitution eq .( [ kineq2:eqgx ] ) with ( [ kineq2:solxtp ] ) gives upstream and downstream : .\label{kineq2:betadef}\ ] ] where the condition and thus was used .the correspondence of the sign in ( [ kineq2:betadef ] ) to the upstream and the sign to the downstream is clearly demonstrated by the limit .namely , the stationary solution comes from ( [ kineq2:solxtp])-([kineq2:betadef ] ) with substitution : in order to find the equation for the function on the shock , i.e. for , the equation ( [ kineq2:kineqg ] ) is integrated from to : -\left[d{\frac{\partial{\overline{f}}}{\partial{x}}}\right]_1+\frac{u_2-u_1}{3}p{\frac{\partial{\overline{f{_\mathrm{o}}}}}{\partial{p}}}+ q{_\mathrm{p}}\overline{q{_\mathrm{t}}}(s)=0 , \label{kineq2:eqgxint}\ ] ] the continuity condition ( and then ) as well as ( [ kineq : umova - tp ] ) are used .the expressions for the first two terms are given by differentiation of ( [ kineq2:solxtp ] ) in points and respectively : =\overline{f{_\mathrm{o}}}u_2\beta_2= -\overline{f{_\mathrm{o}}}u_2f_2/2,\ ] ] =\overline{f{_\mathrm{o}}}u_1\beta_1= \overline{f{_\mathrm{o}}}u_1f_1/2+u_1\overline{f{_\mathrm{o}}},\ ] ] where the notations , are introduced .then , the equation for is where and is the spectral index of the function , i.e. . the term is the spectral index of the stationary distribution function .the general solution of the inhomogeneous equation ( [ kineq2:eqgo ] ) is , \label{kineq2:generalsol}\ ] ] where is an arbitrary function .this solution may be rewritten as : where is the solution of the stationary equation , i.e. eq . ( [ kineq : kineq ] ) with : and the arbitrary function was set to zero in order to resemble the known expressions ( e.g. * ? ? ?* ; * ? ? 
?* ) for the stationary solution .the distribution function is given by the inverse laplace transform ( [ kineq2:laplint5 ] ) of : where is the inverse laplace transform of ] , we obtain : where are the inverse laplace transforms of $ ] and are given by ( [ kineq2:t1phi ] ) .we may find in the same way as in sect .[ kineq2:sectvarphio ] .namely , applying binomial decomposition ( [ kineq2:binrozkl ] ) in respect to , using the shift rule ( [ kineq2:laplint3 ] ) in respect to and then the inverse laplace transform ( [ kineq2:laplint4 ] ) to each summand of the decomposition , we derive for expression analogous to eq .( [ kineq2:t1phi ] ) with instead of and .the distribution comes from with obvious substitution .we would like to note , that considers actually the case ; then the expressions for for both regions and are simpler in their approach : the expressions do not contain integration ( because ) and are given just by eq .( [ kineq2:t1phi ] ) with instead of and .one alternative approach to solve the equation ( [ kineq : kineq ] ) consists in splitting this equation into few separate equations , and in applying the laplace transform to equations which are simpler than the original eq ..([kineq : kineq ] ) .the splitting of the diffusion - convection equation into equations for , and is the typical approach in solving the stationary problem ( e.g. * ? ? ?* ; * ? ? ?* ) . from the mathematical point of view ,the task to solve eq .( [ kineq : kineq ] ) may be formulated as the conjugation problem for the linear parabolic equation of the second order with discontinuous coefficients : +u_1{\frac{\partial{f}}{\partial{x}}}=0 , \quad x<0,\label{kineq2:conjtask1}\\ \displaystyle{\frac{\partial{f}}{\partial{t}}}-{\frac{\partial{}}{\partial{x}}}\left[d_2{\frac{\partial{f}}{\partial{x}}}\right]+u_2{\frac{\partial{f}}{\partial{x}}}=0 , \quad x>0,\label{kineq2:conjtask2}\\ f(0,x , p)=0 , \label{kineq2:conjtask3}\\ f_1(t,0,p)=f_2(t,0,p)\equiv f{_\mathrm{o}}(t , p ) , \label{kineq2:conjtask4}\\ \displaystyle\left[d{\frac{\partial{f}}{\partial{x}}}\right]_2-\left[d{\frac{\partial{f}}{\partial{x}}}\right]_1 + \frac{u_2-u_1}{3}p{\frac{\partial{f{_\mathrm{o}}}}{\partial{p}}}+q{_\mathrm{t}}(t)q{_\mathrm{p}}(p)=0 \label{kineq2:conjtask5}\end{aligned}\ ] ] where , again , the index ` 1 ' refers to , the index ` 2 ' to and the index ` o ' to .the conjugation ( matching ) condition ( [ kineq2:conjtask5 ] ) is derived by integration of eq .( [ kineq : kineq ] ) from to under assumption that , , are continuous through the point at any time .the fundamental solutions of the heat conduction equations ( [ kineq2:conjtask1 ] ) and ( [ kineq2:conjtask2 ] ) are where is a real variable and are spatially constant in their domains .we shall look for the solution of the conjugation problem ( [ kineq2:conjtask1])-([kineq2:conjtask5 ] ) in the form of the parabolic simple - layer potentials where are unknown functions to be determined from eqs .( [ kineq2:conjtask4])-([kineq2:conjtask5 ] ) .substitution ( [ kineq2:conjtask4 ] ) with ( [ kineq2:soleqbase ] ) yields the first equation for : note , that we used here .dealing with the condition ( [ kineq2:conjtask5 ] ) , we use the expression for the simple - layer potential jump which , in our case , is we have , from ( [ kineq2:solheat ] ) , that now , the second equation for follows from eq . 
( [ kineq2:conjtask5 ] ) , with the use of ( [ kineq2:jump1 ] ) , ( [ kineq2:jump2 ] ) : thus , we derived the system of equations ( [ kineq2:eqv1 ] ) and ( [ kineq2:eqv2 ] ) for unknown functions and where the first equation ( [ kineq2:eqv1 ] ) is the volterra integral equation of the first kind and the second one ( [ kineq2:eqv2 ] ) is the volterra integral equation of the second kind .there is unknown function in the right - hand side of eq .( [ kineq2:eqv2 ] ) .let us find before solving the system ( [ kineq2:eqv1 ] ) , ( [ kineq2:eqv2 ] ) .both the integrals in eq .( [ kineq2:eqv1 ] ) are equal to due to the continuity of the distribution function , ( [ kineq2:conjtask4 ] ) . with this relation ,( [ kineq2:eqv2 ] ) becomes in order to obtain the equation for , we apply the laplace transform to ( [ kineq2:eqfo1 ] ) and ( [ kineq2:eqfo2 ] ) . the first one , eq .( [ kineq2:eqfo1 ] ) , with the use of the convolution property ( [ kineq2:laplint5 ] ) transforms to eq .( [ kineq2:solheat ] ) yields these two relations allow us to find and their sum . the second one , eq .( [ kineq2:eqfo2 ] ) , after the laplace transform , gives another equation for the sum : equating the two expressions for , we derive the differential equation for which is exactly the same as eqs .( [ kineq2:eqgo ] ) .its solution gives the function , as it is shown in sect .[ kineq2:sectfotp ] and [ kineq2:sectvarphio ] : the function may be obtained by substitution ( [ kineq2:soleqbase ] ) with expression for . in order to have the expression for , we use ( [ kineq2:eqfo3b ] ) in ( [ kineq2:eqfo3c ] ) and write inverting this , we come to an alternative possibility to derive , without the need to know , is to consider the laplace transform of eq .( [ kineq2:soleqbase ] ) : and to represent , given by ( [ kineq2:solheat ] ) , as then we have i ) to apply the laplace transform to this ( the properties to be used are ( [ kineq2:laplint3 ] ) and ( [ kineq2:laplint4 ] ) with ) , ii ) to express from ( [ kineq2:eqfo3 ] ) with ( [ kineq2:eqfo3b ] ) and iii ) to substitute these and into eq .( [ kineq2:laplf(xp ) ] ) .after these steps we have that where \ ] ] which is the same as used in sect . [ kineq2:sectftxpappi ]now , applying the inverse laplace transform to ( [ kineq2:invtrsoli ] ) , we come to the solution which is the same as ( [ kineq2:gensolapii ] ) . note that is in fact ( cf .eq . [ kineq2:varphix ] ) this demonstrates the close relation of the time - dependent acceleration problem to the fundamental solution of the heat conduction equation and reveals the physical meaning of : shift of the distribution in space with velocity and its spread in accordance to .the third approach to the time - dependent acceleration problem deals again with the conjugation equations ( [ kineq2:conjtask1])-([kineq2:conjtask5 ] ) but without the laplace transform . generally speaking , in this way we may overcome the conditions and used during the inverse laplace transform in sect . [ kineq2:sectvarphio ] .the former possibility is important in cases where the dependence of the diffusion coefficient on the particle momentum differs from the power law .the later possibility could be relevant for small particle momenta which are not well above the injection momentum ( this is almost unimportant in the astrophysical environments ) . 
like in the previous section ,the solutions of the heat equations ( [ kineq2:conjtask1 ] ) and ( [ kineq2:conjtask2 ] ) are given by ( [ kineq2:solheat ] ) .we consider the volterra integral equation of the first kind ( [ kineq2:eqv1 ] ) . in order to regularize it, we consider the operator which maps according to the rule applying it to both sides of ( [ kineq2:eqfo1 ] ) , we obtain expressions ( [ kineq2:via ] ) for and which , if the condition holds , may be written as the function may be found by substitution eq .( [ kineq2:soleqbase ] ) with ( [ kineq2:via ] ) or ( [ kineq2:viab ] ) .an _ equation for the non - stationary distribution function at the shock _ comes from eq .( [ kineq2:eqfo2 ] ) : it is worth to note that this equation is derived without laplace transform .the correctness of the equation may be demonstrated by converting it to the equation ( [ kineq2:eqgo ] ) for the laplace transform .the way to do this is following : i ) compare ( [ kineq2:eqfo3c ] ) and ( [ kineq2:deffi ] ) and note that , ii ) apply the laplace transform to ( [ kineq2:eqfonolt2 ] ) and substitute it with this relation between and .introducing the variables defined by relations with given by ( [ kineq2:viab ] ) , we come to the integro - differential equation for : where is given by ( [ kineq2:nonunitermi ] ) and by ( [ kineq2:spectrindsf ] ) .assuming are known , the solution is : with .\label{kineq2:eqcalf}\ ] ] since is expressed through the function which we are looking for , the final solution for may be obtained by the method of successive approximations .the limit of is zero for due to ( [ kineq2:umova1 ] ) and is unity for since by definition . comparing ( [ kineq2:solfoappriii ] ) and ( [ kineq2:solftpq ] ) , we note that the relation between and is simpler for continuous injection ( i.e. ) : .it is useful for applications to note that depends in fact on the one variable only which is a combination .really , \label{kineq2:eqcalftau}\ ] ] where , , , which may be calculated for any ( differentiable ) dependence of the diffusion coefficient on the particle momentum and where we denote .therefore , the method of successive approximations reads \end{array}\ ] ] with the initial guess value it should be noted that this approach to solve the time - dependent equation is highly demanding from the computational point of view because each iteration increases the number of enclosed integrals .the maximum momentum of accelerated particles may be obtained from the expression for the average acceleration time we substitute it with , integrate and determine the maximum momentum from the equation : this result is valid for any relation between and .the expressions for in limits ( index 1 ) and ( index 2 ) follows from ( [ kineq2:highplimit4a ] ) : the maximum momenta are determined by the ratio of the two length - scales : of the shock motion and of the particle diffusion . rewriting ( [ kineq2:highplimit4a ] ) in terms of , namely , we see that shifts toward the smaller momenta with increase of comparing to the solution which assumes .the function and the integral are shown on fig . [ kineq2:fig_a]a , b for two indexes of the diffusion coefficient .it is clear that the probability distribution has a peak .the integral is zero at and reaches unity with increasing . 
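The acceleration-time argument can be made concrete with the classic test-particle expression t_acc(p) = [3/(u_1 - u_2)] \int_{p_0}^{p} [D_1(p')/u_1 + D_2(p')/u_2] dp'/p' (Drury 1983). The sketch below solves t_acc(p_max) = t for a power-law diffusion coefficient; all numerical values are illustrative placeholders, and the comparison at the end shows how including the downstream term lowers p_max, as discussed above.

```python
# Minimal sketch: maximum momentum from the standard test-particle acceleration time
#   t_acc(p) = 3/(u1 - u2) * Integral_{p0}^{p} (D1/u1 + D2/u2) dp'/p'
# for D_i(p) = kappa_i * p**alpha (p in units of the injection momentum p0 = 1).
# All parameter values are illustrative, not fitted to any object.
import numpy as np
from scipy.optimize import brentq

u1, u2 = 4.0e8, 1.0e8            # cm/s, upstream/downstream flow speeds (compression 4)
alpha = 1.0                      # Bohm-like momentum dependence, D ~ p
kappa1 = 1.0e22                  # cm^2/s at p = p0
kappa2 = kappa1 / 4.0            # smaller downstream (compressed magnetic field)
t_age = 1000.0 * 3.15e7          # s, ~1000 yr

def t_acc(p, k1, k2):
    # closed form of the integral for alpha != 0
    geom = 3.0 / (u1 - u2) * (k1 / u1 + k2 / u2)
    return geom * (p**alpha - 1.0) / alpha

p_max_full = brentq(lambda p: t_acc(p, kappa1, kappa2) - t_age, 1.0 + 1e-9, 1.0e12)
p_max_up   = brentq(lambda p: t_acc(p, kappa1, 0.0)    - t_age, 1.0 + 1e-9, 1.0e12)
print(f"p_max with downstream term    : {p_max_full:.3e} p0")
print(f"p_max neglecting downstream   : {p_max_up:.3e} p0")
```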
since , the function represents also dependence on the particle momentum .[ kineq2:fig_a]c shows the integral versus and therefore demonstrates the shape of the particle spectrum around for the steady - state injection .the _ shift of toward smaller momenta with increase of the downstream acceleration time - scale _ is visible on this plot as well .[ kineq2:fig_a ] compares the drury s solution ( [ kineq2:t1phi ] ) derived under assumption ( dashed lines ) with the more general expression ( [ kineq2:genphi ] ) for ( thin solid lines ) and ( thick solid lines ) : the more significant the larger the difference between the lines .however , this effect is almost unimportant if the ratio is larger than few .the simpler drury s formula may then be used in order to approximate the solution .being accelerated , particles become more energetic with time .a particle population ` moves ' with time along the spectrum from the lowest to the largest energies . most of the particles , which started acceleration early , have larger energies at a given time , comparing to particles injected recently .modern models of the high - energy emission from snrs , even quite sophisticated , assume the constant injection efficiency ( i.e. a fraction of particles to be accelerated ) .is there any sign that shocks could be able to keep this fraction constant under different conditions and during ages ?in particular , theoretical consideration of the transport of magnetic turbulence together with the particle acceleration requires the injection to be variable in time .does and how the time - dependent injection affect the particle spectrum ( and therefore their emission ) ? a harder particle spectrum at the highest energiesis typically considered as a sign that particle acceleration is in the effective non - linear regime .however , the variable injection in the time - depending test - particle acceleration may also be responsible for hardening of the spectrum , if the efficiency of injection monotonically decreases ( blue solid line on fig .[ kineq2:fig_b]a ) .in such a situation , more particles reach the high - energy end ( being accelerated from early times when the injection was more effective ) with respect to the less - energetic part of the spectrum ( where particles injected recently with lower efficiency reside ) . in contrast , if the efficiency of injection is constantly increasing with time then the spectrum is relatively softer around ( blue dashed line on fig . [ kineq2:fig_b]a ) . a more prominent bump in the particle spectrum around the largest energiesis expected if a source was more effective in injection around a limited period of time at the beginning ( e.g. during some time after the supernova explosion ) .we use a toy model of the continuous injection plus gaussian \label{kineq2:qtgaussian}\ ] ] in order to simulate a source which effectively injected particles around some time and is supplying them at a constant rate at other times .red lines on fig .[ kineq2:fig_b]b demonstrate that the earlier the particles were injected the larger their momenta are at the present time , as expected .efficient particle injection around results in a bump around ( fig .[ kineq2:fig_b]b , solid red line ) . ) ._ inset _ shows the time variation of the injection efficiency used to produce the blue line on the main plot ( see text for details ) . 
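A minimal numerical version of this toy injection history, q(t) = q_0 + A exp[-(t - t_1)^2/(2 sigma^2)], is given below; the amplitude, centre and width are illustrative placeholders rather than fitted values.

```python
# Toy "constant plus Gaussian" injection efficiency; parameter values are placeholders.
import numpy as np

q0, A, t1, sigma = 1.0, 2.0, 10.0, 10.0        # arbitrary units; time in years
t = np.linspace(0.0, 1000.0, 2001)
q = q0 + A * np.exp(-(t - t1) ** 2 / (2.0 * sigma ** 2))

frac_early = q[t <= 30.0].sum() / q.sum()      # share of particles injected in the first 30 yr
print(f"peak-to-steady ratio = {(q0 + A) / q0:.1f}, "
      f"fraction injected during the first 30 yr = {frac_early:.3f}")
```

The particles carrying this early excess are the ones that, by the present time, populate the high-energy end of the spectrum and produce the bump near the maximum momentum discussed above.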
] as an example , we consider the snr rx j17.13.73946 .the time - dependent injection in the test - particle limit may lead to a shape of the particle spectrum similar to the one which could be due to two effects : efficient acceleration in the non - linear regime and a spectral break due to deterioration of the particle confinement if the shock expands near regions of the weakly ionized medium .[ kineq2:figrx ] ( red dashed line ) shows the proton spectrum as it is given by the non - linear steady - state solution of .this line is plotted for the same parameters as adopted by for snr rx j17.13.73946 : , the mach number , the overall shock compression , injection efficiency .the solid line represents the same spectrum with a break at ( red solid line ) ; the spectral index of the spectrum is after the break where is the index of the distribution function shown by the dashed line .the blue line is a spectrum calculated with the time - dependent test - particle solution ( [ kineq2:solftpq ] ) with : , corresponding to the age of rx j17.13.73946 if it is the remnant of the supernova ad393 , the shock velocity and the injection momentum , and the diffusion coefficient with the index .the variation of the injection with time is described by eq .( [ kineq2:qtgaussian ] ) with , and and , for parameters considered , is represented on the inset plot on fig .[ kineq2:figrx ] : the injection is highest during the first 30 years after the explosion , then the particles enters the acceleration process at a steady - state , lower , rate .the time - dependent spectrum demonstrates that the hardness around the maximum momentum is similar to the nonlinear steady - state solution with the spectral break and could thus also explain the observed emission spectrum of this snr .note that difference between injection efficiency at the maximum and at the constant rate is rather small in this example , it is about 3 times only .it is worth stressing , that _ the particles injected during the first decades after the sn event are actually responsible for the shape of the high - energy end of the particle spectrum _ ( and thus of the high - energy emission ) of snr at the present time .thus , the consideration of the variable injection could be a crucial element in models explaining the observed x - ray and gamma - ray spectra of young snrs .the last point in this subsection : what is the distribution function if the particles were injected during a limited period only and then the injection switched off ?it is modeled here with a simple expression where is the heaviside step function , and the result is shown on fig .[ kineq2:fig_b]c .it is clear again that the highest energy particles are those injected at the earliest times .the low - energy cutoff is evident in the particle spectrum ; it appears due to the suppression of the injection after .are , , respectively , standard deviation . 
]if injection varies only during the first decades after the supernova explosion , then it affects the spectral shape just near the high - energy cut - off .the slope of the particle spectrum at momenta much less than the cut - off momenta remains unchanged in such a model ( fig .[ kineq2:figrx ] ) because the injection process supplied particles at a constant rate at later times ( from which particles were accelerated to smaller energies ; fig .[ kineq2:figrx ] ) .such kind of the time dependence of the injection could be important for interpretation of the -ray and x - ray observations of snrs ( emission from the highest - energy particles ) but not for the radio radiation .however , what if the injection varies also later on ?electrons emitting at a radio frequency , say at in the magnetic field , have energy .the acceleration time - scale for such electrons is of the order of a week , assuming the bohm diffusion and the shock speed .this is much less than the acceleration time of the highest - energy particles which is comparable to the age of an snr .therefore , the radio observations may reveal the _ present - time _ behavior of the injection efficiency . the test - particle shock acceleration in a _stationary _ regime predicts that the spectral index of the distribution depends on the shock compression only ; it is .the value typical for the strong astrophysical shocks in a media with should result in and in the spectral index of the synchrotron emission .the non - stationary consideration with the time - dependent injection could lead to deviation of the spectral index , eq .( [ kineq2:spectrindi ] ) , from the canonical value ( fig .[ kineq2:fig_b]a , blue lines ) . actually , the radio observations could help us to understand how strong could be the time dependence of the injection efficiency in snrs .let s consider an extreme assumption , namely , that the observed spread in the radio spectral index ( fig .[ kineq2:fig_alpha ] ) is completely due to the variable injection .we take for this estimate the temporal term in ( [ kineq2:injtermgen ] ) to be of the form with constant . in order to estimate the effect of the index , one may calculate numerically the radio index from the solution of the time - dependent equation : with eq .( [ kineq2:solftpq ] ) for and eq .( [ kineq2:genphi ] ) for , taking the value of the index at .we find by numerical calculations that , in order to reproduce the observed range , the index should be between if and between if .the dependence is linear for other parameters fixed .it is remarkable that the dependence of the radio spectral index on the parameters which determine the non - stationary distribution function may be found analytically .really , has a peak .if we substitute eq .( [ kineq2:solftpq ] ) with the delta - function instead of and take the derivative ( [ kineq2:defalphar ] ) , we obtain that though this expression is approximate it appears to be close to the dependence derived numerically .this formula leads to an important conclusion . 
_If the acceleration has not yet reached the steady state_ (as in young SNRs), _the spectral index of the accelerated particles_ at intermediate momenta (and thus the radio spectral index) _is not a function of the shock compression factor only_ (as it is in a stationary system for the index ), but depends also on the index which represents the momentum dependence of the diffusion coefficient, , and on the index in the temporal variation of the injection efficiency, . In contrast, a time-dependent injection in the stationary regime of particle acceleration affects only the normalization of the spectrum (through the coefficient in Eq. [kineq2:stationarysol]), but not the spectral index. In the present paper, we generalized the solution of which describes the time-dependent diffusive shock acceleration of test particles. Three representations of the spatial variation of the particle distribution function are presented: Eq. ([kineq2:gensolapii]) gives through , Eq. ([kineq2:solfx2]) yields versus , and Eq. ([kineq2:solfx2b]) relates with . Our generalized solution ([kineq2:gensol]) for the distribution function at the shock is valid for any ratio between the acceleration time-scales upstream and downstream of the shock and allows one to consider the time variation of the injection efficiency. It is shown that, if the ratio decreases (i.e. the significance of the downstream acceleration time grows), then the particle maximum momentum is smaller compared to that calculated under the assumption . The reason is visible from Eq. ([kineq2:highplimit4a]): namely, is determined by the ratio between the length-scale of the shock motion and the length-scale of the diffusion; the larger the diffusion length-scale, the smaller the maximum momentum.
however , if the ratio is larger than few then the simpler expression ( [ kineq2:solftpq ] ) may be used for the particle distribution function , with given by ( [ kineq2:t1phi ] ) .if , in addition , the injection is continuous and constant ( ) , then our generalized solution becomes the same as in .the time dependence of the injection efficiency is an important factor in formation of the shape of the particle spectrum at all momenta .the high - energy end of the accelerated particle spectrum is formed by particles injected at the very beginning .therefore , the temporal evolution of injection , especially during the first decades after the supernova explosion , does affect the non - thermal spectra of young snrs and has to be considered in interpretation of the x - ray and gamma - ray data .the stationary solution of the shock particle acceleration predicts that the power - law index of the cosmic ray distribution is determined by the shock compression only .in contrast , in young snrs where acceleration is not presumably steady - state , this index ( let s call it to distinguish from the stationary index ) depends also on the indexes and in the approximate expressions for the diffusion coefficient and for the temporal evolution of the injection efficiency .namely , it is .this property of the time - dependent solution could be responsible for deviation of the observed radio index from the classical value in some young snrs .since the acceleration times for electrons emitting at radio frequencies are very small , the observed slopes of the radio spectra could reflect the current evolution of the injection in snrs .this work is partially funded by the prin inaf 2014 grant ` filling the gap between supernova explosions and their remnants through magnetohydrodynamic modeling and high performance computing ' .99 amato e. , blasi p. 2006 mnras 371 , 1251 badenes c. et al .2008 , apj , 680 , 1149 bateman h. 1954 _ tables of integral transforms _blasi p. 2002 aph 16 ,429 blasi p. 2010 mnras 402 , 2807 blasi p. , amato e. , caprioli d. 2007 mnras 375 , 1471 brose r. , telezhinsky , pohl m. 2016 a&a , in print ( doi : http://dx.doi.org/10.1051/0004-6361/201527345 ) drury l. 1983 rep .phys . , 46 , 973 drury l. 1991 mnras , 251 , 340 forman m. , drury l. 1983 proc .18th icrc , 2 , 267 green d. 2014 bulletin of the astronomical society of india , 42 , 47 jones f. 1990 apj , 361 , 162 malkov m. , diamond p. , sagdeev r. 2005 apj 624 , l37 malkov m. , diamond p. , sagdeev r. nature commun . , 2011 , 2 , 194 orlando s. , miceli m. , pumo m. , bocchino f. 2015 apj , 810 , 168 orlando s. , miceli m. , pumo m. , bocchino f. 2016 apj , 822 , 22 ostrowski m. , schlickeiser r. 1996 solar physics , 167 , 381 patnaude et al .2015 apj , 803 , 101 skilling j. 1975 mnras 172 , 557 tang x. , chevalier r. 2015 apj 800 , 103 toptygin i. 1980 ssrv , 26 , 157 wang z. , qu q .- y ., chen y. 1997 a&a 318 , l59 weiler k. , sramek r. , panagia n. , van der hulst j. , salvati m. 
1986 apj , 301 , 790in the main text of the paper , the binomial series as well as the direct and the inverse laplace transforms are used .some properties of the transform are : \right\}= \\\\=\displaystyle \frac{\exp\left[-\nu/(4t)\right]}{2^{n/2}t^{n/2 + 1/2}\sqrt{\pi}}\ \mathrm{he}_n\left(\frac{\nu^{1/2}}{2^{1/2}t^{1/2}}\right ) , \end{array } \label{kineq2:laplint4}\ ] ] where is the generalized hermite polinomial , is integer , is real positive number .the last relation is presented on p.246 in .the first values are : ; .hermite polinomial and the generalized hermite polinomial are related as there is a decomposition ( weisstein e. _ mathworld a wolfram web resource _ at http://mathworld.wolfram.com/hermitepolynomial.html ) where .it follows from here that where , or with to be hermite numbers .it can be shown that -y^{n-1}\mathrm{h}_{n}[x+1/(2y ) ] .\end{array } \label{kineq2:hermcma}\ ] ] in order to prove this , one has to use ( [ kineq2:herm3 ] ) and property
|
Three approaches are considered to solve the equation which describes the time-dependent diffusive shock acceleration of test particles at non-relativistic shocks. First, the solution of Drury (1983) for the particle distribution function at the shock is generalized to any relation between the acceleration time-scales upstream and downstream and to a time-dependent injection efficiency. Three alternative solutions for the spatial dependence of the distribution function are derived. Then, two other approaches to solve the time-dependent equation are presented, one of which does not require the Laplace transform. Finally, our more general solution is discussed, with particular attention to time-dependent injection in supernova remnants. It is shown that, compared to the case with a dominant upstream acceleration time-scale, the maximum momentum of accelerated particles shifts toward smaller momenta as the downstream acceleration time-scale increases. The time-dependent injection affects the shape of the particle spectrum. In particular, i) the power-law index is not solely determined by the shock compression, in contrast to the stationary solution; ii) the larger the injection efficiency during the first decades after the supernova explosion, the harder the particle spectrum around the high-energy cutoff at later times. This is important, in particular, for the interpretation of radio and gamma-ray observations of supernova remnants, as demonstrated on a number of examples. Keywords: shock waves -- acceleration of particles -- ISM: supernova remnants.
|
age progression , also called age synthesis or face aging , is defined as aesthetically rendering a face image with natural aging and rejuvenating effects for an individual face .it has found application in some domains such as cross - age face analysis , authentication systems , finding lost children , and entertainment .there are two main categories of solutions to the age progression task : prototyping - based age progression and modeling - based age progression .prototyping - based age progression transfers the differences between two prototypes ( e.g. , average faces ) of the pre - divided source age group and target age group into the input individual face , of which its age belongs to the source age group .modeling - based age progression models the facial parameters for the shape / texture synthesis with the actual age ( range ) . intuitively , the natural aging process of a specific human usually follows the general rules in the aging process of all humans , but this specific process should also contain some personalized facial characteristics , e.g. , mole , birthmark , etc . , which are almost invariant with time .prototyping - based age progression methods can not well preserve this * personality * of an individual face , since they are based on the _ general _ rules in the human aging process for a relatively large population .modeling - based age progression methods do not specially consider these personalized details .moreover , they require dense * long - term * ( e.g. age span exceeds 20 years ) face aging sequences for building the complex models. however , collecting these dense long - term face aging sequences in the real world is very difficult or even unlikely .fortunately , we have observed that the short - term ( e.g. age span smaller than 10 years ) face aging sequences are available on the web , such as photos of celebrities of different ages on facebook / twitter .some available face aging databases also contain the dense short - term sequences .therefore , generating personalized age progression for an individual input by leveraging short - term face aging sequences is more feasible . in this paper, we propose an age progression method which automatically renders aging faces in a personalized way on a set of age - group specific dictionaries , as shown in figure [ fig0 ] .primarily , based on the aging-(in)variant patterns in the face aging process , an individual face can be decomposed into an aging layer and a personalized layer .the former shows the general aging characteristics ( e.g. , wrinkles ) , while the latter shows some personalized facial characteristics ( e.g. , mole ) . for different human age groups ( e.g. , 11 - 15 , 16 - 20 , ... ) ,we design corresponding aging dictionaries to characterize the human aging patterns , where the dictionary bases with the same index yet from different aging dictionaries form a particular aging process pattern ( e.g. , they are linked by a dotted line in figure [ fig0 ] ) .therefore , the aging layer of the aging face can be represented by a linear combination of these patterns with a sparse coefficient ( e.g. , ] , where .so far , the aging face of equals the linearly weighted combination of the aging dictionary bases in the age group and the personalized layer , i.e. , for , where and are the common sparse coefficient and the personalized layer , respectively . 
for aging sequences covering all age groups , a personality - aware dictionary learning model is formulated as follows , where denotes the -th column ( base ) of , and parameters and control the sparsity penalty and regularization term , respectively . is used to represent the specific aging characteristics in the age group . *short - term coupled learning .* we have observed that one person always has the dense short - term face aging photos , but no long - term face aging photos .collecting these long - term dense face aging sequences in the real world is very difficult or even unlikely .therefore , we have to use the shot - term face aging pairs instead of the long - term face sequences .let denote the -th face in the age group , and denote the -th face of the same person in the age group , where .let every two neighboring age groups share face pairs , and then there are face aging pairs in total .for the face aging pairs covering the age group and ( ) , we reformulate a personality - aware coupled dictionary learning model to simultaneously learn all aging dictionaries , i.e. , in eqn . , every two neighboring aging dictionaries and corresponding to two age groups are implicitly coupled via the common reconstruction coefficient , and the personalized layer is to capture the personalized details of the person , who has the face pair . it is noted that face aging pairs are overlapped , which guarantees that we can train all aging dictionaries within a unified formulation .let \in \mathbb{r}^{m \times k} ] , \ !! \mathbb{r}^{f\times n} ] and \in \mathbb{r}^{k\times n} ] as follows , * updating .* we update by fixing and .specifically , we update while fixing all the remaining dictionaries excluding .we omit the terms which are independent of in eqn .: where , and . solving eqn . , and we can obtain a closed - form solution of . where two indicators and are defined as follows , after learning the aging dictionary $ ] , for a given face belonging to the age group , we can convert it into its aging face sequence . in the aging dictionary learning phase ( the offline phase ), the neighboring dictionaries are linked via the short - term ( i.e. , covering two age groups ) face aging pairs as the training data .our aging synthesis ( the online phase ) should be consistent with this training phase .therefore , we first generate the aging face in the nearest neighboring age group ( i.e. , age group ) by the learned aging dictionary with one common coefficient , as well as the personalized layer . the coefficient and personalized layerare optimized by solving the following optimization : - \left [ { \begin{array}{*{20}{c } } { { { \bf{w}}^g}}\\ { { { \bf{w}}^{g + 1 } } } \end{array } } \right]{{\bf{a}}^g } - \left [ { \begin{array}{*{20}{c } } { { { \bf{p}}^g}}\\ { { { \bf{p}}^g } } \end{array } } \right ] } \right\|_2 ^ 2 \\ & + \lambda { \left\| { { { \bf{a}}^g } } \right\|_1 } + \gamma \left\| { { { \bf{p}}^g } } \right\|_2 ^ 2 , \end{aligned } \vspace{-2mm}\ ] ] where is an initial estimation . eqn . can be solved by alternatively updating and until convergence , the updating ways are the same as reported in section [ op ] . after that , taking this new aging face in the current age group as the input of aging synthesis in the next age group ( i.e. , age group ) , we repeat this process until all aging faces have been rendered .figure [ fig1 ] shows this age synthesis process .more specifically , if we render an aging face for the input , an initial setting of is needed . 
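the closed - form updates above are only partially legible in this extraction , so the following is a minimal numpy sketch of the alternating scheme the section describes : the shared sparse codes are updated by iterative soft - thresholding , the personalized layer by a ridge - type closed form , and each aging dictionary by least squares on its own age group . the exact objective , the step size and the unit - ball projection of the dictionary columns are assumptions rather than the paper's formulation .

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of the l1 norm, used for the shared sparse codes
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coupled_dictionary_step(Xg, Xg1, Wg, Wg1, A, P, lam=0.1, gamma=0.1, n_ista=50):
    """one alternating pass over a pair of neighboring age groups (a sketch).

    Xg, Xg1 : (m, n) faces of the same persons in age groups g and g+1
    Wg, Wg1 : (m, k) aging dictionaries of the two groups
    A       : (k, n) sparse coefficients shared by each face pair
    P       : (m, n) personalized layers, assumed identical in both groups
    """
    X = np.vstack([Xg, Xg1])                 # coupled reconstruction target
    W = np.vstack([Wg, Wg1])                 # stacked dictionaries share A
    # update A by ISTA on 0.5*||X - W A - P_stacked||^2 + lam*||A||_1
    step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)
    R = X - np.vstack([P, P])
    for _ in range(n_ista):
        A = soft_threshold(A - step * (W.T @ (W @ A - R)), step * lam)
    # update P in closed form for an assumed ridge penalty gamma*||P||^2;
    # P appears in both groups, so the two residuals are averaged
    P = ((Xg - Wg @ A) + (Xg1 - Wg1 @ A)) / (2.0 + gamma)
    # update each dictionary by least squares on its own group,
    # then project the columns onto the unit ball
    Wg, Wg1 = (Xg - P) @ np.linalg.pinv(A), (Xg1 - P) @ np.linalg.pinv(A)
    Wg /= np.maximum(np.linalg.norm(Wg, axis=0, keepdims=True), 1.0)
    Wg1 /= np.maximum(np.linalg.norm(Wg1, axis=0, keepdims=True), 1.0)
    return Wg, Wg1, A, P
```

iterating this pass over all pairs of neighboring age groups until the reconstruction error stabilizes mirrors the alternating updates of algorithm [ alg1 ] in spirit , not in exact form .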
in the scenario of age progression, we use the average face of the age group as the initialization of . however , these outputs for are not desired due to the facial differences between individual face and average face .we repeat the rendering of all aging faces with the new input pairs , ,, .we find that generally we can obtain invariable and desired aging faces when we repeat this process three times .a visualized example is shown in figure [ fig2b ] .algorithm [ alg2 ] describes the age progression synthesis in detail .* data collection . * to train the high - quality aging dictionary ,it is crucial to collect sufficient and dense short - time face aging pairs .we download a large number of face photos covering different ages of the same persons from google and bing image search , and other two available databases , cross - age celebrity dataset ( cacd ) and morph aging database .the cacd database contains more than 160,000 images of 2,000 celebrities with the age ranging from 16 to 62 .the morph database contains 16,894 face images from 4,664 adults , where the maximum and average age span are 33 and 6.52 years respectively .both of cacd and morph contain quite a number of short - term intra - person photos . since these faces are mostly in the wild " , we select the photos with approximately frontal faces ( to ) and relatively natural illumination and expressions .face alignment are implemented to obtain aligned faces . to boost the aging relationship between the neighboring aging dictionaries , we use collection flow to correct all the faces into the common neutral expression .we divide all images into 18 age groups ( i.e. , ) : 0 - 5 , 6 - 10 , 11 - 15 , 16 - 20 , 21 - 30 , 31 - 40 , 41 - 50 , 51 - 60 , 61 - 80 of two genders , and find that no person has aging faces covering all aging groups .actually , the aging faces of most subjects fall into only one or two age groups ( i.e. most persons have face photos covering no more than 20 years ) .therefore , we further select those intra - person face photos which densely fall into two neighboring age groups .finally , there are 1600 intra - person face pairs for training ( 800 pairs for males , and 800 pairs for females ) .every two neighboring age groups for one gender share 100 face aging pairs of the same persons and each age group , except for the 0 - 5 " age group and the 61 - 80 " age group , has 200 face photos .we train two aging dictionaries for male and female , respectively . *pca projection .* take the male subset as an example .we stack images in the age group as columns of a data matrix , where for , otherwise .the svd of is .we define the projected matrix , where is truncated to the rank = ( ) .we use the same strategy for the female subset .* parameter setting . * the parameters and in eqn . are empirically set as and .the number of bases of each aging dictionary is set as . in figure[ fig2a ] , we show the convergence properties of aging dictionary learning for male and female subsets .as expected , the objective function value decreases as the iteration number increases .this demonstrates that algorithm [ alg1 ] achieves convergence after about iterations. 
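a minimal sketch of the pca projection step described above : the faces of one age group are stacked as columns , the singular value decomposition is taken , and the data are projected onto the leading left singular directions . the rank in the usage comment is illustrative , not the value used in the paper .

```python
import numpy as np

def pca_project(X, rank):
    """project stacked face vectors (columns of X) onto the top `rank`
    left singular directions of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U_r = U[:, :rank]                # truncated basis
    return U_r, U_r.T @ X            # basis and low-dimensional features

# usage (one matrix per age group and gender; the rank is illustrative):
# X_g = np.column_stack([face.ravel() for face in faces_in_group_g])
# B_g, F_g = pca_project(X_g, rank=100)
```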
* aging evaluation .* we adopt three strategies to comprehensively evaluate the proposed age progression .first , we qualitatively evaluate the proposed method on the fgnet database , which is a publicly available database and has been widely used for evaluating face aging methods .this database contains 1,002 images of 82 persons , and the age range spans from 0 to 69 : about 64% of the images are from children ( with ages 18 ) , and around 36% are from adults ( with ages 18 ) .we show the age progression for every photo in the fgnet dataset , and do qualitative comparison with the corresponding ground truth ( available older photo ) for each person . for reference, we also reproduce some aging results of other representative methods .second , we conduct user study to test the aging faces of our method compared with the prior works which reported their best aging results .our method uses the same inputs as in these prior works .third , cross - age face recognition and cross - age face verification are challenging in extreme facial analysis scenarios due to the age gap .a straightforward way for cross - age facial analysis is to use the aging synthesis to normalize the age gap .specifically , we can render all the faces to their aging faces within the same age range , and then employ the existing algorithms to conduct face verification .inspired by this , we can also use the face verification algorithm to prove that the pair of aging face and ground truth face ( without age gap ) is more similar than the original face pair with age gap .we take each photo in fgnet as the input of our age progression .to well illustrate the performance of the proposed age progression , we compare our results with the released results in an online fun demo : face transformer demo ( ft demo ) , and also with those by a state - of - the - art age progression method : illumination - aware age progression ( iaap ) .by leveraging thousands of web photos across age groups , the authors of iaap presented a prototyping - based age progression method for automatic age progression of a single photo .ft demo requires manual location of facial features , while iaap uses the common aging characteristics of average faces for the age progression of all input faces .some aging examples are given in figure [ fig4 ] , covering from baby / childhood / teenager ( input ) to adult / agedness ( output ) , as well as from adult ( input ) to agedness ( output ) . by comparing with ground truth, we can see that the aging results of our method look more like the ground truth faces than the aging results of other two methods .in particular , our method can generate personalized aging faces for different individual inputs . in terms of texture change ,the aging face of ours in figure [ fig4](a ) has white mustache that is closer to ground truth ; in shape change , the aging faces of ours in figure [ fig4](b)(e)(f ) have more approximate facial outline to the ground truth ; in aging speed , the faces of ft demo and iaap in figure [ fig4](c ) are aging more slowly , while one of ft demo in figure [ fig4](d ) is faster . 
overall , the age speed of iaap is slower than ground truth since iaap is based on smoothed average faces , which maybe loses some facial textual details , such as freckle , nevus , aging spots , etc .ft demo performs the worst , especially in shape change .our aging results in figure [ fig4 ] are more similar to the ground truth , which means our method can produce much more personalized results .some prior works on age progression have posted their best face aging results with inputs of different ages , including , , , , , , , , , and .there are 246 aging results with 72 inputs in total . our age progression for each inputis implemented to generate the aging results with the same ages ( ranges ) of the posted results .we conduct user study to compare our aging results with the published aging results . to avoid bias as much as possible, we invite 50 adult participants covering a wide age range and from all walks of life . they are asked to observe each comparison group including an input face , and two aging results ( named a " and b " ) in a random order , and tell which aging face is better in terms of _ personality _ and _ reliability_. _ reliability _ means the aging face should be natural and authentic in the synthetic age , while _ personality _ means the aging faces for different inputs should be identity - preserved and diverse . all users are asked to give the comparison of two aging faces using four schemes : a is better " , b is better " , comparable `` , and ' ' neither is accepted " , respectively . we convert the results into ratings to quantify the results .there are 50 ratings for each comparison , 246 comparison groups , and then 12,300 ratings in total .the voting results are as follows : 45.35% for ours better ; 36.45% for prior works better ; and 18.20% for comparable " ; for neither is accepted " .the voting results demonstrate that our method is superior to prior works .we also show some comparison groups for voting in figure [ fig5 ] .overall , for the input face of a person in any age range , our method and these prior works can generate an authentic and reliable aging face of any older - age range .this is consistent with the gained relatively - high voting support . in particular , for different inputs , our rendered aging faces have more personalized aging characteristics , which further improves the appealing visual sense .for example in figure [ fig5 ] , the aging faces of ours in the same age range in the and the group of the row have different aging speeds : the former is obviously slower than the latter ; the aging faces of prior works with different inputs in the and groups of the column are similar , while our aging results are more diverse for different individual inputs . to validate the improved performance of cross - age face verification with the help of the proposed age progression , we prepare the intra - person pairs and inter - person pairs with cross ages on the fgnet database . by removing undetected face photos and face pairs with age span no more than 20 years , we select 1,832 pairs ( 916 intra - person pairs and 916 inter - person pairs ) , called original pairs " . among the 1,832 pairs , we render the younger face in each pair to the aging face with the same age of the older face by our age progression method . replacing each younger face with the corresponding aging face , we newly construct 1,832 pairs of aging face and older face , called our synthetic pairs " . 
for fair comparison, we further define our synthetic pairs - slowromancap1@ " as using the given tag labels of fgnet , while our synthetic pairs - slowromancap2@ " is using the estimated gender and age from a facial trait recognition system . to evaluate the performance of our age progression, we also prepare the iaap synthetic pairs - slowromancap1@ " and iaap synthetic pairs - slowromancap2@ " by the state - of - the - art age progression method in .figure [ fig6a ] plots the pair setting .the detailed implementation of face verification is given as follows .first , we formulate a face verification model with deep convolutional neural networks ( deep convnets ) , which is based on the deepid2 algorithm .since we focus on the age progression in this paper , please refer to for more details of face verification with deep convnets .second , we train our face verification model on the lfw database , which is designed for face verification .third , we test the face verification on original pairs , iaap synthetic pairs and our synthetic pairs , respectively . the false acceptance rate - false rejection rate ( far - frr ) curves and the equal error rates ( eer ) on original pairs and synthetic pairs are shown in figure [ fig6 ] .we can see that the face verification on our synthetic pairs achieves lower err than on original pairs and iaap synthetic pairs .this illustrates that the aging faces by our method can effectively mitigate the effect of age gap in cross - age face verification .the results also validate that , for an given input face , our method can render a personalized and authentic aging face closer to the ground truth than the iaap method .since the estimated age for an individual is more consistent with human aging tendency , our / iaap synthetic pairs - slowromancap2@ outperforms our / iaap synthetic pairs - [email protected] this paper , we proposed a personalized age progression method . basically , we design multiple aging dictionaries for different age groups , in which the aging bases from different dictionaries form a particular aging process pattern across different age groups , and a linear combination of these patterns expresses a particular aging process . moreover ,we define the aging layer and the personalized layer for an individual to capture the aging characteristics and the personalized characteristics , respectively . we simultaneously train all aging dictionaries on the collected short - term aging database .specifically , in two arbitrary neighboring age groups , the younger- and older - age face pairs of the same persons are used to train coupled aging dictionaries with the common sparse coefficients , excluding the specific personalized layer . for an input face, we render the personalized aging face sequence from the current age to the future age step by step on the learned aging dictionaries . in future work , we consider utilizing the bilevel optimization for the personality - aware coupled dictionary learning model .this work was partially supported by the 973 program of china ( project no .2014cb347600 ) , the national natural science foundation of china ( grant no . 61522203 and 61402228 ) , the program for new century excellent talents in university under grant ncet-12 - 0632 and the natural science fund for distinguished young scholars of jiangsu province under grant bk2012033 .
|
in this paper , we aim to automatically render aging faces in a personalized way . basically , a set of age - group specific dictionaries is learned , where the dictionary bases corresponding to the same index yet from different dictionaries form a particular aging process pattern across different age groups , and a linear combination of these patterns expresses a particular personalized aging process . moreover , two factors are taken into consideration in the dictionary learning process . first , beyond the aging dictionaries , each subject may have extra personalized facial characteristics , e.g. a mole , which are invariant in the aging process . second , it is challenging or even impossible to collect faces of all age groups for a particular subject , yet much easier and more practical to get face pairs from neighboring age groups . thus a personality - aware coupled reconstruction loss is utilized to learn the dictionaries based on face pairs from neighboring age groups . extensive experiments demonstrate the advantages of our proposed solution over other state - of - the - art methods in terms of personalized aging progression , as well as the performance gain for cross - age face verification by synthesizing aging faces .
|
a single- and multi- gene duplication plays crucial role in evolution . on the proteinomic level, the gene duplication leads to a creation of new proteins that are initially identical to the original ones .in a course of subsequent evolution , the majority of these new proteins are lost as redundant , while some of them survive by diverging , i.e. quickly loosing old and possibly slowly acquiring new functions .the protein - protein interaction network is commonly defined as an evolving graph with nodes and links corresponding to proteins and their interactions .thus a successful single - gene duplication event results in a creation of a new node which is initially linked to all the neighbors of the original node .later , some links between each of the duplicates and their neighbors disappear , fig . ( [ fig_uno ] ) . such network evolution process is commonly called a duplication and divergence .although duplication and divergence is usually considered as the growth mechanism only for protein - protein networks , it also may play a role in a creation of certain new nodes and links in the world wide web , growth of various networks of human contacts by introduction of close acquaintances of existing members , and evolution of many other non - biological networks . a sketch of duplication and divergence event .links between the duplicated vertex and vertices 3 and 4 disappeared as a result of divergence . ]does the evolution dominated by duplication and divergence define the structure and other properties of a network ?so far , most of the attention has been attracted to the study of a degree distribution , which is a probability for a vertex to have links .wagner has provided a numerical evidence that duplication - divergence evolution does not noticeably alter the initial power - law degree distribution , provided that the evolution is initiated with a fairly large network .a somewhat idealized case of the completely asymmetric divergence when links are removed only from one of the duplicates ( as in fig . [ fig_uno ] ) was investigated in refs .it was found that the emerging degree distribution has a power - law tail : for .yet apart from the shape of the degree distribution , a number of other perhaps even more fundamental properties of duplication - divergence networks remain unclear : 1 . how well does the model describe its natural prototype , the protein - protein networks ? 2 .is the total number of links a self - averaging quantity ? 3 .how does the average total number of links depend on the network size ?4 . does the degree distribution scale linearly with ?a non - trivial answer to any of these questions would be much more important than details of the tail of the degree distribution ; the reason why only these details are usually studied is that the more fundamental questions are assumed to have trivial answers .here we shall attempt to answer above questions and we shall also look again at the degree distribution of the duplication - divergence networks . 
as in , we consider a simple scenario of totally asymmetric divergence , where evolution is characterized by a single parameter , link retention probability .it turns out that even such idealized model describes the degree distribution found in the biological protein - protein networks very well .we find that , depending on , the behavior of the system is extremely diverse : when more than a half of links are ( on average ) preserved , the network growth is non - self - averaging , the average degree diverges with the network size , and while a degree distribution has a scaling form , it does not resemble any power law . in a complimentary case of small growth is self - averaging , the average degree tends to a constant , and a degree distribution approaches a scaling power - law form . in the next sectionwe formally define the model and compare the simulated degree distribution to the observed ones .the properties of the model are first analyzed in the tractable and limits ( sec .iii ) and then in the general case ( sec .section v gives conclusions .to keep the matter as simple as possible , we focus on the completely asymmetric version of the model of duplication and divergence network growth .the model is defined as follows ( fig.[fig_uno ] ) : 1 .* duplication*. a randomly chosen target node is duplicated , that is its replica is introduced and connected to each neighbor of the target node .2 . * divergence*. each link emanating from the replica is activated with probability ( this mimics link disappearance during divergence ) .if at least one link is established , the replica is preserved ; otherwise the attempt is considered as a failure and the network does not change .( the probability of the failure is if the degree of the target node is equal to . ) in contrast to duplication - mutation models ( see e.g. ) , no new links are introduced .initial conditions apparently do not affect the structure of the network when it becomes sufficiently large ; in the following , we always assume that the initial network consists of two connected nodes . as in the observed protein - protein interaction networks , in this modeleach node has at least one link and the network remains connected throughout the evolution .these features is the main distinction between our model and earlier models ( see e.g. ) which allowed an addition of nodes with no links and generated disconnected networks with questionable biological relevance .the above simple rules generate networks which are strikingly similar to the naturally occurring ones .this is evident from figs .[ fig_yeast][fig_human ] which compare the degree distribution of the simulated networks and protein - protein binding networks of baker yeast , fruit fly , and human .the protein interaction data for all three species were obtained from the biological association network databases available from ariadne genomics . the data for human ( _ h .sapiens _ ) protein network was derived from the ariadne genomics resnet database constructed from the various literature sources using medscan .the data for baker yeast ( _ s .cerevisiae _ ) and fruit fly ( _ d .melanogaster _ ) networks were constructed by combining the data from published high - throughput experiments with the literature data obtained using medscan as well .each simulated degree distribution was obtained by averaging over 500 realizations .the values of the link retention probability of simulated networks were selected to make the mean degree of the simulated and observed networks equal . 
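a minimal simulation sketch of the growth rule defined above : duplicate a randomly chosen target , retain each copied link with probability sigma , and discard replicas that keep no link . networkx is assumed only for bookkeeping ; the network size and sigma in the usage comment are illustrative .

```python
import random
import networkx as nx

def grow_duplication_divergence(n_final, sigma, seed=None):
    """grow a network by duplication and completely asymmetric divergence:
    start from two connected nodes, duplicate a random target, keep each
    copied link with probability sigma, and reject empty replicas."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_edge(0, 1)                      # initial condition: two linked nodes
    new_id = 2
    while g.number_of_nodes() < n_final:
        target = rng.choice(list(g.nodes))
        kept = [u for u in g.neighbors(target) if rng.random() < sigma]
        if not kept:                      # failed attempt, network unchanged
            continue
        g.add_edges_from((new_id, u) for u in kept)
        new_id += 1
    return g

# degree distribution averaged over realizations, as in the figures below:
# degrees = [d for _ in range(100)
#            for _, d in grow_duplication_divergence(5000, 0.6).degree()]
```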
the number of nodes and the number of links in the corresponding grown and observed networks were therefore equal as well .degree distribution of protein - protein binding network of yeast with =4873 proteins and average degree .the link retainment probability of fitted simulated network . ]degree distribution of protein - protein binding network of fly with =6954 proteins and average degree .the link retainment probability of fitted simulated network . ] degree distribution of protein - protein binding network of human with =5275 proteins and average degree .the link retainment probability of fitted simulated network . ] figures [ fig_yeast][fig_human ] demonstrate that even the most primitive form of the duplication and divergence model ( which does not account for disappearance of links from the original node , introduction of new links , removal of nodes , and many other biologically relevant processes ) reproduces the observed degree distributions rather well .these figures also show that the degree distributions of both simulated and naturally occurring networks are not exactly resembling power - laws that they are commonly fitted to ( see , for example , ) .a possible explanation is that the protein - protein networks ( naturally limited to few tens thousand of nodes ) are not large enough for a degree distribution to converge to its power - law asymptotics . to probe the validity of this argument we present ( fig . [ fig_simul ] ) the degree distributions for networks of up to vertices with link retention probability similar to the fitted to the observed networks , .it follows that a degree distribution does not attain a power - law form even for very large networks , at least for naturally occurring .degree distributions of grown networks with ( bottom to top ) , , and vertices .the link retention probability , all data was averaged over 100 realizations . ]here we analyze duplication - divergence networks in the limits and when the model is solvable and ( almost ) everything can be computed analytically .this case has already been investigated in refs . . herewe outline its properties as it will help us to pose relevant questions in the general case when divergence is present . when , each duplication attempt is successful and the network remains a complete bipartite graph throughout the evolution : initially it is ; at the next stage the network turns into or , equiprobably ; and generally when the number of nodes reaches , the network is a complete graph with every value occurring equiprobably . 
in the complete bipartite graph the degree of a node has one of the two possible values : and .hence in any realization of a network , the degree distribution is the sum of two delta functions : .averaging over all realizations we obtain the total number of links in the complete graph is .averaging over all we can compute any moment ; for instance , the mean is equal to and the mean square is given by in the thermodynamic limit , the link distribution becomes a function of the single scaling variable , namely : with .the key feature of the networks generated without divergence ( ) is the lack of self - averaging .in other words , fluctuations do not vanish in the thermodynamic limit .this is evident from eqs .( [ lav])([pnl ] ) : in the self - averaging case we would have had ( instead of the actual value ) and the scaling function would be the delta function .the lack of self - averaging implies that the future is uncertain a few first steps of the evolution drastically affect the outcome .finally we mention that the limit of our model is equivalent to the classical plya s urn model .the urn models have been studied in the probability theory , have applications ranging from biology to computer science , and remain in the focus of the current research ( see e.g. and references therein ) .let . then in a successful duplication attempt ,the probability of retaining more than one link is very small ( of the order of ) . ignoring it, we conclude that in each successful duplication event , one node and only one link are added , so when the emerging networks are trees .if the degree of the target node is , the probability of the successful duplication is which approaches when .hence any of the neighbors of the target node will be linked to the potentially duplicated node with the same probability .a given node * n * links to the new , duplicated , node in a process which starts with choosing a neighbor of * n * as the target node .the probability of that is proportional to the degree of the node * n*. then the probability of linking to the node * n * is ( as we already established ) so the probability that the new node links to * n * is proportional to its degree .thus we recover the standard preferential attachment model .this model exhibits the well - known behavior : the total number of links is , and the degree distribution is a self - averaging quantity peaked around the average , now move on to the discussion of the general case which is only partially understood .self - averaging of any quantity can be probed by analyzing a relative magnitude of fluctuations of that quantity . as a quantitative measure we shall use the ratio of the standard deviation to the average . for the total number of links, should vanish in the thermodynamic limit if the total number of links is the self - averaging quantity .a lack of self - averaging would be extremely important it would imply that a slight deviation in the earlier development could lead to a very different outcome . even if vanishes in the thermodynamic limit , fluctuations may still play noticeable role if approaches zero too slowly . vs. for ( top to bottom ) .the total number of nodes is obviously a self - averaging quantity for , apparently also self - averaging for , and evidently non self - averaging for .,scaledwidth=45.0% ] simulations ( fig . 
[ chi ] ) show that the system is apparently self - averaging when .it is somewhat difficult to establish what is happening in the borderline case , though we are inclined to believe that self - averaging still holds .the self - averaging is evidently lost at , and the system is certainly non - self - averaging for ( in this situation , see eqs .( [ lav])([lav2 ] ) ) .these findings suggest that in the range the total number of links is _ not _ a self - averaging quantity . according to the definition of the model , a target nodeis chosen randomly .therefore , the probability that a duplication event is successful , or equivalently , the average increment of the number of nodes per attempt is ,\ ] ] where is a probability for a node to have a degree .similarly the increment of the number of links per step is and therefore }.\ ] ] the inequality is valid for all and therefore implying this is obvious geometrically as ( [ l > n ] ) should hold for any connected network . using eq .( [ ln ] ) we can verify the self - consistency of our conclusion ( [ k3 ] ) derived in the case of . substituting ( [ k3 ] ) in ( [ ln ] )we obtain .\ ] ] it confirms our assumption that for vanishing , each successful duplication event increments the number of links by one .to analyze the growth of versus , we use the definition ( [ nu ] ) of , an identity , and re - write ( [ ln ] ) as which leads to an algebraic growth . noting that can not exceed one ( this follows from ( [ nu ] ) and the sum rule ) we conclude that growth is certainly super - linear when .hence the average degree diverges with system size algebraically , with .since the average degree grows indefinitely , the probability of the failure to inherit at least one link approaches zero , that is as . therefore we anticipate that asymptotically and with .these expectations agree with simulations fairly well ( fig .[ large ] ) .for instance when , the predicted exponent is close to the fitted one , ( fig . [ large ] ) .the agreement is worse when approaches ; the predicted exponent for is notably smaller than .vs for ( bottom to top , dashed lines ) .solid lines are corresponding power - law best fits for the large parts of the plots : , , .the results are averaged over 100 network realizations ., scaledwidth=45.0% ] in the range , we can not establish on the basis of eq .( [ ln - eq ] ) alone whether the growth is super - linear or linear ( the growth is at least linear as it follows from the lower bound ( [ l > n ] ) ) .the average node degree grows with but apparently saturates when is close to zero ( see fig . [ small ] ) . for average degree seems to grow logarithmically , that is .for the growth of is super - logarithmical ( see fig .[ small ] ) and can be fitted both by with , or by a power - law with a fairly small exponent .vs in the self - averaging regime . ( bottom to top ) .the results are averaged over 100 network realizations ., scaledwidth=45.0% ] hence , taking into account the simulation results and limiting cases considered earlier , the behavior of can be summarized as follows : numerically it appears that . in the next subsection we will demonstrate that . 
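the self - averaging probe used earlier in this section , the ratio of the standard deviation of the total number of links to its mean at fixed network size , can be estimated with the growth routine sketched above ; a decay of the ratio with the network size indicates self - averaging , while a plateau indicates its absence . the parameter values in the comment are illustrative .

```python
import numpy as np

def relative_fluctuation(n_final, sigma, runs=200):
    """monte carlo estimate of R = std(L)/mean(L) at fixed size n_final,
    using grow_duplication_divergence defined above."""
    links = np.array([grow_duplication_divergence(n_final, sigma).number_of_edges()
                      for _ in range(runs)])
    return links.std() / links.mean()

# scanning sigma across the transition region discussed in the text:
# for sigma in (0.3, 0.5, 0.7, 0.9):
#     print(sigma, [relative_fluctuation(n, sigma) for n in (250, 500, 1000)])
```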
a rate equation for the degree distribution is derived in the same manner as eq .( [ ln ] ) : +m_k\ ] ] here we have used the shorthand notation for the probability that the new node acquires a degree .the general term in the sum on the right - hand side of eq .( [ mk ] ) describes duplication event in which links remains and links are lost due to divergence .summing both sides of ( [ nkn ] ) over all we obtain on the left - hand side . on the right - hand side, only the second term contributes to the sum and also gives the same : =\nu , \end{aligned}\ ] ] where the second line was derived using the binomial identity .similarly , multiplying ( [ nkn ] ) by and summing over all we recover ( [ ln - eq ] ) .these two checks show consistency of ( [ nkn ] ) with the growth equations , introduced earlier . vs. for ( bottom to to top ) , , and .the size of the network is for , for , and for .the results are averaged over 100 realizations.,scaledwidth=45.0% ] since depends on all , see ( [ nu ] ) , eqs . ( [ nkn ] ) are non - linear .however , the observations made in the previous subsection allow us to approximate , for any given , as parameter , thus ignoring its possible very slow dependence on . resulting linear eqs .( [ nkn ] ) are still very complicated : if we assume that and employ the continuous approach , we still are left with a system of partial differential equations with a non - local `` source '' term .fortunately , the summand in , that is , is sharply peaked around .hence we can replace by , and eqs .( [ nkn ] ) become still , the analysis of ( [ nkn - pde ] ) is hardly possible without knowing the correct scaling .figure [ degree ] indicates that the form of the degree distribution varies with significantly. we will proceed ( separately for and ) by guessing the scaling and trying to justify the consistency of the guess ._ assuming _ the simplest linear scaling we reduce eq .( [ nkn - pde ] ) to we also used , which is required to assure that is consistent with ( [ ln - eq ] ) . plugging into ( [ nk ] ) we obtain from eq .( [ gamma]).,scaledwidth=45.0% ] this equation has two solutions : and a non - trivial solution which depends on .the second solution decreases from to .the two solutions coincide at .the sum converges when , and the total number of links grows linearly , .apparently the appropriate solution is the one which is larger : for the exponent is , while for the exponent is , fig .[ fig_g ] . in the latter case , and therefore the total number of links grows as .vs for the network of size in the self - averaging regime . ( bottom to top ) .the result for is the exact solution ( [ k3 ] ) , simulation data is averaged over 100 realizations .the corresponding analytical predictions for the exponent are , , and .,scaledwidth=45.0% ] simulations show that for small the degree distribution has indeed a fat tail ( see fig .[ less ] ) .the agreement with the theoretical prediction of the algebraic tail is very good when ( eq . ( [ gamma ] ) gives while numerically ) , not so good when ( vs. ) , and fair at best for .thus we explained the growth law ( [ lnb ] ) .we also arrived at the theoretical prediction of which reasonably well agree with simulation results . 
due to the presence of logarithms ,the convergence is extremely slow and better agreement will be probably very hard to achieve .finally we note that the behaviors and arise in a surprisingly large number of technological and social networks ( see and references therein ) ., , and nodes with .,scaledwidth=45.0% ] the growth law ( [ lnb ] ) suggests an introduction of a scaling form with .then the sum rules and are manifestly satisfied ( provided that the scaling function falls off reasonably fast for ) . simulation results ( see fig . [ more ] )are in a good agreement with above scaling form .we have shown that a simple one - parameter duplication - divergence network growth model well approximates realistic protein - protein networks .table [ tab_uno ] summarizes how the major network features ( self - averaging , evolution of the number of links , the degree distribution ) change when the link retention probability varies .[ cols="^,^,^,^",options="header " , ] two most striking features of duplication - divergence networks are the lack of self - averaging for and extremely slow growth of the average degree for .these features have very important biological implications : the lack of self - averaging naturally leads to a diversity between the grown networks and the slow degree growth preserves the sparse structure of the network . both of these effects occur in wide ranges of parameter and therefore are robust it is hard to expect that nature would have been able to fine - tune the value of if it were not so .our findings indicate that in the observed protein - protein networks , so biologically - relevant networks seem to be in the self - averaging regime .one must , however , take the experimental protein - protein data with a great degree of caution : it is generally acknowledged that our understanding of protein - protein networks is quite incomplete .usually , as the new experimental data becomes available , the number of links and the average degree in these network increases .hence the currently observed degree distributions may reflect not any intrinsic property of protein - protein networks , but a measure of an incompleteness of our knowledge about them .therefore a possibility that the real protein - protein networks are not ( or have not been at some stage of the evolution ) self - averaging is not excluded .using a multitude of direct and indirect methods , von mering et al predicted 78928 links between 5397 yeast proteins which produces a network with the average degree .a power - law fit to this degree distribution has the exponent . ]it has been suggested that randomly introduced links ( mutations ) must compliment the inherited ones to ensure the self - averaging and existence of smooth degree distribution .while a lack of random linking does affect the fine structure of the resulting network , we have observed that the major features like self - averaging , growth law , and degree distribution are rather insensitive to whether random links are introduced or not , provided that the number of such links is significantly less than the number of inherited ones .we performed a number of simulation runs where links between a target node and its image were added at each duplication step with a probability .introduction of such links is the most direct way to prevent partitioning of the network into a bipartite graph ( see ) . 
in other words , without such links the target and duplicated nodes are never directly connected to each other .we observed that for reasonable values of ( in the observed yeast , fly , and human protein - protein networks never exceeds this value ) the results remain unaffected .apparently , without randomly introduced links , the network characteristics establish themselves independently in every subset of vertices duplicated from each originally existing node .we leave more systematic study of the effects of mutations as well as of the more symmetric divergence scenarios ( when links may be lost both on the target and duplicated node ) for the future .many unanswered questions remain even in the realm of the present model .for instance , little is known about the behavior of the system in the borderline cases of and .one also wants to understand better the tail of the degree distribution in the region where follows unusual scaling laws .it will be also interesting to study possible implications of these results for the probabilistic urn models .the authors are thankful to s. maslov , s. redner , and m. karttunen for stimulating discussions .this work was supported by 1 r01 gm068954 - 01 grant from nigms .
|
we show that the protein - protein interaction networks can be surprisingly well described by a very simple evolution model of duplication and divergence . the model exhibits a remarkably rich behavior depending on a single parameter , the probability to retain a duplicated link during divergence . when this parameter is large , the network growth is not self - averaging and an average vertex degree increases algebraically . the lack of self - averaging results in a great diversity of networks grown out of the same initial condition . for small values of the link retention probability , the growth is self - averaging , the average degree increases very slowly or tends to a constant , and a degree distribution has a power - law tail .
|
the noise in physical system gives rise to interesting and sometimes counterintuitive effects .the stochastic resonance and the noise enhanced stability are two examples of noise activated phenomena that have been extensively studied in a wide variety of natural and physical systems such as lasers , spin systems , chemical and biological complex systems .specifically the activated escape from a metastable state is important in the description of the dynamics of non - equilibrium complex systems .recently there has been a growing interest in the application of complex systems methodology to model social systems . in particular the application of statistical physics for modeling the behavior of financial marketshas given rise to a new field called _ econophysics _ .the stock price evolution is indeed driven by the interaction of a great number of traders .each one follows his own strategy in order to maximize his profit .there are fundamental traders who try to invest in solid company , speculators who ever try to exploit arbitrage opportunity and also noise traders who act in a non - rational way .all these considerations allow us to say that the market can be thought as a complex system where the rationality and the arbitrariness of human decisions are modeled by using stochastic processes .the price of financial time series was modeled as a random walk , for the first time , by bachelier .his model provides only a rough approximation of the real behavior of the price time series .indeed it does nt reproduce some of the stylized facts of the financial markets : ( i ) the distribution of relative price variation ( price return ) has fat tails , showing strong non - gaussianity ; ( ii ) the standard deviation of the return time series , called volatility , is a stochastic process itself characterized by long memory and clustering ; ( iii ) autocorrelations of asset returns are often negligible . a popular model proposed to characterize the stochastic nature of the volatility is the heston model , where the volatility is described by a process known as the cox , ingersoll and ross ( cir ) process and in mathematical statistics as the feller process .the model has been recently investigated by econophysicists and solved analytically .models of financial markets reproducing the most prominent features of statistical properties of stock market fluctuations and whose dynamics is governed by non - linear stochastic differential equations have been considered recently in literature .moreover financial markets present days of normal activity and extreme days of crashes and rallies characterized by different behaviors for the volatility .the question whether extreme days are outliers or not is still debated .this research topic has been addressed both by physicists and economists .a langevin approach to the market dynamics , where market crisis was modeled through the use of a cubic potential with a metastable state , was already proposed .there feedback effects on the price fluctuations were considered in a stochastic dynamical equation for instantaneous returns .the evolution inside the metastable state represents the normal market behavior , while the escape from the metastable state represents the beginning of a crisis .systems with metastable states are ubiquitous in physics .such systems have been extensively studied .in particular it has been proven that the noise can have a stabilizing effect on these systems . 
to the best of our knowledgeall models proposed up to now to study the escape from a metastable state contain only a constant noise intensity , which represents in econophysics the volatility .recently theoretical and empirical investigations have been done on the mean exit time ( met ) of financial time series , that is the mean time when the stochastic process leaves , for the first time , a given interval .the authors investigated the met of asset prices outside a given interval of size , and they found that the met follows a quadratic growth in terms of the region size .their theoretical investigation was done within the formalism of the continuous time random walk . within the same formalismthe statistical properties of the waiting times for high - frequency financial data have been investigated in refs . . in this work we model the volatility with the cir process and investigate the statistical properties of the escape times when both an effective potential with a metastable state and the cir stochastic volatility are present .our study provides a natural evolution of the models with constant volatility .the analysis has the purpose to investigate the role of the noise in financial market extending a popular market model , and to provide also a starting model for physical systems under the influence of a fluctuating noise intensity .the paper is organized as follows . in the next sectionthe modified heston model and the noise enhanced stability effect are described . in the third section the results for two extreme cases of this modelare reported . in section we comment the results for the general case and the probability density function of the escape time of the returns , obtained by our model , is compared with that extracted from experimental data of a real market . in the final sectionwe draw our conclusions .the heston model , which describes the dynamics of stock prices as a geometric brownian motion with the volatility given by the cir mean - reverting process , is defined by the following ito stochastic differential equations where is the time - dependent volatility , and are uncorrelated wiener processes with the usual statistical properties in eq .( 1 ) represents a drift at macroeconomic scales . in eq .( [ heston eq ] ) the volatility reverts towards a macroeconomic long time term given by the mean squared value , with a relaxation time . here is the amplitude of volatility fluctuations often called the _ volatility of volatility_. by introducing log - returns in a time window ] are clearly regions of instability for the system . in systems with a metastable state , the random fluctuations can originate the noise enhanced stability ( nes ) phenomenon , an interesting effect that increases the stability , enhancing the lifetime of the metastable state . 
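a minimal euler - maruyama sketch of the dynamics described above : the return moves in a cubic potential with a metastable well while its noise intensity follows the cir process . the cubic coefficients , the position of the absorbing barrier , the starting point and the parameter values below are assumptions chosen only to reproduce the qualitative setting ( a well , a barrier and an escape region ) , since the exact values are not legible in this extraction ; the drift is reduced to the bare potential gradient .

```python
import numpy as np

def simulate_escape(x0, v0, a, b, c, dt=1e-3, t_max=1e3, x_abs=-6.0, seed=None):
    """escape time of the return x(t) from an assumed cubic potential
    U(x) = 2*x**3 + 3*x**2 (well at x=0, barrier at x=-1), with the
    noise intensity v(t) evolving by the CIR process."""
    rng = np.random.default_rng(seed)
    x, v, t = x0, v0, 0.0
    sqdt = np.sqrt(dt)
    while t < t_max:
        z1, z2 = rng.standard_normal(2)            # independent wiener increments
        x += -(6.0 * x * x + 6.0 * x) * dt + np.sqrt(max(v, 0.0)) * sqdt * z1
        v += a * (b - v) * dt + c * np.sqrt(max(v, 0.0)) * sqdt * z2
        t += dt
        if x <= x_abs:                             # absorbing barrier hit
            return t
    return t_max                                    # censored trajectory

# mean escape time over many trajectories for one (a, b, c) triple:
# taus = [simulate_escape(x0=-1.2, v0=1e-2, a=1e-1, b=1e-2, c=1e-2, seed=s)
#         for s in range(1000)]
# met = np.mean(taus)
```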
the mean escape time for a brownian particle moving throughout a barrier given by the the well known exponential kramers law , \label{eqn : kramers}\ ] ] where is a monotonically decreasing function of the noise intensity , and is a prefactor which depends on the potential profile .this is true only if the random walk starts from initial positions inside the potential well .when the starting position is chosen in the instability region , exhibits an enhancement behavior , with respect to the deterministic escape time , as a function of .this is the nes effect and it can be explained by considering the barrier `` _ _ seen _ _ '' by the brownian particle starting at the initial position , that is .moreover is less than as long as the starting position lies into the interval ] ( see fig . [ cubic_pot ] ) and using an absorbing barrier at .each time the walker hits the barrier , the escape time is registered and another simulation starts , placing the walker at the same initial position , but using the volatility value of the barrier hitting time .first of all we present the result obtained in the limit cases where we have only one of the two terms in the cir equation ( [ eqn : bs ] ) .namely : ( a ) only the reverting term ( , revert - only case ) , and ( b ) only the noise term ( , whatever , , noise - only case ) are present . in the case ( a )the volatility is practically constant and equal to , apart from an exponential transient that is negligible for times . -0.2cm the mean escape time as a function of is plotted in fig .[ limit]a for the seven different initial positions indicated in fig .[ cubic_pot ] ( white circles in that figure ) .the curves are averaged over escape events .the nonmonotonic behavior is present . the escape time increases by increasing until it reaches a maximum . after the maximum , when the values of are much greater than the potential barrier height , the kramers behavior is recovered .the nonmonotonic behavior is more evident for starting positions near the maximum of the potential . for starting positions lying in the interval ] .we analyze the escape time through the barrier using different values for parameters , and .we note that the average escape time is measured in arbitrary units ( a. u. ) , the parameter ( measured in a.u . too ) has the dimension of the inverse time , while and are dimensionless. 0.5 cm as a first result we present the behavior observed for as a function of the reverting level . in fig .[ versusb ] we show the curves averaged over escape events .each panel corresponds to a different value of . inside each paneldifferent curves correspond to different values of spanning seven orders of magnitude .the nonmonotonic shape , characteristic of the nes effect , is clearly shown in fig . [ versusb]a .this behavior is shifted towards higher values of as the parameter decreases , and it is always present . in fig .[ versusb]c , which corresponds to a much greater value of ( ) , all the curves are monotonic but with a large plateau .so an increase in the value of causes the nes effect to disappear . to understand this behaviorlet us note that the parameters and play a regulatory role in eq .( [ eqn : bs ] ) . for drift term is predominant while for the dynamics is driven by the noise term , unless the parameter takes great values .in fact in fig .[ versusb]a the nonmonotonic behavior is observed for , provided that . for increasing values of the system approaches the revert - only regime and we recover the behavior shown in fig .[ limit]a . 
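in the revert - only limit just discussed the volatility is essentially constant , so the escape reduces to overdamped motion in the cubic well at a fixed noise intensity ; the sketch below scans that intensity to expose the nonmonotonic mean escape time for starting positions beyond the barrier , reusing the same assumed potential as above . the starting position and intensity values are illustrative .

```python
import numpy as np

def met_constant_volatility(v, x0, runs=500, dt=1e-3, t_max=1e3, x_abs=-6.0, seed=1):
    """mean escape time at fixed noise intensity v (the revert-only limit),
    for the same assumed cubic potential U(x) = 2*x**3 + 3*x**2."""
    rng = np.random.default_rng(seed)
    sq = np.sqrt(v * dt)
    taus = []
    for _ in range(runs):
        x, t = x0, 0.0
        while x > x_abs and t < t_max:
            x += -(6.0 * x * x + 6.0 * x) * dt + sq * rng.standard_normal()
            t += dt
        taus.append(t)
    return float(np.mean(taus))

# the nonmonotonic behaviour appears for unstable starting points, e.g. x0 = -1.1:
# for v in (0.05, 0.2, 0.5, 1.0, 2.0):
#     print(v, met_constant_volatility(v, x0=-1.1))
```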
for the shape of the curves changes .the mean escape time is almost constant , and only for very high values of we observe a decreasing behavior .this happens because for smaller values of the reverting term becomes negligible in comparison with the noise term and the dependence on becomes weaker .only when is large enough the reverting term assumes values that are no more negligible with respect to the noise term and we can observe again a dependence of on . by increasing the value of we observe the nonmonotonic behavior only for a very great value of the parameter , that is for ( see fig . [versusb]b ) . for further increase of parameter ( see fig .[ versusb]c ) , the noise experienced by the system is much greater than the effective potential barrier `` _ _ seen _ _ '' by the fictitious brownian particle and the nes effect is never observable .moreover we are near a noise - only regime , and we can say that the magnitude of is so high as to saturate the system . in summary the nes effect can be observed as a function of the volatility reverting level , the effect being modulated by the parameter .the phenomenon disappears if the noise term is predominant in comparison with the reverting term .moreover the effect is no more observable if the parameter pushes the system towards a too noisy region . as a second step we study the dependence of on the noise intensity . -0.1cm fig .[ versusc ] shows the curves of , averaged over escape events .each panel corresponds to a different value of . inside each paneldifferent curves correspond to different values of .the shape of the curves is similar to that observed in fig .[ versusb ] . for small ( panel a )we observe a nonmonotonic behavior , while for great ( panel b ) the curves are monotonic but with a large plateau .let us recall the results of fig .[ limit]b : there was no nes effect in the noise - only case .so for small values of , when the reverting term is negligible , the absence of the nonmonotonic behavior is expected . by increasing the nonmonotonic behavioris recovered ( see fig . [versusc]a ) . once again , if one of the parameter pushes the system into a high noise region , the nonmonotonic behavior disappears ( see fig . [versusc]b ) .specifically if is high , the reverting term drives the system towards values of volatility that are outside the region where the nes effect is observable . indeed a direct inspection of fig .[ versusb ] shows that the value of used in fig .[ versusc]b is located after the maximum of for all values of and . in summarywhen the noise term is coupled to the reverting term we observe the nes effect on the variable .the effect disappears if is so high as to saturate the system .-0.2 cm as a last result we discuss the behavior observed for as a function of the reverting rate .this allow us to observe the transition between the two regimes of the process discussed above : the noise - only regime and the revert - only regime .the results are reported in fig .[ versusa ] for three different values of and three different values of . 
to reduce the fluctuations in all the curves when the parameter becomes small , we performed simulations by averaging on escape events .it is worthwhile to note that for values of the parameter , we enter in the noise - only regime , which characterizes one of the limit cases discussed in section iii .curves with the same color correspond to the same value of while curves with the same symbol correspond to the same value of .the system tends to the noise - only regime for lower values of and to the revert - only regime for higher values of . on the right end of fig .[ versusa ] the curves corresponding to the same value of tend to group together .the values of the curves approach reflect the nonmonotonic behavior observed in fig .[ versusb]a .indeed all the curves corresponding to the intermediate value of ( ) approach a value of , which almost corresponds to the maximum value of met in fig .[ versusb]a ( we note that the behavior for coincides with that for , even if it is not reported in fig . [versusb]a ) .this value of is greater than that reached by the curves corresponding to other two , lower and greater , values of ( and respectively ) .conversely on the left end the curves corresponding to the same value of tend to group together .it is worth noting that in this last case the curves with the highest value of , namely , show greater fluctuations as those observed in all the previous cases where the noise term is predominant ( see for example fig .[ limit]b ) .it is interesting to show , for our model ( eqs .( 5 ) and ( 6 ) ) , some of the well - established statistical signatures of the financial time series , such as the probability density function ( pdf ) of the stock price returns , the pdf of the volatility and the return correlation . in fig .[ pdf return ] we show the pdf of the returns . to characterize quantitatively this pdf with regard to the width , the asymmetry and the fatness of the distribution, we calculate the mean value , the variance , the skewness , and the kurtosis .we obtain the following values : , , , .these statistical quantities clearly show the asymmetry of the distribution and its leptokurtic nature observed in real market data .in fact , the empirical pdf is characterized by a narrow and large maximum , and fat tails in comparison with the gaussian distribution .specifically we note that the value of the kurtosis , which gives a measure of the distance between our distribution and a gaussian one , is of the same order of magnitude of that obtained for for daily prices ( see fig.7.2 on page 114 of ref . ) . -0.2cm the presence of the asymmetry is very interesting and it will be subject of future investigations .this asymmetry is due to the nonlinearity introduced in the model through the cubic potential ( see fig .[ cubic_pot ] ) .of course a comparison between the pdf of real data and that obtained from the model requires further investigations on the dynamical behavior of the system , as a function of the model parameters . 
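the four summary statistics quoted above for the simulated return pdf can be computed directly ; note that scipy's kurtosis returns the excess kurtosis , i.e. zero for a gaussian .

```python
import numpy as np
from scipy.stats import skew, kurtosis

def return_statistics(returns):
    # mean, variance, skewness and (excess) kurtosis of a return series
    r = np.asarray(returns, dtype=float)
    return r.mean(), r.var(), skew(r), kurtosis(r)
```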
in the following fig .[ pdf volatility ] we show the pdf of the volatility for our model , and we can see a log - normal behavior as that observed approximately in real market data .-0.2 cm finally in fig .[ return correlation ] we show the correlation function of the returns .as we can see the autocorrelations of the asset returns are insignificant , except for very small time scale for which microstructure effects come into play .this is in agreement with one of the stylized empirical facts emerging from the statistical analysis of price variations in various types of financial markets . a quantitative agreement of the pdf of volatility and the correlation of returns with the corresponding quantities obtained from real market data is subject of further studies .-0.2 cm our last investigation concerns the pdf of the escape time of the returns , which is the main focus of our paper . -0.2cm by using our model ( eqs .( 5 ) and ( [ eqn : bs ] ) ) , we calculate the probability density function for the escape time of the returns .we define two thresholds , and , which represent the start point and the end point for calculating respectively . when the return series reaches the value , the simulation starts to count the time and it stops when the threshold is crossed . in order to fix the values of the two thresholds we consider the standard deviation ( sd ) of the return series over a long time period corresponding to that of the real data .specifically is the average of the standard deviations observed for each stock during the above mentioned whole time period ( n is the stock index , varying between and ) .then we set and .we perform our simulations obtaining a number of time series of the returns equal to the number of stocks considered , which is .the initial position is and the absorbing barrier is at . for the cir stochastic process , we choose , , and .the choice of this parameter data set is not based on a fitting procedure as that used for example in ref . . therethe minimization of the mean square deviation between the pdf of the returns , extracted from financial data , and that obtained theoretically is done .we choose the parameter set in the range in which we observe the nonmonotonic behaviour of the mean escape time of the price returns as a function of the parameters and .then by a trial and error procedure we select the values of the parameters , , and for which we obtain the best fitting between the pdf of the escape times calculated from the modified heston model ( eqs .( 5 ) and ( 6 ) ) and that obtained from the time series of real market data .we report the results in fig .of course a better quantitative fitting procedure could be done , by considering also the potential parameters .this detailed analysis will be done in a forthcoming paper . as real data we use the daily closure prices for stocks traded at the nyse and continuously present in the period ( 3030 trading days ) .the same data set was used in previous investigations . 
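a sketch of this escape - time measurement is given below . the sign convention for the two thresholds and the fractions of the standard deviation are assumptions , since the values used above are not reproduced here ; the placeholder series merely has the length of the real data set ( 3030 trading days ) .

import numpy as np

def escape_times(returns, delta_start, delta_end):
    """time intervals between a crossing of the start threshold and the next
    crossing of the end threshold (sign convention below is an assumption)."""
    times, t0 = [], None
    for t, r in enumerate(returns):
        if t0 is None and r <= -delta_start:     # clock starts
            t0 = t
        elif t0 is not None and r <= -delta_end: # absorbing threshold crossed
            times.append(t - t0)
            t0 = None
    return np.array(times)

rng = np.random.default_rng(1)
series = rng.standard_t(df=4, size=3030) * 0.01      # heavy-tailed placeholder returns
sd = series.std()
taus = escape_times(series, 0.1*sd, 1.0*sd)          # placeholder fractions of the sd
pdf, edges = np.histogram(taus, bins=30, density=True)   # empirical pdf of escape times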
from this data set we obtain the time series of the returns and we calculate the time to hit a fixed threshold starting from a fixed initial position .the two thresholds were chosen as a fraction of the average standard deviation on the whole time period , as we have done in simulations .the agreement between real data and those obtained from our model is very good .we note that at high escape times the statistical accuracy is worse because of few data with high values .the parameter values of the cir process for which we obtain good agreement between real and theoretical data are in the range in which we observe the nonmonotonic behavior of met ( see fig . [versusb]a ) .this means that in this parameter region we observe a stabilizing effect of the noise on the prices in the time windows for which we have a variation of returns between the two fixed values and .this encourages us to extend our analysis to large amounts of financial data and to explore other parameter regions of the model .we studied the mean escape time in a market model with a cubic nonlinearity coupled with a stochastic volatility described by the cox - ingersoll - ross equation . in the cir processthe volatility has fluctuations of intensity and it reverts to a mean level at rate .our results show that as long as the mean level is different from zero it is possible to observe a nonmonotonic behavior of met as a function of the two model parameters and .the parameter regulates the transition from a noise - only regime , where reverting term is absent or negligible , to a revert - only regime , where the noise term is absent or negligible . in the former case ,the enhancement of met with a nonmonotonic behavior as a function of the model parameters , that is the nes effect , is not observable .the curves have a monotonic shape with a plateau .moreover , if one of the parameters is so big to push the system into a region where the noise is greater than the barrier height of the effective potential , the effect is no more observable at all . in the revert - only regime , instead , the nes phenomenon is recovered . with its regulatory effect, the reverting rate can be used to modulate the intensity of the stabilizing effect of the noise observed by varying and . in this parameter regionthe probability density function of the escape times of the returns fits very well that obtained from the experimental data extracted by real market .-0.48 cm authors wish to thank dr .f. lillo for useful discussions .this work was supported by miur , infm - cnr and cnism .l. gammaitoni , p. h , p. jung , and f. marchesoni , rev .phys . * 70 * , 223 ( 1998 ) ; v. s. anishchenko , a. b. neiman , f. moss , and l. schimansky - geier , phys .usp . * 42 * , 7 ( 1999 ) ; r. n. mantegna , b.spagnolo , m. trapanese , phys . rev .e * 63 * , 011101 ( 2001 ) ; t. wellens , v. shatokhin , and a. buchleitner , rep .phys . * 67 * , 45 ( 2004 ) .r. n. mantegna , b.spagnolo , phys .lett . * 76 * , 563 ( 1996 ) ; d. dan , m. c. mahato and a. m. jayannavar , phys .e * 60 * , 6421 ( 1999 ) ; r. wackerbauer , phys . rev .e * 59 * , 2872 ( 1999 ) ; a. mielke , phys .lett . * 84 * , 818 ( 2000 ) ; b. spagnolo , a. a. dubkov , and n. v. agudov , acta phys . pol . * 35 * , 1419 ( 2004 ) .n. v. agudov , b. spagnolo , phys .e * 64 * , 035102(r ) ( 2001 ) ; a. fiasconaro , d. valenti and b. spagnolo , physica a * 325 * , 136 ( 2003 ) ; a. a. dubkov , n. v. agudov and b. spagnolo , phys .e * 69 * , 061103 ( 2004 ) ; a. fiasconaro , b. spagnolo and s. 
boccaletti , phys .e * 72 * , 061110(5 ) ( 2005 ) .g. parisi , nature * 433 * , 221(2005 ) ; h. larralde and f. leyvraz , phys .lett . * 94 * , 160201 ( 2005 ) ; c.m. dobson , nature * 426 * , 884 ( 2003 ) ; m. acar , a. becskei , and a. van oudenaarden , nature * 435 * , 228 ( 2005 ) ; c. lee et al ., nature reviews molecular cell biology * 5 * , 7 ( 2004 ) .pankratov and b. spagnolo , phys .. lett . * 93 * , 177001 ( 2004 ) ; h. larralde and f. leyvraz , phys .lett . * 94 * , 160201 ( 2005 ) ; e. v. pankratova , a. v. polovinkin , and b. spagnolo , phys .a * 344 * , 43 ( 2005 ) .anderson , k.j .arrow and d. pines , _ the economy as an evolving complex system _ , ( addison wesley longman , xx , 1988 ) ; p.w .anderson , k.j .arrow , and d. pines , _ the economy as an evolving complex system ii _ , ( addison wesley longman , xx , 1997 ) .j. cox , j. ingersoll , and s. ross , econometrica * 53 * , 385 ( 1985 ) ; j.p .fouque , g. papanicolau and k.r . sircar _derivatives in financial markets with stochastic volatility _ , ( cambridge university press , cambridge , 2000 ) .a. christian silva , richard e. prange , victor m. yakovenko , physica a * 344 * , 227 ( 2004 ) ; a. christian silva , _ application of physics to finance and economics : returns , trading activity and income _ , arxiv : cond - mat/0507022 ( 2005 ) .y. louzoun . and s. solomon , physica a * 302 * , 220 ( 2001 ) ; s. solomon and p. richmond , eur . phys. j. b * 27 * , 257 ( 2002 ) ; o. malcai , o. biham , p. richmond , and s. solomon , phys .e * 66 * , 031102 ( 2002 ) .e. scalas , r. gorenflo , f. mainardi , physica a * 284 * , 376 ( 2000 ) ; f. mainardi , m. raberto , r. gorenflo , and e. scalas , physica a * 287 * , 468 ( 2000 ) ; m. raberto , e. scalas , and f. mainardi , physica a * 314 * , 749 ( 2002 ) ; e. scalas , _ five years of continuous - time random walks in econophysics _ , arxiv : cond - mat/0501261 ( 2005 ) .
|
we study the mean escape time in a market model with stochastic volatility . the process followed by the volatility is the cox ingersoll and ross process which is widely used to model stock price fluctuations . the market model can be considered as a generalization of the heston model , where the geometric brownian motion is replaced by a random walk in the presence of a cubic nonlinearity . we investigate the statistical properties of the escape time of the returns , from a given interval , as a function of the three parameters of the model . we find that the noise can have a stabilizing effect on the system , as long as the global noise is not too high with respect to the effective potential barrier experienced by a fictitious brownian particle . we compare the probability density function of the return escape times of the model with those obtained from real market data . we find that they fit very well .
|
interplay between classical information and quantum state shows non - trivial and remarkable aspects when quantum entanglement is involved . in quantum teleportation , one qubit in an unknown quantum statecan be transmitted from a sender ( alice ) to a receiver ( bob ) by a maximally entangled quantum channel and two classical bit ( cbit ) communication . in order to teleport a quantum state in a -dimensional space , qubits, alice needs to transmit cbits of classical information to bob .this is actually the minimum amount of classical communication , which can be shown by combining teleportation protocol with another striking scheme utilizing quantum entanglement , superdense coding . in teleportation, neither alice nor bob acquires any classical knowledge on teleported states .the teleportation protocol is said to be oblivious to alice and bob . in remote state preparation ( rsp ), however , it is assumed that alice has complete classical knowledge on the state that is to be prepared by bob .the central concern has been whether quantum and classical resources can be reduced by alice s knowledge on the state . in this respect, lo has conjectured that rsp for a general state requires at least as much as classical communication as teleportation .an experimental implementation of rsp scheme has also been reported .recently , leung and shor showed that the same amount of classical information as in teleportation needs to be transmitted from alice to bob in any deterministic and exact rsp protocol that is oblivious to bob . here the assumption that a protocol is oblivious to bob means specifically two things : first , the probability that alice sends a particular classical message to bob , does not depend on the state to be transmitted .second , after bob s quantum operation to restore the state , the ancilla system contains no information on the prepared state . in this paperwe will study exact and deterministic rsp protocols for a general state , but not necessarily oblivious to bob .first we will show that bob s quantum operation can be assumed to be a unitary transformation .we then derive an equation that is a necessary and sufficient condition for such a protocol to exist . by studying this equation ,we show that in order to remotely prepare one qubit in a general state , alice needs to transmit 2 cbits of classical information to bob , which is the same amount as in teleportation , even if the protocol is not assumed oblivious to bob . for a general dimensional case ,it is still open whether the amount of classical communication can be reduced by abandoning oblivious conditions .in this paper we only consider rsp protocols that are exact and deterministic .the diagram of protocol is depicted in fig . 1 .the prior - entangled state shared by alice and bob is assumed to be a maximally entangled state in space ab defined by where system a and b are -dimensional hilbert spaces , with an orthonormal basis .writing , we note that and .given a pure state randomly chosen from an input state space of dimension , alice performs a povm measurement on system a with possible outcomes : remember that since alice is assumed to have complete knowledge of state , the dependence of povm elements on is not limited .the probability for alice to obtain outcome is given by in this paper we do not assume the probability is independent of , implying the protocol may not be oblivious to bob . 
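for reference , in one standard convention the objects just introduced can be written as follows ( normalizations and labels here are our own and may differ from the original displays ) :
\[
|\Phi\rangle \;=\; \frac{1}{\sqrt{d}}\sum_{i=1}^{d} |i\rangle_{A}|i\rangle_{B},
\qquad
M_m(\psi)\;\ge\;0,\qquad \sum_{m=1}^{K} M_m(\psi)=I_A ,
\]
\[
p_m(\psi)\;=\;\langle\Phi|\,M_m(\psi)\otimes I_B\,|\Phi\rangle\;=\;\frac{1}{d}\,\mathrm{tr}\,M_m(\psi).
\]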
with outcome ,bob s system b is given by receiving a classical message ( ) from alice , bob performs a trace - preserving quantum operation on his subsystem b to restore the state : this section we will show that bob s quantum operation is actually a unitary operation , if the rsp protocol works for any state .first we observe the following theorem .let be a trace preserving quantum operation .if for any state there exists a density operator such that , then the quantum operation is a unitary operation , where is a unitary operator . before proving the theorem , we note two general properties of density operator , which will be used in the proof .the first one is that if , then and are identical and pure , which can be shown by the cauchy - schwarz inequality .next , let be a density operator of a system consisting of subsystems q and r. then the second property used in the proof is that if is pure , then , where . this can be seen by observing subadditivity and the triangle inequality of von neumann entropy , by which we find this means that equality in subadditivity holds as , which is true only if .now we will prove the theorem given in the above . in the unitary model of a quantum operation ,the assumption in the theorem is stated as follows : for any there exists a density operator such that where is a standard pure state of ancillary system e and is a unitary operator on the combined system . as we have noted , if a subsystem is pure after tracing out the ancilla system , it is already pure in the combined system and therefore we have we will show that is actually pure and independent of . introducing an orthonormal basis , we write multiplying the above eq.([eq_k ] ) of index with the one of index and taking trace of the product , we find this equation implies that the density operators s have orthogonal supports in the -dimensional space .this is possible only if , where the set is an orthonormal basis of the space .we also find that is pure , since . in the same way as we obtained eq.([eq_kl ] ) , we find summing this equation over and using , we obtain which implies that is pure and given by from this we conclude that is independent of and furthermore for a general has no state dependence either .writing , we thus have sandwiching this between and gives where and .it is clear that the operator must be a unitary operator since is a density operator for any state . since eq.([eq_uni ] ) holds for any , we conclude that the quantum operation is a unitary operation : now remember that bob receives classical message from alice and performs a quantum operation on the state to restore the state that alice wants him to prepare : since this should hold for any state , by the theorem we have just proved , turns out to be a unitary operation : where is unitary .we also note that we did not assume , the state of ancilla system e after bob s quantum operation , is independent of ( oblivious condition ) .but it was shown that should be independent of in the proof of the theorem .now that we have shown that bob s quantum operation is a unitary operation , we can derive an equation that is a necessary and sufficient condition for rsp protocols . from eq.([eq_pm ] ) and eq.([eq_rhom ] ) , we obtain which means that the density operator of system b should not change by alice s povm measurement on system a as long as an outcome of the measurement is unspecified . 
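the two density - operator properties invoked in the proof can be stated compactly in our own notation . for density operators $\rho$ , $\sigma$ and a bipartite state $\rho_{QR}$ ,
\[
\mathrm{tr}(\rho\sigma)=1 \;\Longrightarrow\; \rho=\sigma \ \text{(and both are pure)},
\qquad\text{since}\quad
\mathrm{tr}(\rho\sigma)\le\sqrt{\mathrm{tr}\,\rho^{2}}\,\sqrt{\mathrm{tr}\,\sigma^{2}}\le 1 ,
\]
\[
|S(\rho_{Q})-S(\rho_{R})| \;\le\; S(\rho_{QR}) \;\le\; S(\rho_{Q})+S(\rho_{R}) .
\]
when $\rho_{Q}$ is pure , $S(\rho_{Q})=0$ , so both bounds force $S(\rho_{QR})=S(\rho_{R})=S(\rho_{Q})+S(\rho_{R})$ ; equality in subadditivity holds only for $\rho_{QR}=\rho_{Q}\otimes\rho_{R}$ , which is the factorization used above .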
using the result from the preceding section , we get here s are unitary and is the probability of outcome of alice s povm measurement , therefore and . and we note that this should hold for any state .it is important that eq.([eq_rspb ] ) is also a sufficient condition for rsp protocols .let us assume that eq.([eq_rspb ] ) holds in space b for some unitary operators s and for some probability distribution , then the same equation holds also in space a : since the dimension is the same for spaces a and b , where we wrote for convenience . here for a state , we introduce the state defined as .then it is clear that the following relation also holds : from this relation we can construct povm measurement elements as evidently each is a positive operator and . since alice is assumed to be given complete classical knowledge on state , she can in principle implement this povm measurement .the probability of an outcome is calculated as . and with an outcome being given by , the resultant state of b is given by receiving classical message from alice, bob can restore the state by a single unitary operation as .thus eq.([eq_rspb ] ) is a necessary and sufficient condition for rsp protocols and will be called rsp equation hereafter in this paper .we will study the rsp equation ( [ eq_rspb ] ) , which is a necessary and sufficient condition for rsp protocols : here superscripts a or b are omitted , since the equation should hold in either -dimensional space .first we study the case that the probability is independent of , which is assumed in the paper by leung and shor .we write the -element of eq.([eq_rsp ] ) explicitly : where s are amplitudes of , .it is convenient to introduce an by matrix by and we can further rewrite eq.([eq_explicitrsp ] ) as remember that s are arbitrary apart from the normalization condition and the matrix is assumed to be independent of . therefore the matrix must be a unit matrix : .this implies the rank of is and consequently .therefore the minimum amount of classical information , that alice needs to transmit to bob , is at least cbits in this oblivious case .this is the same amount of classical information as the one in the teleportation . in the case of , implies , from which we obtain therefore solutions are given by a set of unitary operators that are complete and orthonormal with respect to the hilbert - schmidt inner product .as shown by leung and shor , this gives a teleportation protocol , which is also oblivious to alice .this is because alice s povm measurement eq.([eq_povm ] ) can be implemented as a state - independent projective measurement on a combined system of a and input space i : where and it is easy to verify that are complete and orthogonal projectors .one of the sets of s satisfying eq.([eq_trum ] ) are shift operators in coordinate and momentum spaces : where operators and are defined as and `` momentum '' eigenstates s are given by next we will study the case the probability may depend on the state that is to be remotely prepared .the question is whether this dependence can reduce the minimum amount of classical communication . 
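a quick numerical check of this construction uses one concrete realization of the shift operators , the generalized pauli pair x and z in dimension d ( here d = 3 ) : the d^2 products are orthonormal under the hilbert - schmidt inner product and their uniform mixture sends any pure state to the maximally mixed state , as the rsp equation requires in the oblivious case .

import numpy as np

d = 3
w = np.exp(2j*np.pi/d)
X = np.roll(np.eye(d), 1, axis=0)                 # coordinate shift |i> -> |i+1 mod d>
Z = np.diag(w**np.arange(d))                      # phase ("momentum") shift |i> -> w^i |i>
U = [np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Z, k)
     for j in range(d) for k in range(d)]

# hilbert-schmidt orthogonality: tr(U_m^dag U_n) = d * delta_mn
G = np.array([[np.trace(a.conj().T @ b) for b in U] for a in U])
assert np.allclose(G, d*np.eye(d*d))

# uniform mixture of U_m |psi><psi| U_m^dag equals I/d for a random pure state
rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j*rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
mix = sum(u @ rho @ u.conj().T for u in U) / (d*d)
assert np.allclose(mix, np.eye(d)/d)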
in the case of one qubit rsp ( ), we will show that this is not the case : the minimum amount of classical information turns out to be cbits as in teleportation .unfortunately for general dimension , however , we have only limited results : the rsp equation ( [ eq_rsp ] ) immediately tells us that , which is known as holevo s bound , since the equation requires that is complete in the -dimensional space .we can also show that .suppose that the rsp equation ( [ eq_rsp ] ) holds for . generally in a -dimensional space ,the relation is satisfied if and only if the states s are orthonormal .therefore when , the inner product should vanish for any , implying .this , however , contradicts unitarity of s .now we return to the qubit case ( =2 ) .the bloch sphere representation is convenient for a pure qubit state : where is a 3-dimensional unit vector and and are the pauli matrices .we also introduce a 3 by 3 rotation matrix for each unitary operator through the rsp equation is then reduced to which should hold for any unit vector and we emphasize again that the probability may depend on .it can be readily seen that if eq.([eq_rspn ] ) holds for a set of rotation matrices and some probability , it is also satisfied by a set of transformed rotations , with and being any rotation matrices , and the probability . with this freedom, we can safely assume that is a unit matrix , is a rotation about the axis , and is a rotation about an axis in the plane . now suppose that eq.([eq_rspn ] ) holds for and take ( the unit vector along the axis ) , then we find since is a probability distribution , this equation is satisfied only when , namely is a rotation of about the axis . by a similar argument with ( the unit vector along the axis ) , turns out to be a rotation of about the axis .therefore , for a general unit vector , eq.([eq_rspn ] ) with takes the following matrix form : this equation has only a trivial solution for with , since the determinant of the matrix in the equation is .thus we conclude that in order to remotely prepare a general qubit state ( ) , alice needs to transmit cbits of classical information to bob .in this paper we studied rsp schemes without assuming the protocol is oblivious to bob .bob s quantum operation was shown to be just a unitary operation , if the protocol works for a general state . in this sense ,bob s operation is necessarily oblivious to himself .using this fact we have derived the rsp equation , which is a necessary and sufficient condition for such an rsp protocol to exist . by studying this equation , it was shown that in order to remotely prepare one qubit in a general state , alice needs to transmit 2 cbits of classical information to bob , which is the same amount as in teleportation , even if the protocol is not assumed oblivious to bob .so , for one - qubit rsp , lo s conjecture has been proved without oblivious conditions .unfortunately generalization to higher dimensions is not straightforward . though it is not yet clear whether the amount of classical communication can be reduced by abandoning oblivious conditions in higher dimensions .we believe that the rsp equation will be a key to obtain some insights for further study in this direction . 
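the qubit argument can be checked numerically as well . assuming the elided bloch - sphere form of the rsp equation reads sum_m p_m(n) R_m n = 0 ( the unconditioned state of bob s qubit is maximally mixed ) , the four pauli rotations with p_m = 1/4 satisfy it for every unit vector :

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
paulis = [sx, sy, sz]

def bloch_rotation(U):
    """3x3 rotation R with R_ij = tr(sigma_i U sigma_j U^dag)/2."""
    return np.array([[np.trace(si @ U @ sj @ U.conj().T).real / 2
                      for sj in paulis] for si in paulis])

Rs = [bloch_rotation(U) for U in (I2, sx, sy, sz)]   # teleportation solution
rng = np.random.default_rng(0)
for _ in range(5):
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    assert np.allclose(sum(R @ n for R in Rs) / 4, 0.0, atol=1e-12)

the obstruction for three messages can be probed the same way : for the constrained rotations discussed above , the determinant of the 3x3 matrix whose columns are R_1 n , R_2 n , R_3 n is generically nonzero , so only the trivial probability vector annihilates it .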
in this paper the input ensemble , from which the state is randomly chosen ,is assumed to be the entire hilbert space of dimensions .we remark that if the state is chosen from a sub - ensemble of the space , the rsp equation should still hold in the sub - ensemble , as long as bob s action can be assumed to be a unitary operation . in the case of qubits on the equatorial circle of the bloch sphere , the rsp equation ( [ eq_rspn ] ) with is satisfied as where is a unit vector on the equator , is a unit matrix , and is a rotation of about the axis .generalizations of the equator and the polar great circle to higher dimensions have been discussed by zeng and zhang .we can also verify that corresponding rsp equations with are satisfied for those ensembles .
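for the equatorial ensemble a one - line check suffices , assuming the elided two - element solution consists of the identity and a rotation of pi about the z axis , each used with probability 1/2 :

import numpy as np

Rz_pi = np.diag([-1.0, -1.0, 1.0])                    # bloch rotation of pi about the z axis
for phi in np.linspace(0.0, 2*np.pi, 7):
    n = np.array([np.cos(phi), np.sin(phi), 0.0])     # a state on the equator
    assert np.allclose((np.eye(3) @ n + Rz_pi @ n) / 2, 0.0)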
|
in quantum teleportation , neither alice nor bob acquires any classical knowledge on teleported states . the teleportation protocol is said to be oblivious to both parties . in remote state preparation ( rsp ) it is assumed that alice is given complete classical knowledge on the state that is to be prepared by bob . recently , leung and shor showed that the same amount of classical information as that in teleportation needs to be transmitted in any exact and deterministic rsp protocol that is oblivious to bob . we study similar rsp protocols , but not necessarily oblivious to bob . first it is shown that bob s quantum operation can be safely assumed to be a unitary transformation . we then derive an equation that is a necessary and sufficient condition for such a protocol to exist . by studying this equation , we show that one qubit rsp requires 2 cbits of classical communication , which is the same amount as in teleportation , even if the protocol is not assumed oblivious to bob . for higher dimensions , it is still open whether the amount of classical communication can be reduced by abandoning oblivious conditions .
|
in his seminal essay , `` the sciences of the artificial '' , the economist herbert simon suggested that biological systems , including those involving humans , are `` satisficing '' rather than optimising .the process of adaptation stops as soon as the result is deemed good enough , irrespective of the possibility that a better solution might be achieved by further search . in reality, there is no way to find global optima in complex environments , so there is no alternative to accepting less than perfect solutions that happen to be within reach , as ashby argued in his `` design for a brain '' .we shall present results on a schematic `` brain '' model of self - organized learning and adaptation that operates using the principle of satisficing .the individual parts of the system , called synaptic connections , are modified by a negative feedback process until the output is deemed satisfactory ; then the process stops .there is no further reward to the system once an adequate result has been achieved : this is learning by a stick , not a carrot ! the process starts up again as soon as the situation is deemed unsatisfactory , which could happen , for instance , when the external conditions change .the negative signal may represent hunger , physical pain , itching , sex - drive , or some other unsatisfied physiological demand . formally , our scheme is a reinforcement - learning algorithm ( or rather de - inforcement learning , since there is no positive feedback ) , where the strengths of the elements are updated on the basis of the signal from an external critic , with the added twist that the elements ( neuronal connections ) do not respond to positive signals .superficially , one might think that punishing unsuccessful neurons is the mirror equivalent of the usual hebbian learning , where successful connections are strengthened .this is not the case .the hebbian process , like any other positive feedback , continues ad infinitum in the absence of some ad hoc limitation .this will render the successful synapse strong , and all other synapses relatively weak , whereas the negative feedback process employed here stops as soon as the correct response is reached .the successful synaptic connections are only barely stronger than unsuccessful ones .this makes it easy for the system to forget , at least temporarily , its response and adjust to a new situation when need be .the synaptic landscapes are quite different in the two cases .positive reinforcement leads to a few strong synapses in a background of weak synapses .negative feedback leads to many connections of similar strength , and thus a very volatile , noncommittal structure .any positive feedback will limit the flexibility and hence the adaptability of the system .of course , there may be instances where positive reinforcement takes place , in situations where hard - wired connections have to be constructed once and for all , without concern for later adaptation to new situations .the process is self - organized in the sense that no external computation is needed .all components in the model can be thought of as representing known biological processes , where the updating of the states of synapses takes place only through local interactions , either with other neighboring neurons , or with extracellular signals transmitted simultaneously to all neurons .the process of suppressing synapses has actually been observed in the real brain and is known as long term depression , or ltd , but its role for actual brain function has been
unclear . we submit that _ depression _ of synaptic efficacy is the fundamental dynamic mechanism in learning and adaptation , with ltp , the long term potentiation of synapses usually associated with hebbian learning , playing a secondary role .although we did have the real brain in mind when setting up the model , it is certainly not a realistic representation of the overwhelming intricacies of the human brain .its sole purpose is to demonstrate a general principle that is likely to be at work , and which could perhaps lead to the construction of better artificial learning systems .the model presented here is a `` paper airplane '' , which indeed can fly but is completely inadequate to explain the complexity of real airplanes .most neural network modelling so far has been concerned with the artificial construction of memories , in the shape of robust input - output connections .the strengths of those connections are usually calculated by the use of mathematical algorithms , with no concern for the dynamical biological processes that could possibly lead to their formation in a realistic `` in vivo '' situation . in the hopfield model , memories are represented by energy minima in a spin - glass like model , where the couplings between ising spins represent synaptic strengths .if a new situation arises , the connections have to be recalculated from scratch .similarly , the back - propagation algorithm underlying most commercial neural networks is a newtonian optimization process that tunes the synaptic connections to maximize the overlap between the outputs produced by the network and the desired outputs , based on examples presented to the network .all of this may be good enough when dealing with engineering - type problems where biological reality is of no concern , but we believe that this modelling gives no insight into how real brain - like function might come about .intelligent brain function requires not only the ability to store information , such as correct input - output connections .it is mandatory for the system to be able to adapt to new situations , and yet later to recall past experiences , in an ongoing dynamical process .the information stored in the brain reflects the entire history that it has experienced , and can take advantage of that experience .our model illustrates explicitly how this might take place .the extremal dynamics allows one to define an `` active '' level , representing the strength of synapses connecting currently firing neurons .the negative response assures that synapses that have been associated with good responses in the past have strengths that are barely less than the active ones , and can readily be activated again by suppressing the currently active synapses .the paper is organized as follows .the next section defines the general problem in the context of our ideas .the model to be studied can be defined for many different geometries . in section iii we review the layered version of the model , with a single hidden layer .it will be shown how the correct connections between inputs and outputs are generated , and how new connections are formed when some of the output assignments change . in section iv we introduce selective punishment of neurons , such that synapses that have never been associated with correct outputs are punished much more severely than synapses that have once participated in the generation of a good output .it will be demonstrated how this allows for speedy recovery , and hierarchical storage , of old , good patterns .
in multi - layered networks , and in random networks , recovery of old patterns takes place in terms of self - organized switches that direct the signal to the correct output . also , the robustness of the circuit towards noise will be demonstrated .section v shows that the network can easily perform more complicated operations , such as the exclusive - or ( xor ) process , contrary to recent claims in the literature .it can even solve the much more complicated parity problem in an efficient way . in the parity problem ,the system has to decide whether the number of binary 1s among n binary inputs is even or odd . in those problems ,the system does not have to adapt to new situations , so the success is due to the volatility of the active responses , allowing for efficient search of state space without locking - in at spurious , incorrect , solutions .in the same section we show how the model can readily learn multi - step tasks , adapt to new multi - step tasks , and store old ones for later use , exactly as for the simple single step problems .finally section vi contains a few succinct remarks about the most relevant points of this work .the simple programs that we have constructed can be down - loaded from our web - sites . for an in - depth discussion of the biological justification , we refer the readers to a recent article .schematically , we envision intelligent brain function as follows : the brain is essentially a network of neurons connected with synapses . some of these neurons are connected to inputs from which they receive information from the outside world .the input neurons are connected with other neurons .if those neurons receive a sufficiently strong signal , they fire , thereby affecting more neurons , and so on .eventually , an output signal acting on the outside world is generated .all the neurons that fired in the entire process are `` tagged '' with some chemical for later identification .the action on the outside is deemed either good ( satisfactory ) or bad ( not satisfactory ) by the organism .if the output signal is satisfactory , no further action takes place .if , on the other hand , the signal is deemed unsatisfactory , a global feedback signal - a hormone , for instance - is fed to all neurons simultaneously .although the signal is broadcast democratically to all neurons , only the synapses that were previously tagged because they connected firing neurons react to the global signal. they will be suppressed , whether or not they were actually responsible for the bad result .later , this may lead to a different routing of the signals , so that a different output signal may be generated .the process is repeated until a satisfactory outcome is achieved , or , alternatively , until the negative feedback mechanism is turned off , i.e. , the system gives up . 
in any case , after a while the tagging disappears .the time - scale for tagging is not related to the time - scale of transmitting signals in the brain but must be related to a time scale of events in the real outside world , such as a realistic time interval between starting to look for food ( opening the refrigerator ) and actually finding food and eating it .it is measured in minutes and hours rather than in milliseconds .all of this allows the brain to discover useful responses to inputs , to modify swiftly the synaptic connection when the external situation changes , since the active synapses are usually only barely stronger than some of the inactive ones .it is important to invoke a mechanism for low activity in order to selectively punish the synapses that are responsible for bad results .however , in order for the system to be able to recall past successes , which may become relevant again at a later point , it is important to store some memory in the neurons . in accordance with our general philosophy , we do not envision any strengthening of successful synapses . in order to achieve this, we invoke the principle of selective punishment : _ neurons which have once been associated with successful outputs are punished much less than neurons that have never been involved in good decisions ._ this yields some robustness for successful patterns with respect to noise , and also helps constructing a tool - box of neuronal patterns stored immediately below the active level , i. e. their inputs are slightly insufficient to cause firing .this `` forgiveness '' also makes the system stable with respect to random noise - a good synapse that fires inadvertently because of occasional noise is not severely punished .also , the extra feature of forgiveness allows for simple and efficient learning of sequential patterns , i. e. patterns where several specific consecutive steps have to be taken in order for the output to become successful , and thus avoid punishment .the correct last steps of will not be forgotten when the system is in the process of learning early steps . in the beginning of the life of the brain ,all search must necessarily be arbitrary , and the selective , darwinian , non - instructional nature of the process is evident .later , however , a tool - box of useful connections has been build up , and most of the activity is associated with previously successful structures - the process appears to be more and more directional , since fewer and fewer mistakes are committed . roughly speaking , the sole function of the brain is to get rid of irritating negative feedback signals by suppressing firing neurons , in the hope that better results may be achieved that way .a state of inactivity , or nirvana , is the goal! a gloomy view of life , indeed !the process is darwinian , in the sense that unsuitable synapses are killed , or at least temporarily suppressed , until perhaps in a different situation they may play a more role .there is no direct `` lamarckian '' learning by instruction , but only learning by negative selection .it is important to distinguish sharply between features that must be hardwired , i. e. genetically generated by the darwinian evolutionary process , and features that have to be self - organized , i. e. 
, generated by the intrinsic dynamics of the model when subjected to external signals .biology has to provide a set of more or less randomly connected neurons , and a mechanism by which an output is deemed unsatisfactory , a `` darwinian good selector '' , transmitting a signal to all neurons ( or at least to all neurons in a sector of the brain ) .it is absurd to speak of meaningful brain processes if the purpose is not defined in advance .the brain can not learn to define what is good and what is bad . in our modelthis is given at the outset .biology also must provide the chemical or molecular mechanisms by which the individual neurons react to this signal . from there on , the brain is on its own !there is no room for further ad hoc tinkering by `` model builders '' .we are allowed to play god , not man ! of course , this is not necessarily a correct , and certainly not a complete , description of the process of self - organized intelligent behaviour in the brain .however , we are able to construct a specific model that works exactly as described above , so the scenario is at least feasible .superficially , one would expect that the severe limitations impose by the requirements of self - organization will put us in a straight - jacket and make the performance poor .surprisingly , it turns out that the resulting process is actually very efficient compared with non - self - organized processes such as back - propagation - in addition to the fact that it executes a dynamical adaptation and memory process not performed by those networks at all .the amount of activity has to be sparse in order to solve the `` credit ( or rather blame ) assignment '' problem of identifying the neurons that were responsible for the poor result .if the activity is high , say 50% of all neurons are firing , then a significant fraction of synapses are punished at each time step , precluding any meaningful amount of organization and memory .one could accomplish this by having a variable threshold , as in the work by alstrom and stassinopoulos , and by stassinopoulos and bak . here , we use instead `` extremal dynamics '' , as was introduced by bak and sneppen ( bs) in a simple model of evolution , where it resulted in a highly adaptive self - organized critical state ._ at each point in time , only a single neuron , namely the neuron with the largest input , fires ._ the basic idea is that at a critical state the susceptibility is maximized , which translates into high adaptability . in our model ,the specific state of the brain depends on the task to be learned , so perhaps it does not generally evolve to a strict critical state with power law avalanches etc . as in the bs model .nevertheless , it always operate at a very sensitive state which adapts rapidly to changes in the demands imposed by the environment .this `` winner take all '' dynamics has support in well documented facts in neurophysiology .the mechanism of lateral inhibition could be the biological mechanism implementing extremal dynamics .the neuron with the highest input firing rate will first reach its threshold firing potential sending an inhibitory signal to the surrounding , competing neurons , for instance in the same layer , preventing them from firing . 
at the same time it sends an excitatory signal to other neurons downstream .in any case , there is no need to invoke a global search procedure , not allowed by the ground rules of self - organization , in order to implement the extremal dynamics .the extremal dynamics , in conjunction with the negative feedback , allows for efficient credit assignment . one way of visualizing the process is as follows .imagine a pile of sand ( or a river network , if you wish ) .sand is added at designated input sites , for instance at the top row .tilt the pile until one grain of sand ( extremal dynamics ) is toppling , thereby affecting one or more neighbors .we then tilt the pile again until another site topples , and so on .eventually , a grain is falling off the bottom row .if this is the site that was deemed the correct site for the given input , there are no modifications to the pile . however ,if the output is incorrect , then a lot of sand is added along the path of falling grains , thereby tending to prevent repeat of the disastrous result . eventually the correct output might be reached .if the external conditions change , so that another output is correct , the sand , of course , will trickle down as before , but the old output is now deemed inappropriate . since the path had just been successful , only a tiny amount of sand is added along the trail , preserving the path for possible later use . as the process continues , a complex landscape representing the past experiences , and thus representing the memory of the system , will be carved out .in the simplest layered version , treated in details in ref. , the setup is as follows ( fig .[ fig : one ] ) .there is a number of input cells , an intermediate layer of `` hidden '' neurons , and a layer of output neurons .each of the input neurons , is connected with each neuron in the middle layer , , with synaptic strength .each hidden neuron , in turn , is connected with each output neuron , with synaptic strength . initially , all the connection strengths are chosen to be random , say with uniform distribution between 0 and 1 .each input signal consists ( for the time being ) of a single input neuron firing . for each input signal, a single output neuron represents the pre - assigned correct output signal , representing the state of the external world .the network must learn to connect each input with the proper output for any arbitrary set of assignments , called a map .the map could for instance assign each input neuron to the output neuron with the same label .( in a realistic situation , the brain could receive a signal that there is some itching at some part of the body , and an output causing the fingers to scratch at the proper place must be generated for the signal to stop ) . 
at each time step ,we invoke `` extremal dynamics '' equivalent with a `` winner take all '' strategy : only the neuron connected with the largest synaptic strength to the currently firing neuron will fire at the next time step .the entire dynamical process goes as follows : + i ) an input neuron is chosen to be active .+ ii ) the neuron in the middle layer which is connected with the input neuron with the largest is firing .+ iii ) next , the output neuron with the maximum is firing .+ iv ) if the output happens to be the desired one , * nothing * is done , + v)otherwise , that is if the output is not correct , and are both depressed by an amount , which could either be a fixed amount , say 1 , or a random number between 0 and 1 .+ vi ) go to i ) .another random input neuron is chosen and the process is repeated . + that is all !the constant is the only parameter of the model , but since only relative values of synaptic strengths are important , it plays no role .if one finds it un - aesthetic that the values of the connections are always decreasing and never increasing , one could raise the values of all connections such that the value of the largest output synaptic strength for each neuron is .this has no effect on the dynamics .we imagine that the synapses and connecting all firing neurons are `` tagged '' by the activity , identifying them for possible subsequent punishment . in real life ,the tagging must last long enough to ensure that the result of the process is available - the time - scale must match typical processes of the environment rather than firing rates in the brain . if a negative feedback is received all the synapses which were involved and therefore tagged are punished , whether or not they were responsible for the bad result .this is democratic but , of course , not fair .we can not imagine a biologically reasonable mechanism that permits identification of synapses for selective punishment ( which could of course be more efficient ) as is assumed in most neural network models .the use of extremal dynamics solves the crucial credit assignment problem , which has been a stumbling block in previous attempts to model self organized learning , in a simple and elegant way .eventually , the system learns to wire each input to its correct output counterpart .the time that it takes is roughly equal to the time that a random search for each input would take . of course, no general search process could in principle be faster in the absence of any pre - knowledge of the assignment of output neurons .it is important to have a large number of neurons in the middle layer in order to prevent the different paths to interfere , and thus destroy connections that have already been correctly learned .figure [ fig : morebetter ] shows the results from a simulated layered system with 7 input and 7 output nodes , and a variable number of intermediate nodes .the task was simply to connect each input with one output node ( it does not matter which one ) . 
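a minimal sketch of steps i ) - vi ) for the single - hidden - layer geometry is given below ; the layer sizes , the number of updates and the use of a random depression between 0 and 1 are illustrative choices within the rules stated above .

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 7, 100, 7
w_ih = rng.random((n_in, n_hid))          # input  -> hidden synaptic strengths
w_ho = rng.random((n_hid, n_out))         # hidden -> output synaptic strengths
target = rng.permutation(n_out)           # the map: correct output for each input

def respond(i):
    h = int(np.argmax(w_ih[i]))           # ii) hidden neuron with the strongest synapse fires
    o = int(np.argmax(w_ho[h]))           # iii) likewise for the output layer
    return h, o

for step in range(20_000):
    i = rng.integers(n_in)                # i)  a random input neuron is chosen
    h, o = respond(i)
    if o != target[i]:                    # iv)-v) punish only when the output is wrong
        w_ih[i, h] -= rng.random()        # depress the two tagged synapses by delta
        w_ho[h, o] -= rng.random()
    # vi) repeat with another input

print("map learned:", all(respond(i)[1] == target[i] for i in range(n_in)))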
in each step we check if the seven pre - established input - output pairs were learnt and compute over many realizations the average time to learn all input - output connections .the figure shows how the average learning time decreases with the number of hidden neurons .more is better !biologically , creating a large number of more or less identical neurons does not require more genetic information than creating a few , so it is cheap .on the other hand , the set - up will definitely lose in a storage - capacity beauty contest with orthodox neural networks - that is the price to pay for self - organization !we are not allowed to engineer non - interfering paths - the system has to find them by itself . at this point all that we have created is a biologically motivated robot that can perform a random search procedure that stops as soon as the correct result has been found .while this may not sound like much , we believe that it is a solid starting point for more advanced modelling .we now subject the model to a new input - output map .this reflects that the external realities of the organism have changed , so that what was good yesterday is not good any more .food is to be found elsewhere today , and the system has to adapt .some input - output connections may still be good , and the synapses connecting them are basically left alone .however , some outputs which were deemed correct yesterday are deemed wrong today , so the synapses that connected those will immediately be punished .a search process takes place as before in order to find the new correct connections .figure [ fig : wrong ] shows the time sequence of the number of `` wrong '' input - output connections , i. e. , which is a measure of the re - learning time , when the system is subjected to a sequence of different input - output assignments . for each re - mapping , each input neuron has a new random output neuron assigned to it . in general , the re - learning time is roughly proportional to the number of input - output assignments that have changed , in the limit of a very large number of intermediate neurons .if the number of intermediate neurons is small , the re - learning time will be longer because of `` path interference '' between the connections . in a real world , one could imagine that the relative amount of changes that would occur from day to day is small and decreasing , so that the re - learning time becomes progressively lower .suppose now that after a few new maps , we return to the original input - output assignment .since the original successful synapses have been weakened , a new pathway has to be found from scratch .there is no memory of connections that were good in the past .the network can learn and adapt , but it can not remember responses that were good in the past . in sections 4 and 6 we shall introduce a simple remedy for that fundamental problem , which does not violate our basic philosophy of having no positive feedback . the set - up discussed above can trivially be generalized to include more intermediate layers .the case of multi - layers of neurons that are not fully connected with the neurons in the next layer is depicted in figure 1b .each neuron in the layer connects forward to three others in the next layer .the network operates in a very similar way : a firing neuron in one layer causes firing of the neuron with the largest connection to that neuron in the subsequent layer , and so on , starting with the input neuron at the bottom .
only when the signal reaches the top output layer will all synapses in the firing chain be punished , by decreasing their strength by an amount as before , if need be .interestingly , the learning time _ does not _ increase as the number of layers increases .this is due to the `` extremal dynamics '' causing the speedy formation of robust `` wires '' .in contrast , the learning time for back - propagation networks grows exponentially with the number of layers -this is one reason that one rarely sees backprop networks with more than one hidden layer .in addition to layered networks , one can study the process in a random network , which may represent an actual biological system better .consider an architecture where each of neuron is arbitrarily connected to a number of other neurons with synaptic strengths .a number of neurons ( and ) are arbitrarily selected as input and output neurons , respectively .again , output neurons are arbitrarily assigned to each input neuron .initially , a single input neuron is firing . using extremal dynamics , the neuron that is connected with the input neuron with the largest strength is then firing , and so on .if after a maximum number of firing events the correct output neuron has not been reached , all the synapses in the string of firing neurons are punished as before .summarizing , the entire dynamical process is as follows : + i ) a single input neuron is chosen .+ ii ) this neuron is connected randomly with several others , and the one which is connected with the largest synaptic strength fires .the procedure is repeated a prescribed maximum number of times , thereby creating and labelling a chain of firing neurons .+ iii ) if , during that process , the correct output has not been reached , each synapse in the entire chain of firings is depressed an amount .+ iv ) if the correct output is achieved , there is no plastic modification of the neurons that fired .go to i ) + a system with , behaves like the layered structure presented above ( and is actually the one shown in the figure .this illustrates the striking development of an organized network structure even in the case where all initial connections are absolutely uncorrelated .the model creates wires connecting the correct outputs with the inputs , using the intermediate neurons as stepping stones .we observed that there was not much memory left the second time around , when an old assignment map was re - employed - the task had to be re - learned from scratch .this turns out to be much more than a nuisance , in particular when the task was complicated , like in the case of a random network with many intermediate neurons , where the search became slow .we would like there to be some memory left from previous successful experiences , so that the earlier efforts would not be completely wasted .there is an analogous situation in the immune system , where the lymphocytes can recognize an invader faster the second time around .the location and activation of memory in biological systems is an important , but largely unresolved problem .speaking about the immune system , it has in fact been suggested in a series of remarkable papers by polly matzinger that the immune system is only activated in the presence of `` danger '' .this is the equivalent of our learning by mistakes .in fact , matzinger realizes that the identification of danger has to be pre - programmed in the innate immune system , and must have evolved on a biological time scale- this is the equivalent of our `` darwinian good '' ( or 
rather `` bad '' , or `` danger '' selector or indicator that decides if the organism is in a satisfactory state .it turns out that one single modification to the rules describedabove allows for some fundamental improvements of the system s ability to recognize old patterns : + iii a ) when the output is wrong , a firing synapse that has at least once been successful is punished much less than a synapse that has never been successful . + for instance , the punishment of the `` good '' synapse could be of the order of , compared with a depression of order unity for a `` bad '' synapse .the neuron has earned some forgiveness due to its past good performance .biologically , we envision that a neuron that does not receive a global feedback signal after firing , relaxes its susceptibility to a subsequent negative feedback signal by some chemical mechanism .it is important to realize that the synapse `` knows '' that it has been successful by the fact that it was not punished , so no non - local information is invoked .note that we have not , and will not , include any positive hebbian enforcement in order to implement memory in the system - only reduced punishment .we have applied this update scheme to both the layered and the random version of the model . for the random model ,we choose 200 intermediate neurons , plus 5 designated input neurons and 5 output neurons .each neuron was connected randomly with 10 other neurons .first , we arbitrarily assigned a correct output to each input , and ran the algorithm above , until the map had been learned .after unsuccessful firings , punishment was applied ; an amount of 0.001 to previously successful neurons , and a random number between 0 and 1 for those that had never been successful .then we arbitrarily changed one input - output assignment , and repeated the learning scheme .a new random reassignment of a single input - output pair was introduced , and so on . in the beginning , the learning time is large , corresponding roughly to the time for a random search for each connection .new connections have to be discovered at each input - output assignment . however ,after several switches , the time for adaptation becomes much shorter , of the order of a few time steps .figure [ fig : learntimes ] shows the time for adaptation for hundreds of consecutive input - output reassignments .the process becomes extremely fast compared with the initial learning time .typically , the learning time is only 0 - 10 steps , compared with hundreds or thousands of steps in the initial learning phase .this is because any `` new '' input - output assignment is not really new , but has been imposed on the system before .the entire process , in one particular run with 1000 adaptations , involved a total of only 38 neurons out of 200 intermediate neurons to create all possible input - output connections , and thus all possible maps . in order to understand this , it is useful to introduce the concept of the active level " , which is simply the strength of the strongest synaptic output connection from the neuron which has just been selected by the extreme dynamics . for simplicity , and without changing the firing pattern whatsoever, we can normalize this strength to unity .the strengths of the other output synapses are thus below the active level . 
whenever a previously successful input - output connection is deemed unsuccessful , the synapses are punished slightly , according to rule iii a ) , only until the point where a _single _ synapse in the firing chain is suppressed slightly below the active level defined by the extremal dynamics , thus barely breaking the input - output connection .thus , connections that have been good in the past are located very close to the active level , and can readily be brought back to life again , by suppression of firing neurons at the active level if need be .figure [ fig : landscapea ] shows the synaptic landscape after several re - learning events for a small system with 3 inputs , 3 outputs , and 20 neurons , each connected with 5 other neurons .the arrow indicates a synapse at the active level , i. e. , a synapse that would lead to firing if its input neuron were firing .altogether , there are 7 active synapses for that particular simulation , representing the correct learning of the current map .note that there are many synaptic strengths just below the active level .the memory of past successes is located in those synapses ! the single synapse that broke the input output connection serves as a self - organized switch , redirecting the firing pattern from one neuron chain to another , and consequently from one output to another .the adaptation process takes place by employing these self - organized switches , playing the roles of `` hub neurons '' , assuring that the correct output is reached .thus , when an input - output connection turns unsuccessful , all the neurons in the firing chain are suppressed slightly , and it is likely that an old successful connection re - appear at the active level .perhaps that connection is also unsuccessful , and will be suppressed , and another previously successful connection may appear at the active level .the system sifts through old successful connections in order to look for new ones . every now and then, there is some path interference , and re - learning takes longer time , indicated by the rare glitches of long adaptation times in figure [ fig : learntimes ] .also , now and then previously unused synapses interfere , since the strength of the successful synapses slowly becomes lower and lower .thus even when successful patterns between all possible input - output pairs have been established , the process of adaptation now and then changes the paths of the connections . 
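the synaptic landscape and the self - organized switches can be inspected directly on the ` syn ` structure of the sketch above . the gap threshold used below to call a synapse a switch is an arbitrary choice for illustration .

```python
def synaptic_landscape(syn, tol=0.05):
    """for each neuron, the gap between the active level (strongest outgoing synapse)
    and the runner-up; neurons with a small gap act as the self-organized switches."""
    switches = []
    for neuron, outgoing in syn.items():
        ranked = sorted(outgoing.values(), reverse=True)
        gap = ranked[0] - ranked[1]
        if gap < tol:
            switches.append((neuron, gap))
    return switches
```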
perhaps this mimics the process of thinking : _ `` thinking '' is the process , where unsuccessful neuronal patterns are suppressed by some `` hormonal '' feedback mechanism , allowing old successful patterns to emerge .the brain sifts through old solutions until , perhaps , a good pattern emerges , and the process stops .if no successful pattern emerges , the brain panics : it has to search more or less randomly in order to establish a new , correct input - output connection ._ the input patterns do not change during the thinking process : one can think with closed eyes .figure [ fig : paths]a shows the entire part of the network which is involved with a single input neuron , allowing it to connect with all possible outputs .the full line indicates synapses at the active level , connecting the input with the correct output .the broken lines indicate synapses that connect the input with other outputs .they are located just below the active level .the neurons marked by an asterisk are switches , and are responsible for redirecting the flow .similarly , fig .[ fig : paths]b shows all the synapses connecting a single output with all possible inputs . the neurons with the asterisks are `` hub neurons '' , directing several inputs to a common output .once such neuron is firing , the output is recognized , correctly or incorrectly .a total of only 5 intermediate neurons are involved in connecting the output with all possible inputs .note that short - term and long - term memories are not located at , or ever relocated to , different locations .they are represented by synapses that are more or less suppressed relative to the currently active level selected by the process of extremal dynamics , and can be reactivated through self - organized switches as described above .the system exhibits aging at a large time scale : eventually all or most of the neurons will have been successful at one point or another , and the ability to selectively memorize good pattern disappears .the process is not stationary .if one does not like that , one can let the neurons die , and replaced by fresh neurons with random connections at a small rate .the death of neurons causes longer adaptation times now and then since new synaptic connections have to be located .it is also interesting to consider the effect of noise .suppose that a small noise , randomly distributed in the interval , is added to the signal sent to the neurons .this may cause an occasional wrong output signal , triggered by synapses with strengths that were close to that of the correct one , i. e. the one that would be at the active level in the absence of noise .however , those synapses will now be suppressed , since they lead to an incorrect result . 
after a while, there will be no incorrect synapses left , such that the addition of the noise can cause it to exceed the strength of the correct synapse , so no further modifications will take place , and the input - output connections will be perfect from then on .the system deals automatically with noise !_ figure [ fig : landscapeb ] shows all the input - output connections for one input neuron in a simulation with three input neurons , three output neurons , and a total of 50 neurons each connected with 5 neurons .the noise level is 0.02 , and the punishment of previously successful neurons is 0.002 .the numbers are the strengths of the synapses .note that the incorrect synapses connected with the switches are suppressed by a gap of at least 0.02 - the level of the noise - below the correct ones .note also that some of the incorrect synapses not connected with switches are much less suppressed .they are cut - off by switches elsewhere and need not be suppressed in order to have the signal directed to the correct output .the price to be paid in order to have perfect learning with noise is that adaptation to new patterns takes longer , because the active synapses have to be suppressed further to give way for new synapses .figure [ fig : noise ] shows the learning time for 700 successive re - mappings , as in fig . [ fig : learntimes ] , but with noise added .note that indeed adaptation is much slower .so far we have considered only simple input - output mappings where only a single input neuron was activated . however , it is quite straightforward to consider more complicated patterns where several input neurons are firing at the same time . in the case of the layered network ,we simply modify the rule ii ) above for the selection of the firing neuron in the second layer as follows :+ ii b ) the neuron in the middle layer for which the sum of the synaptic connections with the active input neurons is maximum is firing . + for the random network one would modify the rule for the firing of the first intermediate neuron similarly .since the hey - days of minsky and papert who demonstrated that only linearly separable functions can be represented by simple -one layer- perceptrons , the ability to perform the exclusive - or ( xor ) operation has been considered a litmus test for the performance of any neural network .how does our network measure up to the test ? following klemm et al . we choose to include three input neurons , two of them representing the input bits for which we wish to perform the xor operation , and a third input neuron which is always active .this bias assures that there is a non - zero input even when the response to the 00 bits is considered .the two possible outputs for the xor operation determines that the network have two output neurons .the inputs are represented by a string of binary units , . as explained in section 3 , neurons are connected by weights from each input ( ) to each hidden ( ) unit and from each hidden unit to each output( ) unit .the dynamics of the network is defined by the following steps .one stimulus is randomly selected out of the four possible ( i.e.,001,101,011,111 ) and applied to .each hidden node then receives a weighted input .the state is chosen according to the winner - take - all rule i.e. , the neuron with the largest fires ( i. e. 
, its state is set to one ) . since there is only one active intermediate neuron , the output neuron is chosen as before to be the one connected with that neuron by the largest strength . ( adaptation to changing tasks is not of interest here , so we choose to simulate the simplest algorithm in section 3 , without any selective punishment . ) as shown in fig . [ fig : xora ] , networks with the minimum number of intermediate neurons ( three for this problem ) are able to solve the task in as few as tens of presentations . of course , networks with larger middle layers learn significantly faster , up to an asymptotic limit which for this problem is reached for about 20 nodes . even in the presence of noise , the tolerant version of the model presented above , and in our previous paper , allows for perfect , but slightly slower learning . klemm et al introduced forgiveness in a slightly different , and much more elaborate way , by allowing the synapses a small number of mistakes before punishment . we do not see the advantage of this ad hoc scheme over our simpler original version , which also appears to be more feasible from a biological point of view . indeed , much harder problems of the same class as the xor can be learned by our network without any modification . xor belongs to the `` parity '' class of problems , where for a string of arbitrary length n there are 2^n realizations composed of all different combinations of 0 s and 1 s . in order to learn to solve the parity problem the system must be able to selectively respond to all the strings with an odd ( or even ) number of 1 s ( or zeros ) . the xor function is the simplest case , with n = 2 . we used the same network as for the xor problem , but now with n increasing up to string lengths of 6 . for all cases we chose a relatively large intermediate layer with 3000 neurons . figure [ fig : xorb ] shows the results of these simulations . in panel a the mean error ( calculated as in klemm et al . for consistency ) is the fraction of realizations which have not yet learned the complete task , as a function of time . for each n , a total of 1024 realizations was simulated , each one initiated from a different random configuration of weights . notice that the time axis ( for presentation purposes ) is in logarithmic scale . at least for the sizes explored here , the network solves larger problems following a rather benign power - law scaling relationship . panel b of fig . [ fig : xorb ] shows that the learning time scales with the problem size with an exponent . in conclusion , the nonlinearity does not appear to introduce additional fundamental problems into our scheme . the general focus of most neural network studies has been on the ability of the network to generalize , i.e. , to distinguish between classes of inputs requiring the same output . in general , the task of assigning an output to an input which has not been seen before is mathematically ill - defined , since in principle any arbitrary collection of inputs might be assigned to the same output . practically , one would like to map `` similar '' inputs to the same output ; again , `` similar '' is ill - defined . we believe that similarity is best defined in the context of ( biological ) utility : similar inputs are by definition inputs requiring the same reaction ( output ) in order to be successful ( this is circular , of course ) .
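before continuing with generalization , here is a minimal python sketch of the xor setup described above : three input units ( the third being the always - active bias ) , a winner - take - all middle layer chosen by the summed synaptic strengths of the active inputs ( rule ii b ) , and punishment of the firing chain on every mistake . the punishment amounts , the hidden - layer size and the stopping test are illustrative assumptions of the sketch .

```python
import random

def train_xor(n_hidden=20, max_presentations=100000, seed=1):
    rng = random.Random(seed)
    w_ih = [[rng.random() for _ in range(n_hidden)] for _ in range(3)]   # 3 inputs, incl. bias
    w_ho = [[rng.random() for _ in range(2)] for _ in range(n_hidden)]   # 2 output units

    def respond(a, b):
        x = [a, b, 1]                                                    # bias input always active
        h = max(range(n_hidden), key=lambda j: sum(x[i] * w_ih[i][j] for i in range(3)))
        o = max(range(2), key=lambda k: w_ho[h][k])                      # winner-take-all at both layers
        return x, h, o

    patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for t in range(1, max_presentations + 1):
        a, b = rng.choice(patterns)
        x, h, o = respond(a, b)
        if o != a ^ b:                                 # mistake: depress the whole firing chain
            for i in range(3):
                if x[i]:
                    w_ih[i][h] -= rng.random()
            w_ho[h][o] -= rng.random()
        if all(respond(p, q)[2] == p ^ q for p, q in patterns):
            return t                                   # presentations needed to learn xor
    return None

print(train_xor())
```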
for a frog , things that fly require it to stick its tongue out in the direction of the flying object , so all things that fly might be considered similar ; there is not much need for the frog to bother with things that do not fly . actually , a frog can only react to things that move , as demonstrated in the classical paper by lettvin , maturana , mcculloch and pitts almost half a century ago . roughly , the generalization problem can be reduced to the problem of identifying useful ( or dangerous ) features in the input that have consequences for the action that should be taken . so how does our network learn to identify useful features in the input ? suppose ( fig . [ fig : generalize ] ) that we present two different inputs to , for instance , the random network , one where input neurons and are firing , and another one where inputs and are firing . consider the two cases a ) where the output neuron for the two inputs should be the same , and b ) where the assigned outputs are different . in the case where the outputs should be different , say and , respectively , the algorithm solves the problem by connecting the input to and the input to the neuron through different intermediate neurons , while ignoring the input . the brain identifies features in the input that are _ different _ . the irrelevant feature is not even `` seen '' by our brain , since it has no internal representation in the form of firing intermediate neurons . in the case where the assigned outputs for the two inputs are the same , say , the problem is solved by connecting the common input neuron with the output neuron with a single string of synaptic connections . the network identifies a feature that is _ the same _ for the two inputs , while ignoring the irrelevant inputs and , which are simply not registering in the brain . in a simulation , it was imposed that when inputs or were active without being active , success was achieved only if the output was not : the frog should not try to eat non - flying objects . this mechanism can supposedly be generalized to more complicated situations : depending on the task at hand , the brain identifies useful features that allow it to distinguish , or not to distinguish ( generalize ) between inputs . suppose the system is subsequently presented with a pattern that , in addition to the input neurons above , includes more firing neurons . in case the additional neurons are irrelevant for the outcome , the system will take advantage of the connections that have already been created and ignore the additional inputs . if some of the new inputs are relevant , in the sense that a different output is required , further learning involving the new inputs will take place in order to allow the system to discriminate between outputs . we envision that this process of finer and finer discrimination between input classes allows for better and better generalization of inputs requiring identical outputs . the important observation to keep in mind is that the concept of generalization is intimately connected with the desired function , and can not be pre - designed . we feel that , for instance with respect to theories of vision , there is an undue emphasis on constructing general pattern detection devices that are not based on the nature of the specific problem at hand .
whether edges , angles , contrasts , or whatever are the important feature must be learned , not hardwired .in general , the brain has to perform several successive tasks in order to achieve a successful result .for instance , in a game of chess or backgammon , the reward ( or punishment ) only takes place after the completion of several steps .the system can not `` benefit '' from immediate punishment following trivial intermediate steps , no matter how much the bad decisions contributed to the final poor result .consider for simplicity a set - up where the system has to learn to present four successive outputs , 1 , 2 , 3 , and 4 , following a single firing input neuron , 1 .in general , the output decision at any intermediate step will affect the input at the next step .suppose , for instance that in order to get from one place to another in a city starting at point 1 , one first has to choose road 1 to get to point 2 , and then road 2 to go to point 3 , and so on .thus , the output represents the point reached by the action , which is then seen by the system and represents the following input .we represent this by feeding the output signal to the input at the next step .thus , if output number 5 fires at an intermediate step , input neuron will fire at the next step : this is the outer worlds reaction to our action .we will facilitate the learning process by presenting the system not only with the final problem , but also with the simpler intermediate problems : we randomly select an input neuron 1 to 4 . if the neuron 4 is selected , the output neuron 4 must respond . otherwise the firing neurons are punished. if the input neuron 3 is selected , the output neuron 3 must first fire .this creates an input signal at input neuron 4 .then the output neuron 4 must fire . for any other combination ,all the synapses participating in the two step operation are punished . in casethe input 2 is presented , output neuron 2 must first fire , then output neuron 3 , and finally output neuron 4 must fire , otherwise all synapses connecting firing neurons in the three step process are punished . when the input 1 is presented , the four output neurons must fire in the correct sequence .of course , we never evaluate or punish intermediate successes ! for this to work properly , it is essential to employ the selective punishment scheme where neurons that have once participated in correct sequences are punished less than neurons that have never been successful , in order for the system to remember partially correct end games learned in the past .in one typical run , we choose a layered geometry with 10 inputs , 10 outputs , and 20 intermediate neurons .after 4 time steps , the last step was learned for the first time . after 35 time steps , the sequence was also learned , after 57 steps the sequence was learned , and finally , after 67 steps the entire sequence had been learned .these results are typical .the brain learned the steps backwards , which , after all , is the only logical way of doing it . in chess , one has to learn that capturing the king is essential before the intermediate steps acquire any meaning ! in order to imitate a changing environment ,we may reassign one or more of the outputs to fire in the sequence . 
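a compact python sketch of the sequence task just described follows , reusing the layered winner - take - all machinery ; the run reported above used 10 inputs , 10 outputs and 20 intermediate neurons , and the soft punishment of once - successful synapses is essential for remembering the partially correct end games . the sketch assumes one particular reading of the feedback loop , namely that after output j fires the environment activates input j + 1 ( the point reached by the action ) ; the punishment amounts and the stopping test are likewise illustrative assumptions .

```python
import random

def train_sequence(n_io=10, n_hidden=20, final=4, soft=0.001, presentations=200000, seed=3):
    """learn to emit outputs k, k+1, ..., final when started from input k,
    with each output fed back as the next input."""
    rng = random.Random(seed)
    w_ih = [[rng.random() for _ in range(n_hidden)] for _ in range(n_io)]
    w_ho = [[rng.random() for _ in range(n_io)] for _ in range(n_hidden)]
    good = set()                          # synapses that once took part in a correct sequence

    def respond(i):
        h = max(range(n_hidden), key=lambda j: w_ih[i][j])
        o = max(range(n_io), key=lambda k: w_ho[h][k])
        return h, o

    for step in range(1, presentations + 1):
        start = rng.randrange(1, final + 1)           # also present the easier end-games
        i, chain, ok = start, [], True
        for required in range(start, final + 1):
            h, o = respond(i)
            chain += [('ih', i, h), ('ho', h, o)]
            if o != required:
                ok = False
                break
            i = o + 1 if o < final else o             # the outer world's reaction to the action
        if ok:
            good.update(chain)                        # no reward, only reduced future punishment
            if start == 1:
                return step                           # the full sequence 1, 2, ..., final mastered
        else:
            for layer, a, b in chain:                 # punish the whole multi-step firing chain
                amount = soft if (layer, a, b) in good else rng.random()
                (w_ih if layer == 'ih' else w_ho)[a][b] -= amount
    return None

print(train_sequence())
```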
as in the previous problems, the system will keep the parts that were correct , and learn the new segments .older sequences can be swiftly recalled .finally we added uniform random noise of order to the outputs ; this extended the learning time in the run above to 193 time steps .the employment of the simple principles produces a self - organized , robust and simple , biologically plausible model of learning .it is , however , important to keep in mind in which contexts these ideas do apply and in which they do not .the model discussed is supposed to represent a mechanism for biological learning , that a hypothetical organism could use in order to solve some of the tasks that must be carried out in order to secure its survival . on the other handthe model is not supposed to solve optimally any problem - real brains are not very good at that either .it seems illogical to seek to model brain function by constructing contraptions that can perform tasks that real brains , such as ours , are quite poor at , such as solving the travelling salesman problem .the mechanism that we described is not intended to be optimal , just sufficient for survival .extremal dynamics in the activity followed eventually by depression of only the active synapses results in preserving good synapses for a given job .in contrast to other learning schemes , the efficiency also scales as one should expect from biology : bigger networks solve a given problem more efficiently than smaller networks . andall of this is obtained without having to specify the network s structure - the same principle works well in randomly connected , lattices or layered networks . in summary , the simple action of choosing the strongest and depressing the inadequate synapses leads to a permanent counter - balancing which can be analogous to a critical state in the sense that all states in the system are barely stable , or `` minimally '' stable using the jargon of ref .this peculiar meta - stability prevents the system from stagnating by locking into a single ( addictive ) configuration from which it can be difficult to escape when novel conditions arise .this feature provides for flexible learning and un - learning , without having to specifically include an ad - hoc forgetting mechanism - it is already embedded as an integrated dynamical property of the system . when combined with selective punishment , the system can build - up a history - dependent tool box of responses that can be employed again in the future .un - learning and flexible learning are ubiquitous features of animal learning as discussed recently by wise and murray .we are not aware of any other simple learning scheme mastering this crucial ability .work supported by the danish research council snf grant 9600585 .the santa fe institute receives funding from the john d. and catherine t. macarthur foundation , the nsf ( phy9021437 ) and the u.s .department of energy ( de - fg03 - 94er61951 ) . in a recent paper `` function and form in networks of interacting agents '' tanya araujo and r. vilela mendesanalyze the two schemes and explicitly points out the differences in terms of adaptability and robustness.(http://xyz.lanl.gov / abs / nlin.ao/0009018 ) c. a. barnes , a. baranyi , l. j. bindman , y. dudal , y. fregnac , m. ito , t. knopfel , s. g. lisberger , r. g. m. morris , m. moulins , j. a. movshon , w. singer , l. r. squirre .group report : relating activity - dependent modifications of neuronal function to changes in neural systems and behaviour . 
in _ cellular and molecular mechanisms underlying higher neural functions ._ 81 - 79 .a. i. selverston and p. ascher , ( eds ) ; ( john wiley and sons ltd , new york , 1994 ) p. bak , c. tang , and k. wiesenfeld , phys . rev . lett . * 59*,381 ( 1987 ) ; phys .a. * 38 * , 364 ( 1988 ) ; for a review see p. bak , _ how nature works : the science of self - organized criticality , _( copernicus , new york , 1996 ; oxford university press , oxford , 1997 ) . s. boettcher and a. g. percus , extremaloptimization : methods derived from co - evolution .in gecco-99 : proceedings of the genetic and evolutionary computation conference ( morgan kaufmann , san francisco , 1999 ) , 825 - 832 . see also math.oc/9904056 at http://xxx.lanl.gov/ ; nature s way of optimizing .cond - mat/9901351 at http://xxx.lanl.gov/.
|
we describe a mechanism for biological learning and adaptation based on two simple principles : ( i ) neuronal activity propagates only through the network s strongest synaptic connections ( extremal dynamics ) , and ( ii ) the strengths of active synapses are reduced if mistakes are made , otherwise no changes occur ( negative feedback ) . the balancing of those two tendencies typically shapes a synaptic landscape with configurations which are barely stable , and therefore highly flexible . this allows for swift adaptation to new situations . recollection of past successes is achieved by punishing synapses which have once participated in activity associated with successful outputs much less than neurons that have never been successful . despite its simplicity , the model can readily learn to solve complicated nonlinear tasks , even in the presence of noise . in particular , the learning time for the benchmark parity problem scales algebraically with the problem size n , with an exponent .
|
bee ( ben - gurion equi - propagation encoder ) is a tool which is applied to encode finite domain constraint models to cnf . bee was first introduced in and is further described in . during the encoding process , bee performs optimizations based on equi - propagation and partial evaluation to improve the quality of the target cnf . bee is implemented in ( swi ) prolog and can be applied in conjunction with any sat solver . it can be downloaded from , where one can also find examples of its use . this version of bee is configured to apply the cryptominisat solver through a prolog interface . cryptominisat offers direct support for ` xor ` clauses , and can be configured to take advantage of this feature . a main design choice of bee is that integer variables are represented in the unary order - encoding ( see , e.g. ) , which has many nice properties when applied to small finite domains . in the _ order - encoding _ , an integer variable x in the domain [ 0 , d ] is represented by d bits [ x_1 , ... , x_d ] . each bit x_i is interpreted as implying that x >= i , so that the bits form a monotonic non - increasing boolean sequence . for example , the value 3 in the interval [ 0 , 5 ] is represented as [ 1,1,1,0,0 ] . it is well - known that the order - encoding facilitates the propagation of bounds . consider an integer variable x = [ x_1 , ... , x_d ] . to restrict x to take values in the range [ lo , hi ] , it suffices to assign x_lo = 1 and x_{hi+1} = 0 ; the monotonicity of the encoding then propagates to fix all bits below x_lo to 1 and all bits above x_{hi+1} to 0 . a further property of the order - encoding is its ability to specify that a variable can not take a specific value v in its domain by equating two bits : x_v = x_{v+1} . this indicates that the order - encoding is well - suited not only to propagate lower and upper bounds , but also to represent integer variables with an arbitrary finite set domain . for example , equating the corresponding pairs of bits for each excluded value yields a representation with repeated literals , signifying that x can not take any of the excluded values . the order - encoding has many additional nice features that can be exploited to simplify constraints and their encodings to cnf . to illustrate one , consider a comparison constraint between two integer values ` a ` and ` b ` in the range between 0 and 5 , both represented in the order - encoding . at the bit level , the constraint is satisfied precisely when a simple pointwise relation holds between the corresponding bits of the two vectors . ( table : the constraint language of bee ; constraints ( 21 ) , ( 22 ) , ( 25 ) and ( 26 ) concern lexicographic order . ) the compilation of a constraint model to a cnf using bee goes through three phases : * ( 1 ) * unary bit - blasting : integer variables ( and constants ) are represented as bit vectors in the order - encoding . * ( 2 ) * constraint simplification : three types of actions are applied : equi - propagation , partial evaluation , and decomposition of constraints . simplification is applied repeatedly until no rule is applicable . * ( 3 ) * cnf encoding : the best suited encoding technique is applied to the simplified constraints . bit - blasting is implemented through prolog unification . each declaration of the form new_int(x , 0 , d ) triggers a unification x = [ x_1 , ... , x_d ] , where the x_i are fresh boolean variables ; when an integer constant occurs in a declaration or constraint , then it is replaced by its fixed unary representation .
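the unary order - encoding and the two kinds of reasoning described above ( bound propagation and excluding a single value ) can be pictured with a few lines of python . this is only an illustration of the encoding , not bee 's prolog implementation , and the partial - assignment representation with ` None ` is an assumption of the sketch .

```python
def order_encode(value, d):
    """order-encoding of an integer in [0, d]: bit i (1-based) stands for value >= i."""
    return [1 if value >= i else 0 for i in range(1, d + 1)]

def restrict_to_range(bits, lo, hi):
    """restrict a partially known order-encoded variable to [lo, hi];
    bits holds 1, 0 or None (unknown), and monotonicity fixes the rest."""
    bits = list(bits)
    for i in range(lo):             # value >= lo : bits x_1 .. x_lo become 1
        bits[i] = 1
    for i in range(hi, len(bits)):  # value <= hi : bits x_{hi+1} .. x_d become 0
        bits[i] = 0
    return bits

def exclude_value(bits, v):
    """value != v amounts to equating x_v and x_{v+1} (for 1 <= v < d);
    here the equation is applied only when one of the two bits is already known."""
    if bits[v - 1] is not None:
        bits[v] = bits[v - 1]
    elif bits[v] is not None:
        bits[v - 1] = bits[v]
    return bits

print(order_encode(3, 5))                         # -> [1, 1, 1, 0, 0]
print(restrict_to_range([None] * 9, lo=3, hi=5))  # -> [1, 1, 1, None, None, 0, 0, 0, 0]
```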
in the general case ` as ` is split into two halves , then constraints are generated to sum these halves , and then an additional constraint is introduced to sum the two sums .cnf encoding is the last phase in the compilation of a constraint model .each of the remaining simplified ( bit - blasted ) constraints is encoded directly to a cnf .these encodings are standard and similar to those applied in various tools such as sugar .cardinality constraints take the form where the are boolean literals , is a constant , and the relation might be any of . there is a wide body of research on the encoding of cardinality to cnf .we focus on those using sorting networks .for example , the presentations in , , and describe the use of odd - even sorting networks to encode pseudo boolean and cardinality constraints to boolean formula .we observe that for applications of this type , it suffices to apply `` selection networks '' rather than sorting networks .selection networks apply to select the largest elements from inputs . in , knuth shows a simple construction of a selection network with size whereas , the corresponding sorting network is of size .totalizers are similar to sorting networks except that the merger for two sorted sequences involves a direct encoding with clauses instead of clauses .totalizers have been shown to give better encodings when cardinality constraints are not excessively large .enables the user to select encodings based on sorting networks , totalizers or a hybrid approach which is further detailed below .consider the constraint in a context where is a list of boolean literals and integer variable defined as . applies a divide and conquer strategy . if , the constraint is trivial and satisfied by unifying .if and } ] and the constraint is decomposed to . 
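the decomposition sketched here , and continued in the general case below , bottoms out in the merging of two unary counts . the following python sketch generates the clauses of the direct ( totalizer - style ) merger over dimacs - style literals ; this is one of the two encoding options mentioned in the text , the other being an odd - even sorting network . the helper names , the fresh - variable convention and the ` at most k ` wrapper are assumptions of the sketch rather than bee 's actual code .

```python
from itertools import count

def totalizer_merge(a, b, fresh):
    """clauses forcing r to be the unary sum of two unary (order-encoded) vectors a and b.
    a, b, r are lists of positive dimacs literals; fresh() allocates a new variable.
    direct 'totalizer' merger: o(|a|*|b|) clauses, no auxiliary comparators."""
    p, q = len(a), len(b)
    r = [fresh() for _ in range(p + q)]
    clauses = []
    for i in range(p + 1):
        for j in range(q + 1):
            if i + j >= 1:                       # a >= i and b >= j  implies  r >= i + j
                c = [r[i + j - 1]]
                if i > 0: c.append(-a[i - 1])
                if j > 0: c.append(-b[j - 1])
                clauses.append(c)
            if i + j < p + q:                    # a <= i and b <= j  implies  r <= i + j
                c = [-r[i + j]]
                if i < p: c.append(a[i])
                if j < q: c.append(b[j])
                clauses.append(c)
    return r, clauses

def unary_count(literals, fresh):
    """divide and conquer, as in the text: halve, count each half, merge the two counts."""
    if len(literals) == 1:
        return list(literals), []
    mid = len(literals) // 2
    left, cl = unary_count(literals[:mid], fresh)
    right, cr = unary_count(literals[mid:], fresh)
    merged, cm = totalizer_merge(left, right, fresh)
    return merged, cl + cr + cm

def at_most_k(literals, k, fresh):
    r, clauses = unary_count(literals, fresh)
    if k < len(r):
        clauses.append([-r[k]])                  # the unary count may not reach k + 1
    return clauses

fresh = count(11).__next__                       # literals 1..10 are the inputs in this example
cnf = at_most_k(list(range(1, 11)), 3, fresh)
```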
in the general case , where the list ` as ` contains more than two literals , the constraint is decomposed as follows , where ` as_1 ` and ` as_2 ` are a partitioning of ` as ` into two halves of ( roughly ) equal size : fresh sum constraints are introduced for ` as_1 ` and for ` as_2 ` , and an ` int_plus ` constraint combines their results . this decomposition process continues as long as there remain sums over long lists , and when it terminates the model contains only ` comparator ` and ` int_plus ` constraints . the interesting discussion is with regard to how these remaining constraints are encoded , where bee offers two options , and depending on this choice the original ` bool_array_sum_eq ` constraint then takes the form either of a sorting network or of a totalizer . so , consider a constraint stating that the sum of ` as ` = [ a_1 , ... , a_n ] , a list of boolean literals , is at most k . it is modeled by the conjunction new_int(y,0,n) , new_int(t_1,0,n_1) , bool_array_sum_eq(as_1,t_1) , new_int(t_2,0,n_2) , bool_array_sum_eq(as_2,t_2) , int_plus(t_1,t_2,y) , int_leq(y , k ) . to evaluate the impact of complete equi - propagation ( cep ) we consider a graph search problem : finding graphs with no cycles of size 4 or less . the model imposes : ( 1 ) constraints of the form a[i , j ] + a[j , k ] + a[k , l ] + a[l , i ] < 4 on the adjacency matrix , which exclude the short cycles ; ( 2 ) the rows of the adjacency matrix are constrained to be sorted lexicographically ( justified in ) ; and ( 3 ) lower and upper bounds are imposed on the degrees of the graph nodes as described in . table [ table : cep ] illustrates results , running with and without cep . here , we focus on finding a graph with the prescribed number of graph nodes with the known maximal number of edges ( all instances are satisfiable ) , and cep is applied to the set of clauses derived from the symmetry constraints ( 2 ) detailed above . the table indicates the number of nodes , and for each cep choice : the compilation time , the number of clauses and variables , and the subsequent sat solving time . the table indicates that cep increases the compilation time ( within reason ) , reduces the cnf size ( considerably ) , and ( for the most part ) improves sat solving time . experiments are performed on a single core of an intel(r ) core(tm ) i5 - 2400 3.10ghz cpu with 4 gb memory under linux ( ubuntu lucid , kernel 2.6.32 - 24-generic ) . ( table [ table : cep ] : search for graphs with no cycles of size 4 or less ; compilation and solve times in seconds . ) * squaring : * consider the special case of multiplication where a number is multiplied by itself , for which we introduce two additional optimizations . first , consider the variables introduced in equation [ eq : sq ] : since x_i * x_j = x_j * x_i , equi - propagation applies to remove the redundant variables . the result of this is illustrated in figure [ fig : square ] . in figure [ fig : square ] we reorder the bits in the columns , letting the bits drop down to the baseline . second , consider the `` columns '' in the resulting sum constraint : each variable of the form x_i * x_j with i different from j occurs twice in its column . so , both can be removed and one inserted back in the column to the left . this is illustrated in figure [ fig : square ] , where we highlight the move of the two instances . these optimizations reduce the size of the cnf and are applied both in the binary and in the unary encodings .
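the complete equi - propagation used in the experiments above can be prototyped in a few lines : take a small cnf fragment , enumerate its models , and read off the literals that are fixed and the pairs of literals that are equal ( or complementary ) in every model . the brute - force enumeration below is only meant to make the notion precise and does not reflect how bee actually performs the reasoning ; the dimacs - style clause format is an assumption of the sketch .

```python
from itertools import product

def complete_equi_propagation(clauses, variables):
    """brute-force cep on a small cnf fragment: literals fixed in every model,
    and variable pairs that agree (or disagree) in every model."""
    models = []
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            models.append(assign)
    if not models:
        return None                                   # the fragment is unsatisfiable
    fixed = {v: models[0][v] for v in variables
             if len({m[v] for m in models}) == 1}     # backbone literals
    equal, opposite = [], []
    for i, v in enumerate(variables):
        for u in variables[i + 1:]:
            if all(m[v] == m[u] for m in models):
                equal.append((v, u))                  # v <-> u : unify the two literals
            elif all(m[v] != m[u] for m in models):
                opposite.append((v, u))               # v <-> -u
    return fixed, equal, opposite

# tiny example: clauses for (x1 -> x2) and (x2 -> x1) force x1 = x2
print(complete_equi_propagation([[-1, 2], [-2, 1]], [1, 2]))
```

the detected equalities are then applied , by unification , to simplify the entire constraint model before cnf encoding .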
to evaluate the encoding of for the special case when , we consider the application of to model and solve the number partitioning problem , also known as ` csplib 049 ` .here , one should finding a partition of numbers into two sets and such that : and have the same cardinality , the sum of numbers in equals the sum of numbers in , and the sum of the squares of the numbers in equals the sum of the squares of the numbers in .figure [ fig : comparenumpartition ] depicts our results .we consider four settings .the first two are the binary and unary approaches described above where buckets of bits of the same binary weight are summed using full adders or sorting networks respectively . in the other two settings ,we apply complete equi - propagation per individual constraint ( on binary numbers ) , on top of the ad - hoc rules implemented in .figure [ fig : comparenumpartition ] illustrates the size of the encodings ( number of cnf variables ) for each of the four settings in terms of the instance size .the two top curves coincide and correspond to the unary encodings which create slightly larger cnfs .however note that the unary core of with its ad - hoc ( and more efficient ) implementation of equi - propagation , detects all of the available equi - propagation .there is no need to apply cep .the bottom two curves correspond to the binary encodings and illustrate that cep detects further optimizations beyond what is detected using .figure [ fig : comparenumpartition ] details the sat solving times .here we ignore the compilation times ( which are high when using cep ) and focus on the quality of the obtained cnf .the graph indicates a clear advantage to the unary approach ( where cep is not even required ) .the average solving time using the unary approach approach ( without cep ) is 270 ( sec ) vs 1503 ( sec ) using the binary approach ( with cep ) .this is in spite of the fact that unary approach involves larger cnf sizes .figures [ fig : comparenumpartition ] and further detail the effect of cep in the binary and unary encodings depicting the numbers of clauses and of variables reduced by cep in both techniques .the smaller this number , the more equi - propagation performed ad - hoc by . in both graphs the lower curve corresponds to the encodings based on the unary core indicating that this is the one of better quality .see footnote [ [ machine ] ] for details on machine .we have detailed two features of not described in previous publications .these concern the hybrid approach to encode cardinality constraints and the procedure for applying complete equi - propagation .we have also described our approach to enhance the unary kernel of for binary numbers .our approach is to rely as much as possible on the implementation of equi - propagation on unary numbers to build the task of equi - propagation for binary numbers .we have illustrated the power of this approach when encoding binary number multiplication .the extension of for binary numbers is ongoing and still requires a thorough experimentation to evaluate its design .r. asn , r. nieuwenhuis , a. oliveras , and e. rodrguez - carbonell .cardinality networks and their applications .in o. kullmann , editor , _ sat _ , volume 5584 of _ lecture notes in computer science _ , pages 167180 .springer , 2009 .k. e. batcher .sorting networks and their applications . 
in _afips spring joint computing conference _ , volume 32 of _ afips conference proceedings _ , pages 307314 , atlantic city , nj , usa , 1968 .thomson book company , washington d.c .m. codish , y. fekete , c. fuhs , and p. schneider - kamp .optimal base encodings for pseudo - boolean constraints . in p.a. abdulla and k. r. m. leino , editors , _ tacas _ , volume 6605 of _ lecture notes in computer science _ , pages 189204 .springer , 2011 . j. m. crawford and a. b. baker .experimental results on the application of satisfiability algorithms to scheduling problems . in b.hayes - roth and r. e. korf , editors , _ aaai _ , volume 2 , pages 10921097 , seattle , wa , usa , 1994 .aaai press / the mit press .n. en and a. biere .effective preprocessing in sat through variable and clause elimination . in f. bacchus and t. walsh , editors , _ sat _ , volume 3569 of _ lecture notes in computer science _ , pages 6175 .springer , 2005 .m. heule , m. jrvisalo , and a. biere .efficient cnf simplification based on binary implication graphs . in k.a. sakallah and l. simon , editors , _ sat _ , volume 6695 of _ lecture notes in computer science _ , pages 201215 .springer , 2011 .n. manthey .coprocessor 2.0 - a flexible cnf simplifier - ( tool presentation ) . in a.cimatti and r. sebastiani , editors , _ sat _ , volume 7317 of _ lecture notes in computer science _ , pages 436441 .springer , 2012 .j. marques - silva , m. janota , and i. lynce . on computing backbones of propositional theories .in h. coelho , r. studer , and m. wooldridge , editors , _ ecai _ , volume 215 of _ frontiers in artificial intelligence and applications _ , pages 1520 .ios press , 2010 .extended version : http://sat.inesc - id.pt/~mikolas / bb - aicom - preprint.pdf .n. nethercote , p. j. stuckey , r. becket , s. brand , g. j. duck , and g. tack .minizinc : towards a standard cp modeling language . in c.bessiere , editor , _ cp2007 _ ,volume 4741 of _ lecture notes in computer science _ , pages 529543 , providence , ri , usa , 2007 .springer - verlag .
|
is a compiler which facilitates solving finite domain constraints by encoding them to cnf and applying an underlying sat solver . in constraints are modeled as boolean functions which propagate information about equalities between boolean literals . this information is then applied to simplify the cnf encoding of the constraints . we term this process _ equi - propagation_. a key factor is that considering only a small fragment of a constraint model at one time enables to apply stronger , and even complete reasoning to detect equivalent literals in that fragment . once detected , equivalences propagate to simplify the entire constraint model and facilitate further reasoning on other fragments . is described in several recent papers : , and . in this paper , after a quick review of , we elaborate on two undocumented details of the implementation : the hybrid encoding of cardinality constraints and complete equi - propagation . we then describe on - going work aimed to extend to consider binary representation of numbers . [ firstpage ]
|
the smart grid is a power network composed of intelligent nodes that can operate , communicate , and interact autonomously to efficiently deliver electricity to their consumers .it features ubiquitous interconnections of power equipments to enable two - way flow of information and electricity so as to shape the demand in order to balance the supply and demand in real - time .such pervasive equipment interconnections necessitate a full - fledge communication infrastructure to leverage a fast , accurate , and reliable information flow in the smart grid . in this context , the research on different aspects of smart grid has gained significant attention in the past few years , e.g. , the literatures surveyed in . although a lot has been done from theoretical perspectives , it is until recently when the actual implementation of prototypes has been given deliberate consideration . currently , a considerable number of research groups are working towards establishing testbeds to validate designs and implemented protocols related to smart grid .these testbeds have various aims , scale , limitations and features ; these have been summarized and presented in table [ table:1 ] .most of these testbeds conduct experiments either in a lab environment ( e.g. , smartgridlab , vast , micro grid lab , cyber - physical ) , or in isolation , in a residential ( e.g.,powermatching city ) or a commercial space ( e.g. , smart microgrid ) . in the testbeds surveyed , only the jeju testbed is comprehensive enough to consider both the residential and commercial paradigm with user and grid interactions .moreover , in spite of considerable on - going studies towards smart grid prototypes , and the massive efforts by utilities and local authorities , deployment of smart grids has met with near customer rejection .therefore , there is much impetus on designing and implementing well accepted solutions to bolster consumer - grid interactions . in this respect, this paper presents the main accomplishments towards a user - centric smart grid testbed design at the singapore university of technology and design ( sutd ) .the highlight of the paper is mainly on the communication infrastructure that has been implemented in the testbed in order to provide ict services to support a greener smart grid .the testbed is simulated in an approximate real world scenario , where the student dormitory ( a 3 bedroom unit with 6 to 9 resident students ) is the approximated residential space , and the faculty offices and shared meeting rooms are proxy commercial office spaces .the study focuses on both residential and commercial consumers and their interaction with a central administrative body , e.g. 
, the grid or the intelligent energy system ( ies ) , where the ies is an entity that provides alternative energy management services to the consumers . the testbed at sutd consists of two networks : a home area network ( han ) and a neighborhood area network ( nan ) . a han is implemented within each residential unit or sutd office in order to collect different energy related data via a unified home gateway ( uhg ) . a nan , in contrast , is developed to connect each han to the data concentrator through various communication protocols . we provide a more detailed discussion of both the han and nan networks that are implemented at sutd in section [ sec : section-3 ] . due to the interactive nature of the system , an efficient and reliable communication infrastructure with low delay is very important . therefore , this paper mainly discusses the various communication aspects of the testbed at sutd , such as broadband power line communication ( bpl ) , tv white space ( tvws ) , han and nan . benefits that could be obtained and examined in this system include lowering of operational costs , increasing ancillary electricity options and better peak demand management . we would also design and examine suitable incentives , which would encourage consumers to adopt these demand response mechanisms for participation in the market , by exploiting the use of the testbed . ( table [ table:1 ] : summary of the aims , scale , limitations and features of existing smart grid testbeds . ) the smart grid is envisioned as a future power network with hundreds of millions of endpoints , where each endpoint may generate , sense , compute , communicate and actuate . deployment of such complex systems necessitates sophisticated validation prior to installation , e.g. , by establishing smart grid testbeds . however , the rapid fluctuations in power supply and demand , voltage and frequency , and the active interaction of consumers with the grid make the practical realization of testbeds extremely challenging . to this end , we classify the suitable technologies and features of such testbeds into three categories : 1 ) hardware based components , 2 ) software based elements , and 3 ) other features . we give a brief description of different technologies and elements within each of the defined categories and their impact on the design of a communication network as follows . in a smart grid , a large number of sensors , actuators , and communication devices are deployed at generators , transformers , storage devices , electric vehicles , smart appliances , along power lines and in the distributed energy resources . the optimal control and management of these nodes in real - time is essential for the successful deployment of smart grids , which gives rise to the need for fast and reliable two - way communication infrastructures . for example , reserve power is required in the smart grid in case of an unexpected outage of a scheduled energy resource , and the required response has to be deployed within a time that depends on the type of reserve : primary reserve must respond within seconds to about a minute , secondary reserve within less than 15 minutes , and tertiary reserve on a time scale of 15 minutes or more , depending on the required response times to monitor and control . the advanced metering infrastructure ( ami ) measures , collects and analyzes the energy usage , and communicates with metering devices either on request or on a schedule . there is an increasing concern regarding their costs , security and privacy effects , and the remote controllable `` kill switch '' included in them . a scalable , secure mac protocol therefore becomes critical for the ami system . in accordance with , we organize the smart grid network into a han and a nan . a han is the core component of establishing a residential energy management system as part of a demand response management ( drm ) scheme in a smart grid . establishing the han is required to maintain an internet - of - things protocol for different sensors / actuators that run on different physical layer communication protocols within a home . this consequently imposes the necessity of devising gateways / controllers / central communicators which are capable of handling these heterogeneous communication techniques . depending on the business model , there could be multiple nans . for example , the ami is set up and maintained by the grid , and is connected to one nan ; while a third party ies ( e.g. google nest ) could be on another nan that provides additional services such as home automation , energy management , or security services . the generation from renewable sources such as wind and solar is intermittent and unpredictable . hence , how to design the control architectures , algorithms , and market mechanisms to balance supply and demand , support voltage , and regulate frequency in real - time are the key challenges in the presence of these volatile green energy supplies . smart components such as dc / ac inverters , flexible ac transmission systems and smart switches will be increasingly deployed on various components such as the transmission systems , transformers and power generators in the smart grid . hence , solutions are needed that enable the design of power electronics and control appliances to maximize reliability and efficiency and to reduce cost for the power grid . an energy storage system stores electricity when the demand is low and provides the grid / ies with electricity when the demand is high . energy storage systems are becoming an essential part of smart grids due to the increasing integration of intermittent renewables and the development of micro grids . innovative and user - centric solutions are required to determine the type and size of energy storage according to the application , to determine the optimal storage location , and to reduce the cost of storage devices . evs can act as mobile storage devices and assist the grid to fulfil various energy management objectives through grid - to - vehicle ( g2v ) and vehicle - to - grid ( v2g ) settings . key challenges include quick response time , power sharing , ev charging protocol design , and exploiting the use of ev storage for power compensation and for voltage and frequency regulation services .
the two - way flow of information and electricity in smart grids establishes the foundation of drm ; which is in fact the change of electricity usage patterns by the end - users in response to the incentives or changes in price of electricity over time .drm can be accomplished in either a centralized or a distributed fashion .a distributed drm algorithm greatly reduces the amount of information that has to be transmitted when compared to a centralized drm algorithm .therefore , a smart grid testbed needs to have the capability to investigate both drm schemes , if possible , in order to examine their effectiveness on the target load management .dynamic pricing is critical for effective drm in smart grids as a driver for behavior change .the pricing scheme needs to be beneficial to the grid in terms of reduction in operational cost , energy peak shaving and valley filling .it also needs be economically attractive to consumers by reducing their electricity bills and should not cause significant inconvenience to them for changing their energy consumption behavior , i.e. , for scheduling or throttling .securing the pricing information is important to protect against malicious attacks , e.g. , an outdated pricing information is injected to destabilize the smart grid . in a time of crisis, the capacity for fault detection and self - healing with minimal restoration time and improved efficiency , i.e. , outage management is important .hence , with real - time insight , the smart grid testbed should possess automatic key processes to easily locate and route power around affected spots to reduce unnecessary truck rolls and save costs .cyber security is a pivotal challenge for smart grid .in fact , cyber - based threats to critical infrastructure are real and increasing in frequency .however , the testing of potential threats are challenging due to the general lack of defined methodologies and prescribed ways to quantify security combined with the constantly evolving threat landscape .moreover , as the participation of consumers in smart grid becomes more prevalent , privacy information , e.g. 
, their home occupancy , will be more vulnerable .hence , applied methods and rigorous privacy and security assessments are necessary for complex systems where heterogeneous components and their users regularly interact with each other and the ies .regulation power allows for the grid to manage second - to - second fluctuations from forecasted demand and reserve markets enable electricity providers to have instant - on solution to kick in when power delivery problems emerge .theoretically , smart grids are able to provide such services through drm .therefore , smart grid solutions need to enable the grid providers to combine the drm schemes with energy storage solutions to make the grid system more reliable .a smart grid system may consists of millions of active nodes , which need to be managed in real - time .accordingly , optimal economic mechanisms and business models are required for engendering desired global outcomes .however , it is extremely difficult to maintain such a large scale and complex cyber - physical system in real - time .besides , features like automatic fault detection , self - healing , autonomous distributed demand - side management and disaster management make the conservation more strenuous .hence , innovations are required to improve the scalability of such a huge system without affecting any of its features .for instance , instead of using different gateways for communicating with different equipment in a home ( where each devices may run on different communication protocols ) , devising a single uhg could make a han considerably scalable .a scalable mac protocol would be interesting to explore in dense residential areas , where over thousands of machine - type devices such as smart meters communicate at the same time .one of the key challenges for the success of smart grids is to be able to clearly elucidate the benefits for the consumers .one solution could be a clear user friendly interface ( e.g. , smart phone apps ) to demonstrate the ability to visualise electricity usage and real - time electricity prices , so as to ease energy savings for the consumers through smarter load shifting .it can be noticed that advanced ict technologies and communications networks play important roles in various applications , features , and even in the acceptance of a smart grid . to this end, we now discuss the main features of the testbed at sutd in the next section .the aims of the sutd smart grid testbed are to develop innovative technical approaches and incentives for smart energy management , through the integration of commercial space strategies , residential space strategies , and the design and development of supporting technologies .hence , testing the suitability of various communications technologies for smart grid so as to address some of the challenges mentioned in section [ sec : challenges ] is especially critical .the interaction between the consumers and the grid / ies has been prioritized in this work , which leads us to divide the whole smart grid testbed into two networks : 1 ) nan and 2 ) han .the ies set up the han , which is connected to the cloud directly though broadband internet , while the grid will set up the nan that connects the smart meters . however ,if the ies is provided by the grid , then the han and the uhg would certainly be connected under the same network .to keep the structure general , we consider them to be separated in the sutd testbed . 
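as a small illustration of the price - based demand response ideas discussed above , the snippet below shifts a single deferrable load to the cheapest hour the resident allows ; each household can run this locally against a broadcast price signal , which is the distributed flavour of drm . the price vector and the load parameters are invented for the example and are not testbed data .

```python
def schedule_deferrable_load(prices, demand_kwh, allowed_hours):
    """shift a deferrable load (e.g. a washing cycle) to the cheapest allowed hour."""
    hour = min(allowed_hours, key=lambda h: prices[h])
    return hour, demand_kwh * prices[hour]

# an invented day-ahead price signal in $/kwh, one value per hour of the day
prices = [0.12] * 7 + [0.20] * 4 + [0.15] * 6 + [0.25] * 4 + [0.12] * 3
hour, cost = schedule_deferrable_load(prices, demand_kwh=1.5, allowed_hours=range(18, 24))
print(f"run at hour {hour}, expected cost ${cost:.2f}")
```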
to this end , what follows are the brief descriptions of the nan and han , and the associated technologies used in the sutd testbed .the nan of the testbed is composed of a number of han units from either the sutd dormitories or the university main campus and the cloud . in our setup , we employ two technologies for nan : one is bpl and tvws for smart meters setup by the grid , and another is the traditional fiber / cable internet , where all the uhgs are connected via wi - fi access points , which could be setup by a 3rd party ies . in fig . [fig : nan ] , we show a schematic diagram of how the nan is set up within the sutd testbed .now , we provide a brief description of the used communications technologies , i.e. , bpl and tvws , in the following subsections . in the testbed , the communication between the smart meter outside each residential unit and the data concentrator at the top of the apartment building is based on bpl .this choice of communication between them is simple and obvious as no additional cabling is needed in connecting all the units within an apartment block .it provides superior performance across thick walls as compared to any other types of communications and simplified the entire network management of smart meters . at sutd, tvws is used for linking the data concentrator from every apartment block to the base - station before data is uploaded to the cloud . in table[ table : tvws ] we show the detail specification of tvws that we have used at sutd .currently , infocomm development authority ( ida ) is setting up trials and the standardization of tvws in singapore .please note that tvws is a freely available spectrum and thus provides a great opportunity for many new small and virtual operators to setup their networks for m2 m applications at low cost . in this testbed , building a nan on tvws spectrum also provides more operational flexibility that can easily be scaled up by adding additional base stations ( bss ) .therefore , for nan wireless access provides good solution and wide coverage without much installation cost .furthermore , tvws ( unlicensed band ) can greatly reduce the cost , as compared to licensed band and fixed access .our testbed based on bpl and tvws provides a dedicated and separated network from the internet cabling , which is also used for web surfing or video streaming .it provides an opportunity to test the network delay and reliability for smart grid applications , and can be compared against the han that will be discussed next . while all participating units from the dormitories and the campus form the nan together , each unit is equipped with its own han inside the house as shown in fig .[ fig : han ] .a uhg is the central point of attention of a han , through which the ies can monitor the energy usage pattern or control each of the equipments of the unit within the network . here ,on one hand , the gateway adopts two - way communication protocols such as zigbee , z - wave and bluetooth to connect with equipments and sensors in the house . on the other hand, it is connected to the ies through wi - fi for sending the monitored information to the ies .a novel aspect of the sutd testbed is the introduction of a uhg in each of the han unit of both residential and commercial space areas , built on a raspberry pi computer ( model - b rev 1 ) as shown in fig .[ fig : han ] .please note that the han network consists of different type of smart plug and sensors that may need to communicate with each other . 
in this context ,universal home gateway ( uhg ) is developed and used in the testbed to facilitate such heterogeneous communication between different sensors and smart plugs .the uhg is capable of handling a number of communication protocols including z - wave , zigbee and bluetooth for communicating with different smart devices such as smart plugs and various sensors .the uhg directly transmits the monitored data from a han unit to the ies through wi - fi .the challenge of such a uhg is that it needs to communicate with devices ( e.g. smart - plug / sensors ) of various protocols .the delay between the commands from the ies to the uhg and from the uhg to the end devices could also pose challenges depending on the applications .it however provides a scalable solution that enables thousands or even ten thousands of households in the system . in the sutd testbed , we have implemented restful http and xmpp protocol for the communication between the uhg and the cloud of the ies .hence , the uhg serves as a gateway for many non - internet protocol based sensors to connect to the internet .zigbee is a low - cost and low - power wireless mesh network specification , which is built on the top of the ieee 802.15.4 standard for low - rate wireless personal area networks ( wpans ) , and operates in the ism frequency band 2.4 ghz .zigbee is essentially a non - ip based communication protocol and thus suitable for communication in testbeds set up in a non - residential setting .this is due to the fact that in sutd campus ( which is also the same as many other commercial / industrial network ) , the wi - fi network has very strict security , where the password is changed on regular basis ( e.g. , in every three months at sutd campus ) .however , it is significantly difficult to change the password of every sensor in every three months time . therefore , non - ip based solution is required and we use zigbee in the mpn to serve that purpose . besides , for a multi - hop extension considering the office scenario , we require a protocol that can support multi - hop to extend the coverage to reduce the needs of a gateway ( within a house , however , single hope with uhg is usually good enough ) .furthermore , zigbee provides security ( e.g. the zigbee provides basic security where only nodes of the same network i d can join in the network ) and the simplicity of having two - way communication facilities . the multi - purpose node ( mpn ) designed in sutd , as shown in fig .[ fig : han ] , consists of a number of low power sensors and actuators .it is equipped with a motion detector , a temperature and humidity sensor , a noise sensor and a lux sensor .it communicates with the uhg through its xbee chip , which is a zigbee communication module , to send the monitored information .the mpn is capable of controlling devices through its actuators like an ir blaster ( to control the air conditioning system ( acs ) ) and potentiometer ( to control the led light power supply ) .the mpn is driven by an arduino fio micro controller , which is installed within the mpn . in every hostel unit in sutd , each han consists of 4 mpns , one for each bedroom , and one for the living room . 
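the uhg pushes the monitored data to the ies cloud over restful http ( xmpp being the other option mentioned above ) . the sketch below shows what such a gateway - side report could look like with the python ` requests ` library ; the endpoint url , the token and the json field names are invented for illustration and are not the testbed 's actual api .

```python
import time
import requests

IES_ENDPOINT = "https://ies.example.org/api/v1/readings"   # hypothetical cloud endpoint
API_TOKEN = "replace-with-device-token"                    # hypothetical credential

def push_reading(unit_id, sensor_id, kind, value):
    """post one sensor reading from the gateway to the ies cloud."""
    payload = {
        "unit": unit_id,           # which dormitory unit / office
        "sensor": sensor_id,       # e.g. an mpn or a smart-plug identifier
        "type": kind,              # "temperature", "power_w", "motion", ...
        "value": value,
        "timestamp": int(time.time()),
    }
    resp = requests.post(IES_ENDPOINT, json=payload,
                         headers={"Authorization": f"Bearer {API_TOKEN}"},
                         timeout=5)
    resp.raise_for_status()        # surface http errors, useful when measuring delay and reliability
    return resp.status_code

# example: report the living-room power draw measured by a smart plug
# push_reading("dorm-03-12", "plug-livingroom-tv", "power_w", 87.4)
```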
while in the office , there are 20 mpns ( one for each faculty office ) in one section of an office block connected through multiple zigbee relays to a single uhg .the uhg also fulfils an important role later on , when distributed drm is implemented , such that users occupancy and electric appliance usage info can be stored locally without sending to the ies to protect users privacy .z - wave is a wireless communication protocol around mhz that uses a low power technology for home automation .for instance , to monitor and remotely control the energy consumption of different equipments in the han , we have used a z - wave smart - plug for each of the devices in the unit . in the current set - up , electric appliances to be monitored are connected to power outlets through z - wave smart - plugs , whereby each smart - plug is connected to the uhg through a razberry module .the plug is able to monitor the amount of energy consumed by the connected device in real - time and instantaneously send that information to the uhg via z - wave communication .further , it also has a remote actuation capability e.g. , delaying an electric appliances due to peak in energy demand .note that the switching signals of smart - plugs are initiated through either the ies ( for centralized control ) or the uhg ( for distributed control ) .we use a bluetooth low energy ( ble ) sensor based on texas instrument s ( ti s ) cc2541 mcu to monitor the temperature , humidity and pressure level of its surroundings , and to communicate with the uhg via bluetooth . the ble sensor is also equipped with an accelerometer , a gyroscope and a magnetometer that provide the uhg with information like acceleration , orientation and magnetization of an object respectively . in the course of the experiments, we will distribute such ti sensors among the students so that they can design new internet - of - things product that can connect to our uhg and provide new green energy services in smart homes .we have developed a mobile application to install in students and faculties cellphones to engage them in the energy management experiments at sutd .for example , they can remotely switch on and off the smart plug or the acs , or monitor the energy usage collected by the testbed .dynamic pricing information can also be pushed to them over the application . during the experiments in the testbed , we face a number of challenges including communication delay , subject recruitment for case studies , modeling non - homogeneous scenarios and investigating the mismatch between the theoretical results and the outcomes from the experiments .first , communication delay is one of the challenges that we have encountered while running the experiments .we understand that there is a trade - off between the sampling rate and communication delay , and the delay can be reduced by increasing the rate of sampling .delays can also be caused by the time required for data savings and retrieval at the server .for example , if a server needs to retrieve data from large number of sensors at the same time , it may incur some delays . 
however , one potential way to reduce such delay is to efficiently design the database and use more customized script for increasing the processing speed .nevertheless , such delay is beyond the scope of this paper .now considering the response times of primary , secondary and tertiary reserves , our implemented centralized monitor and control scheme can comfortably use for secondary ( less than 15 minutes ) and tertiary reserve ( 15 minutes ) .however , it might not be suitable for primary reserve requiring very fast response time , e.g. , less than one minute .implementing distributed control instead of centralized and thus enable each node under observation to respond very fast , if necessary , could be a potential way to resolve this issue .second challenge is to recruit the subjects for experiments who will allow us to install sensors and communication modules in their rooms for monitoring and control purposes .although the experiments are completely institutional review board ( irb ) approved and are designed to conduct in an academic environment , it is hard to find participants for the experiments .this is due to the fact that the participants are concerned about their privacy and worry about the inconvenience that may create due to energy management .further , students lifestyle is significantly different from the lifestyle of typical residents . in this context , as a token of encouragement for participation , we have provided monetary incentives to each of the participants at the end of the study . after the rooms were chosen to set up the testbed , the third challenge was to interpret the readings from the sensors installed within the rooms .this is due to the fact that each user has different preferences and hence sets the sensors at different locations of the rooms .for instance , some may keep the sensors near the window whereby some prefer to keep them far from the windows .similarly , some prefer to always keep their window curtains open whereas some prefer them to remain close most of the time . as a consequence ,the readings on temperature , light , noise , and motion from the sensors have the possibility to be very different although they may represent the same environment .hence , there is a need to consider such behavioral differences in our experiments , and we have tried to differentiate the sensors data based on the context . for example , we perform some sort of learning before interpreting the readings from sensors . finally , we find it difficult to match the theoretical load consumption model with the model that we derive from the experiments .for example , let us consider the case of air conditioning systems ( acss ) .although there are many studies on the energy consumptions of acs in the literature , it is extremely difficult to find a study that has used a setup as same as the one we use , e.g. , in terms of acs type , room size and weather .therefore , algorithms that are designed based on the theoretical model may not behave as expected in practical scenarios . as a result ,additional steps are needed to build a model based on practical system setup , and building such model could be time consuming and very customized to a practical setting . 
after discussing various challenges ,now we will discuss the experiments that are currently ( or will be ) implemented at sutd in the next section to give a glimpse of the competence of this extensive testbed setup .most of the experiments using the testbed are deliberately related to electricity management either through direct control of electricity from the ies or by designing incentives that will encourage behavioral modification towards efficient energy use .our experiments at the sutd testbed are conducted 1 ) within the student hostels , and 2 ) at the sutd campus . based on their electricity usage pattern ,the sutd campus is considered as the commercial space in our experiments , whereby student hostels are thought of as residences .the energy consumed by residences differ based on the set of home appliances , standard of living , climate , social awareness and residence type .we use our testbed to develop solutions for residential energy management to attain different objectives . in the sutd testbed , we are interested in determining the potential of residential smart grid participation in ancillary markets .real - time ancillary markets allow participants to supply both regulation and reserve power .reserve power is generation capacity that is required during an unexpected outage of a scheduled generating plant and we are concerned with whether the residential users can provide the grid with either primary or secondary reserve power . in most systems ,discretionary loads can voluntarily participate in these markets as interruptible loads , willing to load - shed " in an exceptional event ; however , current market rules only allow participation of big loads . in our system ,the ies can aggregate smaller distributed loads and participate in these markets as a unified entity .although theoretically possible , the practical implementation with consideration of practical limitations have not been fully examined in full scale testbeds , especially the communication constraints . with the sutd testbed, we will verify its feasibility , and the requirements on the communication network in supporting such an application . using the implemented testbed, we also focus on designing dynamic pricing schemes for residential users .different design objectives such as penetration rates , flexibility thresholds and flexibility constraints will be considered . as an initial step towards this development ,a mobile application is being developed for the residential users at sutd that can periodically notify them about their real - time energy consumption amount and the associated price per unit of energy .we will use such dynamic pricing information to achieve certain drm goals and to promote energy saving awareness .the smart meter is installed in parallel with original meter , such that the smart meter can be used to simulate the electricity price based on dynamic pricing , while the original meter by the grid can continue provide meter readings for actual electricity billing .real - time drm can be affected significantly by the quality of the communication network of the smart grid . 
in this context, we are conducting experiments to observe the effect of the status of communication network on drm .for instance , if the communication network is congested , it would result in packet losses or delay .we are interested in investigating how this congestion of the network affects the drm in terms of delay and consequently the cost , reduction of peak load and energy savings .the installed smart - plugs and mpns in each han unit at the sutd student hostel enables real - time monitoring of energy consumption of different appliances as shown in fig [ fig : application ] .this provides insights on the students personal preferences and allows to feed the energy consumption data back to the consumers in real - time , which along with the incentive based drm policies would enable the users to modify their usage pattern towards more efficient energy management .it is important to note that our testbed is to monitor the energy usage . however, each appliance has a unique characteristic profile as shown in fig .[ fig : application ] , which can be used to identify what appliance that is , and in return , speculate the activity and number of users .this prompts the need for secure communication .hostel units in order to reduce peak load . ]now to demonstrate how the proposed testbed can be used to perform energy management under practical communication impairment , we run experiment for peak load shaving in our testbed . to do so , we have considered home units where each unit is equipped with two flexible loads ( we use lights as flexible loads that provide on / off control ) .we model the daily base load according to the reports of national electricity market of singapore ( nems ) and scale according to the energy rating of light bulbs such that we can obtain a significant percentage of flexible load .we assume a threshold for total maximum allowable load consumption by all the units and design an algorithm to control the flexible load of each home units in order to keep the energy consumption always below the threshold .the algorithm is based on a centralized control scheme in which the energy management service provider conduct demand management by switching off several appliances of the users to maintain the total power demands below the given threshold .as designed , the energy management service provider considered not only the engagement of users in the demand response but also the inconvenience of users when the demand response protocol is conducted .we find that the demand response can be affected by communication delay .the detail of the algorithm and assumption of the experiment can be found in .the results from the experiment are demonstrated in fig .[ fig : peak_load_shave ] . in fig .[ fig : peak_load_shave ] , the green zone and blue zone of the figure denote the load profile with and without demand management respectively whereby grey zone indicate the base load and red line show the maximum allowable peak demand threshold ( which is kw in the considered case ) .now , as demonstrated in fig .[ fig : peak_load_shave ] , when the total demand is under peak limit , there is no need to do any controlling and hence no difference is observed between green zone and the blue line .however , once the total demand exceed the peak limit , the algorithm is executed and thus controls the flexible loads in each home unit and turns some of them off to reduce demand load promptly as indicated by the blue line . 
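as a rough sketch of the control logic just described , the function below sheds flexible loads greedily , least inconvenient first , whenever the aggregate demand exceeds the threshold . the load list , power ratings , inconvenience weights and threshold value are illustrative assumptions ; the algorithm actually used in the experiment also accounts for user engagement and communication delay , as noted above .

```python
def shed_flexible_loads(base_load_kw, flexible_loads, threshold_kw):
    """Greedy peak shaving: switch off flexible loads, least inconvenient first,
    until total demand drops below the threshold.

    flexible_loads: list of dicts with keys 'name', 'power_kw', 'inconvenience', 'on'.
    Returns the names of the loads that were switched off.
    """
    total = base_load_kw + sum(l["power_kw"] for l in flexible_loads if l["on"])
    if total <= threshold_kw:
        return []                        # under the peak limit: no control needed
    switched_off = []
    # prefer loads whose interruption bothers the user least
    for load in sorted(flexible_loads, key=lambda l: l["inconvenience"]):
        if total <= threshold_kw:
            break
        if load["on"]:
            load["on"] = False
            total -= load["power_kw"]
            switched_off.append(load["name"])
    return switched_off

# illustrative example with two units, two controllable lights each
loads = [
    {"name": "unit1-light1", "power_kw": 0.06, "inconvenience": 0.2, "on": True},
    {"name": "unit1-light2", "power_kw": 0.06, "inconvenience": 0.8, "on": True},
    {"name": "unit2-light1", "power_kw": 0.06, "inconvenience": 0.1, "on": True},
    {"name": "unit2-light2", "power_kw": 0.06, "inconvenience": 0.9, "on": True},
]
print(shed_flexible_loads(base_load_kw=1.95, flexible_loads=loads, threshold_kw=2.1))
```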
it is important to note that the algorithm continuously run in the backend and monitors for situations when the total demand may exceed the threshold .once such situation arises , the algorithm switches off some of the appliances to keep the demand within the threshold .however , the algorithm , as it is designed , considers not only the demand control but also the associated inconvenience that the users may experience for such control. therefore , control of appliances is always kept at as low as possible in maintaining the demand .furthermore , while controlling some appliances for handling excess demand , there could be new loads switched on by the users that can also contribute to the overall demand . as a consequence ,there is no sudden decrease is observed in the blue line as can be seen from fig .4 . thus , the result in fig .[ fig : peak_load_shave ] clearly shows that our developed testbed is effective to preform energy management applications .[ table : office ] we are also interested in studying energy management in commercial buildings with a view to reduce energy costs for all stakeholders . to this end , we are currently conducting the following experiments at the sutd campus .one of our ongoing experiments is to implement drm schemes for offices that considers the real - time control of acss .the objectives are mainly two - fold : 1 ) real - time thermostat control to manage peak demand , and 2 ) optimal management of acs demand under dynamic pricing for energy cost management . to support this ,we are developing energy consumption models and real - time control algorithms for acss .the testbed allows us to monitor and collect the data on consumed power by the acss under different permutations of indoor and outdoor conditions .these are pivotal for algorithm development and acs energy consumption modeling . in offices , usually there exists a large number of meeting rooms that are used only occasionally .hence , the scheduling of their usage is a possible way to shape the energy consumption of commercial buildings .we formulate a comprehensive study , where for a given set of meeting room requests over a fixed time period and a set of meeting rooms in a dynamic pricing market , the meeting scheduler finds a feasible routine that minimizes the total energy consumption ( and/or cost ) .the sutd testbed is used for obtaining the real - data , and to test the scheduler in order to verify the effectiveness of the optimal meeting scheduling protocol .now , in order to save energy and associated costs of individual office rooms , we experiment to identify the potential wastage in an office environment . to do so , we have deployed a number of sensors , i.e. , the mpn of fig .[ fig : han ] in each of the offices .for instance , if the sensors do not detect any motion or noise in a room for a predefined period of time , we can assume that there is no one inside the room .a record of such observation is shown in table [ table : office ] , which shows the duration of turned - on lights and acss of eight different office rooms when there are no occupants .according to table [ table : office ] , the energy wastage in most of the considered rooms is significant in absence of the occupants , which is on average kwh and kwh per room from lights and acs respectively . now , if we assume that the average wastage is the same for all 200 office rooms of a typical medium size university campus , the total energy that wasted during the considered time duration of 54 days can be estimated as kwh . 
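since the measured per - room averages are not reproduced here , the short extrapolation below only illustrates the form of the estimate with placeholder numbers ; it is not the testbed measurement .

```python
# illustrative only: the per-room averages and tariff below are placeholders,
# not the values measured in the study
avg_light_waste_kwh = 10.0     # assumed wasted lighting energy per room over the period
avg_acs_waste_kwh = 40.0       # assumed wasted air-conditioning energy per room
n_rooms = 200                  # office rooms of a typical medium-size campus
tariff_sgd_per_kwh = 0.20      # assumed electricity rate

total_waste_kwh = n_rooms * (avg_light_waste_kwh + avg_acs_waste_kwh)
total_cost_sgd = total_waste_kwh * tariff_sgd_per_kwh
print(f"estimated wastage over the period: {total_waste_kwh:.0f} kWh ~ {total_cost_sgd:.0f} SGD")
```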
now considering the electricity rate in singapore , which is calculated via home electricity audit form available online, this wastage can be translated into a total of around sgd from the considered 200 rooms during a period of days .therefore , if the proposed testbed can be used in offices for the considered setting , about sgd can be saved from wastage .the saving will be even more if the electricity price is dynamic , as the office hour is typically overlap with day time peak period .one can then perform on / off control to the acss ( such as in fig .[ fig : peak_load_shave ] ) to achieve energy saving or peak load shaving .in this paper , we have discussed some aspects on the development of the smart grid testbed at the singapore university of technology and design ( sutd ) .these testbeds within the university campus have been setup to approximate both residential and commercial spaces .we have discussed the general components , features and related challenges of such a testbed implementation with emphasis on the creation of the communication system architecture .ongoing experiments at the testbed include the harnessing of detailed data streams from the testbed for different customized energy management programs such as demand response .the flexibility and extensivity of the deployed testbed allows for the research team to explore and implement effective and practical communications technologies and smart grid applications , which could ultimately increase the acceptance and adoption rate of these systems .this work is supported in part by the singapore university of technology and design through the energy innovation research program singapore under grant nrf2012ewt - eirp002 - 045 , in part by the sutd - mit international design center , singapore under grant idg31500106 and in part by nsfc-61550110244 .s. maharjan , q. zhu , y. zhang , s. gjessing , and t. baar , `` dependable demand response management in the smart grid : a stackelberg game approach , '' _ ieee trans . smart grid _ ,vol . 4 , no . 1 , pp . 120132 ,mar 2013 .w. tushar , c. yuen , s. huang , d. smith , and h. v. poor , `` cost minimization of charging stations with photovoltaic : an approach with ev classification , '' _ ieee trans ._ , vol .17 , no . 1 ,156169 , jan . 2016 .a. naeem , a. shabbir , n. u. hassan , c. yuen , a. ahmed , and w. tushar , `` understanding customer behavior in multi - tier demand response management program , '' _ ieee access ( special issue on smart grids : a hub of interdisciplinary research ) _ , vol . 3 , pp . 26132625 , nov .i. atzeni , l. g. ordez , g. scutari , d. p. palomar , and j. r. fonollosa , `` demand - side management via distributed energy generation and storage optimization , '' _ ieee trans . smart grid _ ,vol . 4 , no . 2 , pp . 866876 , june 2013 .y. liu , c. yuen , s. huang , n. ul hassan , x. wang , and s. xie , `` peak - to - average ratio constrained demand - side management with consumer s preference in residential smart grid , '' _ ieee j. sel .topics signal process . _ ,vol . 8 , no . 6 , pp .10841097 , dec 2014 .n. u. hassan , m. a. pasha , c. yuen , s. huang , and x. wang , `` impact of scheduling flexibility on demand profile flatness and user inconvenience in residential smart grid system , '' _ energies _ , vol . 6 , no . 12 , pp .66086635 , dec 2012 . l. yu , t. jiang , and y. cao , `` energy cost minimization for distributed internet data centers in smart microgrids considering power outages , '' _ ieee trans .parallel distrib ._ , vol . 26 , no . 
1 ,120130 , jan 2015 .w. tushar , b. chai , c. yuen , d. b. smith , k. l. wood , z. yang , and h. v. poor , `` three - party energy management with distributed energy resources in smart grid , '' _ ieee trans ._ , vol .62 , no . 4 ,24872498 , apr .2015 .li , c. yuen , n. u. hassan , w. tushar , and c .- k .wen , `` demand response management for residential smart grid : from theory to practice , '' _ ieee access ( special issue on smart grids : a hub of interdisciplinary research ) _ , vol . 3 , pp .24312440 , nov .
|
successful deployment of smart grids necessitates experimental validation of their state - of - the - art designs in two - way communications , real - time demand response and monitoring of consumers energy usage behavior . the objective is to observe consumers energy usage patterns and exploit this information to assist the grid in designing incentives , energy management mechanisms , and real - time demand response protocols , so as to help the grid achieve lower costs and improve energy supply stability . further , by feeding the observed information back to the consumers instantaneously , it is also possible to promote energy efficient behavior among the users . to this end , this paper performs a literature survey on smart grid testbeds around the world , and presents the main accomplishments towards realizing a smart grid testbed at the singapore university of technology and design ( sutd ) . the testbed is able to monitor , analyze and evaluate smart grid communication network design and control mechanisms , and test the suitability of various communications networks for both residential and commercial buildings . the testbeds are deployed within the sutd student dormitories and the main university campus to monitor and record end - user energy consumption in real - time , which will enable us to design incentives , control algorithms and real - time demand response schemes . the testbed also provides an effective channel to evaluate the requirements on communication networks to support various smart grid applications . in addition , our initial results demonstrate that our testbed can provide an effective platform to identify energy wastage , and prompt the need for a secure communications channel , as the energy usage pattern can reveal privacy - related information on individual users .
|
the various entropy bounds that exist in the literature ( see for a review ) suggest that an underlying theory of quantum gravity should predict these bounds from a counting of microstates and should clarify which are the fundamental degrees of freedom one is actually counting .this verification of the thermodynamic laws is an important consistency check for any approach to quantum gravity .in what follows we review an earlier work by dou and sorkin defining a microscopic measure for black hole entropy together with our recent proposal for measuring the maximum entropy contained in a spherically symmetric spacelike region , within the causal set approach to quantum gravity .causal set theory is an approach to fundamentally discrete quantum gravity ( see for a recent review ) . besides taking fundamental discreteness as a first principle , the primacy of causal structure is the main observation underlying causal sets . mathematically a causal set is a locally finite partially ordered set , or in other words a set endowed with a binary relation ` precedes ' , which satisfies : _ transitivity _ : if and then , _ irreflexivity _ : , _ local finiteness _ : for any pair of elements and of , the set of elements lying between and is finite , .some useful definitions are the past of an element and its future .further , a relation is called a link iff .elements of the causal set whose future ( past ) is empty are called maximal ( minimal ) .the hypothesis of causal set theory is that spacetime at short scales such as the planck length is fundamentally discrete , and is better described by a causal set than a differentiable manifold .the notion of continuum lorentzian spacetime at larger scales is recovered as an approximation of the causal set .this occurs when the causal set can be faithfully embedded into , where faithfully means that the embedding respects not only the causal relations , but also a correspondence between cardinality and spacetime volume . .( b ) spherically symmetric spacelike region , its future domain of dependence and future cauchy horizon .,width=528 ]in an earlier work , dou and sorkin considered the four - dimensional schwarzschild black hole in its dimensionally reduced form , where is the schwarzschild radius of the black hole and and are the kruskal coordinates .assuming that this spacetime arises as an approximation to a causal set which can be faithfully embedded into it , they propose to count the number of causal links from causal set elements to elements ( see fig .the motivation for counting links comes from regarding the black hole entropy as arising from quantum entanglement across the horizon evaluated at a null hypersurface , and noting that the links are effectively irreducible elements of potential information flow in a causal set .the number of such links is given by , where denotes the volume of .( the dimensional reduction is necessary to make feasible the computation of such regions . ) to suppress certain unphysical nonlocal links one further has to impose that the elements are minimal in .evaluating the above integral at scales much larger than the discreteness scale then yields ( where the represent higher order terms in the ratio of the discreteness scale to the macroscopic scale ) . 
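in practice , both this construction and the spherical - bound proposal discussed next reduce to sprinkling a unit - density poisson process into a region of the continuum and counting a causal feature by brute force . as a concrete , deliberately small - scale illustration , the sketch below sprinkles the future domain of dependence of a ball in four - dimensional minkowski spacetime and counts its maximal elements ; the sprinkling density is arbitrary , and the o(n^2 ) causal check is adequate only for small sprinklings .

```python
import numpy as np

rng = np.random.default_rng(0)

def sprinkle_cone(R=1.0, density=500.0):
    """Poisson sprinkling into the cone {(t, x): 0 <= t, |x| <= R - t} in 4d Minkowski."""
    box_volume = R * (4.0 / 3.0) * np.pi * R**3          # enclosing box: [0, R] x ball(R)
    n_box = rng.poisson(density * box_volume)
    t = rng.uniform(0.0, R, n_box)
    # uniform points in the spatial ball of radius R
    x = rng.normal(size=(n_box, 3))
    x *= (R * rng.uniform(0, 1, n_box) ** (1 / 3) / np.linalg.norm(x, axis=1))[:, None]
    keep = np.linalg.norm(x, axis=1) <= R - t            # rejection step: keep points inside the cone
    return t[keep], x[keep]

def count_maximal(t, x):
    """Count sprinkled elements with no other element in their causal future (within the region)."""
    n = len(t)
    maximal = 0
    for i in range(n):
        dt = t - t[i]
        dr = np.linalg.norm(x - x[i], axis=1)
        in_future = (dt > 0) & (dr < dt)                 # element j lies in the causal future of i
        if not in_future.any():
            maximal += 1
    return maximal

t, x = sprinkle_cone()
print(len(t), "elements sprinkled,", count_maximal(t, x), "maximal elements")
```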
unfortunately ,when one considers the angular dimensions , it now seems clear that the expected number of links will diverge , essentially because the intersection of the future light cone of a candidate element with has an infinite extent .however , it seems likely that a minor variation , such as counting triples of elements rather than pairs , will lead to a convergent integral in the full four - dimensional case .we now discuss our recently proposed microscopic evidence for the spherical entropy bound arising from causal set theory .susskind s spherical entropy bound states that the entropy of the matter content of a spherically symmetric spacelike region ( of finite volume ) is bounded by a quarter of the area of the boundary of in planck units , , where is the planck length . in the case of black holesthe counting of links is computationally difficult in the full four - dimensional geometry , because of the complicated causal structure in the angular coordinates . forthe simpler case of the spherically symmetric region let us now propose the following measure of entropy .note that the entropy of the matter contained in must eventually `` flow out '' of the region by passing over the boundary of its future domain of dependence , the future cauchy horizon ( see fig .but because spacetime is fundamentally discrete , the amount of such entropy flux is bounded above by the number of discrete elements comprising this boundary .these elements can be seen as just the maximal elements of the causal set faithfully embedded into the future domain of dependence .this is similar to the case of the black hole , where the links started at the elements which were maximal in ( by definition of being linked to ) .hence we define the _ maximal entropy _ contained in as the number of maximal elements in , where is the volume of .the claim is that if the fundamental discreteness scale is fixed at a dimension - dependent value this proposal leads to susskind s spherical entropy bound in the continuum approximation , , where is the area of the boundary of . for the casewhere is a three dimensional - ball in four - dimensional minkowski spacetime , can be evaluated analytically yielding , at scales much larger than the discreteness scale , .this shows that indeed the result is proportional to the area of the boundary of .if we fix the fundamental discreteness scale to {6}\ , l_p$ ] , we arrive at the desired result .further , we could numerically show that one obtains the same result in the case of different spherically symmetric spacelike regions in four - dimensional minkowski spacetime as well as for different dimensions , where the value of the fundamental discreteness scale changed with the dimension .work in progress indicates that this result is also true in the case of conformally flat friedmann - robertson - walker spacetime .the authors acknowledge support by the european network on random geometry , enrage ( mrtn - ct-2004 - 005616 ) .further , we would like to thank f. dowker for enjoyable discussions , comments , and critical proof reading of the manuscript .

|
the finiteness of black hole entropy suggests that spacetime is fundamentally discrete , and hints at an underlying relationship between geometry and `` information '' . the foundation of this relationship is yet to be uncovered , but should manifest itself in a theory of quantum gravity . we review recent attempts to define a microscopic measure for black hole entropy and for the maximum entropy of spherically symmetric spacelike regions , within the causal set approach to quantum gravity . imperial - tp-06-sz-06 + _ d. rideout _ and _ s. zohren _ talk given by s. zohren at the eleventh marcel grossmann meeting on general relativity at the freie u. berlin , july 23 - 29 , 2006 .
|
dataflow matrix machines can be understood as recurrent neural networks ( rnns ) generalized as follows .neurons can have different types .neurons are not limited to receiving and emitting streams of real numbers , but can receive and emit streams of other vectors ( depending on the type of a neuron ) . among possible types of vectors are probability distributions and signed measures over a wide class of spaces ( including spaces of discrete objects ) , and samples from the underlying spaces can be passed over the links as the representations of the distributions and measures in question .thus it is possible to send streams of discrete objects overs the links of so generalized neural nets while retaining the capabilities to take meaningful linear combinations of those streams ( section [ sec : linear_streams ] ) .because built - in neurons can be pretty powerful , dataflow matrix machines of very small size can already exhibit complex dynamics .so meaningful dataflow matrix machines can be quite compact , which is typical also for probabilistic programs and less typical for conventional rnns . as a powerful new tool , dataflow matrix machines can be used in various ways . herewe would like to focus on the aspects related to program synthesis ( also known as program learning , symbolic regression , etc ) .one way which might improve the performance of program learning systems would be to find continuous models of computations allowing for continuous deformations of software .computational architectures which admit the notion of linear combination of execution runs are particularly attractive in this sense and allow to express regulation of gene expression in the context of genetic programming .it turns out that if one computes with linear streams , such as probabilistic sampling and generalized animations , one can parametrize large classes of programs by matrices of real numbers , obtaining _ dataflow matrix machines_. in particular , recurrent neural networks provide examples of such classes of programs , where linear streams are taken to be streams of real numbers with element - wise arithmetic , and there is usually a very limited number of types of non - linear transformations of those streams associated with neurons ( it is often the case that all neurons are of the same type ) . turing universality of some classes of recurrent neural networks and first schemas to compile conventional programming languages into recurrent neural networks became known at least 20 years ago ( see and references therein ) . nevertheless , recurrent neural networks are not used as a general purpose programming platform , or even as a platform to program recurrent neural networks themselves .the reason is that universality is not enough , one also needs practical convenience and power of available primitives .dataflow programming languages , including languages oriented towards work with streams of continuous data ( e.g. 
labview , pure data ) , found some degree of a more general programming use within their application domains .dataflow matrix machines have architecture which generalizes both recurrent neural networks and the core architecture of dataflow languages working with the streams of continuous data .dataflow matrix machines allow to include neurons encoding primitives which transform the networks ( as long as it makes sense for those primitives to be parts of linear combinations ) , and thus potentially allow to move towards using generalized rnns to program generalized rnns in a higher - order fashion .this gives hope that , on one hand , one would be able to use methods proven successful in learning the topology and the weights of recurrent neural networks to synthesize dataflow matrix machines , and that at the same time one would be able to use dataflow flow matrix machines as a software engineering platform for various purposes , including the design , transformation , and learning of recurrent neural networks and dataflow matrix machines themselves .two prominent and highly expressive classes of linear streams are probabilistic sampling and generalized animations . the linear combinations for each type of linear stream might be implemented on the level of vectors , which is what we do for the streams of real numbers in the recurrent neural networks , and for the streams of generalized images in generalized animations .however , the linear combination might be also implemented only on the level of streams , which allows to represent large or infinite vectors by their compact representatives .for example , probability distributions can be represented by samples from those distributions , and linear combinations with positive coefficients can be implemented as stochastic remixes of the respective streams of samples .this provides opportunities to consider the architectures which are hybrid between probabilistic programming and rnns .here we briefly describe the formal aspect of dataflow matrix machines .we follow , but transpose the matrix notation .we fix a particular _ signature _ by taking a finite number of neuron types , each with its own fixed finite non - negative arity ( zero arity corresponds to inputs ) and associated nonlinear transform .we take a countable number of copies of neurons for each neuron type from the signature .then we have a countable set of inputs of those operations , , and a countable set of their outputs , .associate with each a linear combination of all with real coefficients .we require that no more than finite number of elements of the matrix are nonzero .thus we have a countable - sized program , namely a countable dataflow graph , all but a finite part of which is suppressed by zero coefficients . any finite dataflow graph over a particular signaturecan be embedded into a universal countable dataflow graph over this signature in this fashion .hence we represent programs over a fixed signature as countable - sized real - valued matrices with no more than finite number of nonzero elements , and any program evolution would be a trajectory in this space of matrices .there is quite a bit of interest recently in using recurrent neural networks and related machines to learn algorithms ( see e.g. and references therein ) .however , a typical result of program learning is a program which is functional , but is almost impossible to read and comprehend . 
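returning to the formal construction above , a minimal toy implementation for the special case of streams of real numbers is sketched below : the state alternates between a linear stroke , in which the matrix mixes all outputs into neuron inputs , and a nonlinear stroke , in which each neuron applies its fixed transform . the two - entry signature and the particular weights are purely illustrative .

```python
import numpy as np

# illustrative signature: two neuron types acting on streams of real numbers
NEURON_TYPES = {
    "tanh": (1, lambda args: np.tanh(args[0])),    # arity 1
    "sum":  (2, lambda args: args[0] + args[1]),   # arity 2
}

class ToyDMM:
    """Toy dataflow matrix machine over streams of reals.

    neurons : list of type names from NEURON_TYPES
    W       : weight matrix, W[i, j] = coefficient from output j to input slot i.
    Only finitely many neurons / nonzero weights are instantiated, mirroring the
    countable matrix with finitely many nonzero entries described in the text.
    """
    def __init__(self, neurons, W):
        self.neurons = neurons
        self.W = np.asarray(W, dtype=float)
        self.n_inputs = sum(NEURON_TYPES[t][0] for t in neurons)
        assert self.W.shape == (self.n_inputs, len(neurons))
        self.outputs = np.zeros(len(neurons))

    def step(self):
        inputs = self.W @ self.outputs            # linear stroke: mix outputs into inputs
        new_out, k = [], 0
        for t in self.neurons:                    # nonlinear stroke: apply neuron transforms
            arity, f = NEURON_TYPES[t]
            new_out.append(f(inputs[k:k + arity]))
            k += arity
        self.outputs = np.array(new_out)
        return self.outputs

# two coupled tanh neurons; weights chosen only to produce a visible oscillation
net = ToyDMM(["tanh", "tanh"], W=[[0.0, 1.2], [-1.2, 0.0]])
net.outputs = np.array([0.5, 0.0])                # initial stream values
for _ in range(5):
    print(net.step())
```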
at the same time, there is a lot of interest and progress in learning readable structures in various areas , and in understanding the details of learned models .for example , in recent years people demonstrated progress in automatic generation of readable mathematical proofs , in automatically capturing syntactic patterns of software written by humans , and in understanding and visualization the details of functioning of learned neural models .bukatin , m. , matthews , s. : linear models of computation and program learning , in : gottlob , g. , , sutcliffe , g. , voronkov a. ( eds . ) , gcai 2015 , easychair proceedings in computing , vol .6678 , http://easychair.org/publications/download/linear_models_of_computation_and_program_learning
|
dataflow matrix machines are a powerful generalization of recurrent neural networks . they work with multiple types of arbitrary linear streams and multiple types of powerful neurons , and allow higher - order constructions to be incorporated . we expect them to be useful in machine learning and probabilistic programming , and in the synthesis of dynamic systems and of deterministic and probabilistic programs .
|
in scientific literature there exist many classical sets of functions which can decompose a signal in terms of `` simple '' functions .for example taylor or fourier expansions are used routinely in scientific and engineering applications.(and many other exist ) .however in all these expansions the underlying functions are not intrinsic to the signal itself and a precise approximation to the original signal might require a large number of terms .this problem become even more acute when the signal is non - stationary and the process it represents is nonlinear . to overcome this problem many researchers used in the past the `` principal component algorithm '' ( pca ) to come up with an `` adaptive '' set of functions which approximate a given signal . a new approach tothis problem emerged in the late 1990 s when a nasa team has developed the `` empirical mode decomposition '' algorithm(emd ) which attempt to decompose a signal in terms of it `` intrinsic mode functions''(imf ) through a `` sifting algorithm '' .a patent for this algorithm has been issued [ 1 ] .the emd algorithm is based on the following quote [ 2 ] : `` according to drazin the first step of data analysis is to examine the data by eye . from this examination, one can immediately identify the different scales directly in two ways : by the time lapse between successive alterations of local maxima and minima and by the time lapse between the successive zero crossings .... we have decided to to adopt the time lapse between successive extrema as the definition of the time scale for the intrinsic oscillatory mode '' a step by step description of the emd sifting algorithm is as follows : 1 . let be given a function which is sampled at discrete times .2 . let .3 . identify the max and min of .4 . create the cubic spline curve that connects the maxima points .do the same for the minima .this creates an envelope for .5 . at each time evaluate the mean of and ( is referred to as the sifting function ) .evaluate .if norm of for some predetermined set the first intrinsic function ( and stop ) . 8 . if the criteria of ( 7 ) are not satisfied set and return to ( 3 ) ( `` sifting process '' ) .the algorithm has been applied successfully in various physical applications .however as has been observed by flandrin [ 3 ] and others the emd algorithm fails in many cases where the data contains two or more frequencies which are close to each other . to overcome this difficulty we propose hereby a modification of the emd algorithm by replacing steps and in the description above by the following : \4 .find the midpoints between two consecutive maxima and minima and let be the values of at these points .create the spline curve that connects the points .the essence of this modification is the replacement of the mean which is evaluated by the emd algorithm as the average of the max - min envelopes by the spline curve of the mid - points between the maxima and minima .this is in line with the observation by drazin ( which was referred to above ) that the scales inherent to the data can be educed either from the max - min or its zero crossing . in the algorithmwe propose hereby we mimic the `` zero - crossings '' by the mid - points between the max - min .it is our objective in this paper to justify this modification of the emd algorithm through some examples and theoretical work .the plan of the paper is as follows : in sec . 
we provides examples of signals composed two or three close frequencies ( with and without noise ) where the classical emd algorithm fails but the modified one yields satisfactory results . in sec . we carry out analytical analysis of the two algorithms which are applied to the same signal . in sec . we discuss the convergence rate , resolution and related issues concerning the classical and new `` midpoint algorithm '' . sec . address the application of this algorithm to atmospheric data and in sec . we compare the emd and pca algorithmsextensive experimentations were made to test and verify the efficiency of the modified algorithm .we present here the results of one of these tests in which the signal contains three close frequencies .( in our tests we considered also the effects of noise and phase shifts among the different frequencies ) \ ] ] where to apply the emd algorithm to this signal , we used a discrete representation of it over the interval ] the extrema of the signal are given by and therefore it is easy to construct the spline approximation , to the maximum and minimum points and compute their average .similarly we can find the midpoints between the maxima and minima and evaluate the corresponding spline approximation to the signal at these points .after one iteration of the sifting process the `` sifted signal '' is given respectively by and the efficiency of the two algorithm can be deduced by projecting these new signals on the fourier components of the original signal . to this endwe compute and the amplitude of the fourier components of the two frequencies in the classical emd algorithm is similarly for the mid - point algorithm we the objective of the sifting process is to eliminate one of the fourier components in favor of the other . as a resultthe first imf will contains , upon convergence , only one of the fourier components in the original signal .therefore the efficiency of the two algorithm can be inferred by comparing versus and versus .computing the integrals that appear in eqs.([3.4])-([3.7 ] ) we obtain these results show that after one iteration the classical emd did not separate the two frequencies effectively . on the other handthe mid - point algorithm performed well .to compare the convergence rates of the classical versus the midpoint algorithm we considered three cases all of which were composed of two frequencies . in the first case the two frequencies were well separated . in the second casethe two frequencies were close while in the third case they were almost `` overlapping '' . in all casesthe signal was given by this signal was discretized on the interval $ ] with .for the first case the two frequencies were as can be expected both the classical and midpoint algorithm were able to discern the individual frequencies through the sifting algorithm .however it took the classical algorithm iterations to converge to the first imf . on the other hand the midpoint algorithm converged in only iterations ( using the same convergence criteria ) .we wish to point out also that the midpoint algorithm has a lower computational cost than the classical algorithm .it requires in each iteration the computation of only one spline interpolating polynomial . on the other handthe classical algorithm requires two such polynomials , one for the maximum points and one for the minimum points .for the second test the frequencies were that is the difference between the two frequencies is . 
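for reference , the modified sifting step described in the introduction can be written compactly as below : locate the extrema , spline - interpolate the midpoints between consecutive maxima and minima , and subtract that spline from the signal . endpoint handling and the stopping criterion are simplified , and the two demo frequencies are illustrative rather than those used in the tests above .

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def midpoint_sift(t, h):
    """One sifting iteration of the midpoint variant: subtract the cubic spline
    through the midpoints between consecutive extrema (instead of the mean of
    the max / min envelopes)."""
    imax = argrelextrema(h, np.greater)[0]
    imin = argrelextrema(h, np.less)[0]
    idx = np.sort(np.concatenate([imax, imin]))       # interleaved extrema
    if len(idx) < 4:
        return h, True                                # too few extrema: treat as residual
    tm = 0.5 * (t[idx[:-1]] + t[idx[1:]])             # midpoint times
    hm = 0.5 * (h[idx[:-1]] + h[idx[1:]])             # midpoint values
    trend = CubicSpline(tm, hm)(t)
    return h - trend, False

# demo on a two-tone signal; the frequencies are illustrative, not those of the tests above
t = np.linspace(0.0, 100.0, 10001)
x = np.sin(2 * np.pi * 0.20 * t) + np.sin(2 * np.pi * 0.26 * t)
h = x.copy()
for _ in range(20):                                   # fixed number of sifts, for brevity
    h, done = midpoint_sift(t, h)
    if done:
        break
imf1 = h                                              # candidate first intrinsic mode function
```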
in this casethe midpoint algorithm was able to separate the two frequencies .fig and fig compare the power spectrum of the original frequencies versus those of and which were obtained through this algorithm .convergence to was obtained in 18 iterations and was obtained by additional iterations .the classical emd algorithm did converge to in iterations but the power spectrum of this deviated significantly from the first frequency in the signal(see fig ) . failed ( completely ) to detected correctly the second frequency . in third casethe frequencies were in this case the classical algorithm was unable to separate the two frequencies i.e contained both frequencies ( see fig ) .the midpoint algorithm did somewhat better but the resolution was not complete ( see fig ) .moreover the sifting process in both cases led to the creation of `` ghost frequencies '' which were not present in the original signal . at this junctureone might wonder if a `` hybrid algorithm '' whereby the sifting function is the average ( or some similar combination ) of those obtained by the classical and midpoint algorithms might outperform the separate algorithms ( in spite of the obvious additional computational cost ) .however our experimentations with such algorithm did not yield the desired results ( i.e. the convergence rate and resolution did not improve ) .there have been recent interest in the observation and properties of gravity waves which are generated when wind is blowing over terrain . in partthis interest stems from the fact that these waves carry energy and accurate measure of this data is needed to improve the performance of numerical weather prediction models .as part of this scientific campaign the usaf flew several balloons that collected information about the pressure and temperature as a function of height .the temperature data collected by one of these balloons is presented in fig . [ 6 ] . to analyze this signal we detrended first it by subtracting its mean from the data .when the mid - point emd algorithm was applied to this detrended - signal the first imf extracted the experimental noise from while the second and third imfs educed clearly the gravity waves ( the second imf is depicted in fig . ) . on the other hand the classical emd algorithm failed to educe these waves from the detrended - signal . subtracting the gravity waves that were detected by the mid - point algorithm from the detrended - signal we obtain the `` turbulent residuals ''whose spectrum is shown in fig .the slope of this signal in the `` inertial frequency range '' is which corresponds well with the fact that the flow in stratosphere is `` quasi two - dimensional '' [ 7 - 9 ] .before the emergence of the emd algorithm an adaptive data analysis was provided by the `` principal component algorithm''(pca ) which is referred to also as the `` karahunan - loeve ( k - l ) decomposition algorithm '' .( for a review see [ 10 ] ) here we shall give only a brief overview of this algorithm within in the geophysical context .let a signal be represented by a a time series ( of length ) of some variable.we first determine a time delay for which the points in the series are decorrelated . 
using we create copies of the original series ( to create these one uses either periodicity or choose to consider shorter time - series ) .then one computes the auto - covariance matrix let be the eigenvalues of with their corresponding eigenvectors the original time series can be reconstructed then as where the essence of the pca is based on the recognition that if a large spectral gap exists after the first eigenvalues of then one can reconstruct the mean flow ( or the large component ( of the data by using only the first eigenfunctions in ( [ phi ] ) .a recent refinement of this procedure due to ghil et al ( [ 10 ] ) is that the data corresponding to eigenvalues between and up to the point where they start to form a `` continuum '' represent waves .the location of can be ascertained further by applying the tests devised by axford [ 11 ] and dewan [ 7 ] .thus the original data can be decomposed into mean flow , waves and residuals ( i.e. data corresponding to eigenvalues which we wish to interpret at least partly as turbulent residuals ) .the crucial step in this algorithm is the determination of the points and whose position has to ascertained by additional tests whose results might be equivocal .we applied this algorithm to the geophysical data described in sec . with and computed the resulting spectrum of the correlation matrix .this spectrum is depicted in fig . . based on this spectrumwe choose and we obtain the corresponding wave component of the signal that is shown in fig . .we conclude that while the pca algorithm provides an alternative to the emd algorithm the determination of the cutoff points is murky in many cases. however it will be advantageous if one apply the two algorithms in tandem in order to obtain a clear cut confirmation of the results .* n. e. huang - usa patent , date oct 30,2001 * n. e. huang et all , the empirical mode decomposition and the hilbert spectrum for nonlinear and non - stationary time series analysis " , proceedings of the royal society vol .454 pp.903 - 995 ( 1998 ) * gabriel rilling and patrick flandrin , one or two frequencies ? the empirical mode decomposition answers " , ieee trans .signal analysis vol .56 pp.85 - 95 ( 2008 ) .* zhaohua wu and norden e. huang , on the filtering properties of the empirical mode decomposition , advances in adaptive data analysis " , volume : 2 , issue : 4 pp .397 - 414 .( 2010 ) * albert ayenu - prah and nii attoh - okine , a criterion for selecting relevant intrinsic mode functions in empirical mode decomposition " , advances in adaptive data analysis , vol .2 , issue : 1(2010 ) pp . 1 - 24 .* george jumper , private communication " ( 2001 ) * dewan , e.m . , on the nature of atmospheric waves and turbulence , radio sci . " 20 , p. 1301 - 1307( 1985 ) . *kraichnan , r. , on kolmogorov inertial - range theories " , j. fluid mech ., 62 , p. 305 - 330( 1974 ) . * lindborg , e. , can the atmospheric kinetic energy spectrum be explained by two dimensional turbulence " , j. fluid mech , 388 , p. 259- 288 ( 1999 ) .* c. penland , m. ghil and k.m .weickmann , `` adaptive filtering and maximum entropy spectra , with application to changes in atmospheric angular momentum '' , j. geophys .res . , 96 , 22659 - 22671 ( 1991 ) .* d. n. axford , `` spectral analysis of aircraft observation of gravity waves '' , q.j .royal met .soc . , 97 , 313 - 321 ( 1971 ) .
|
the classical emd algorithm has been used extensively in the literature to decompose signals that contain nonlinear waves . however , when a signal contains two or more frequencies that are close to one another , the decomposition might fail . in this paper we propose a new formulation of this algorithm , based on the zero crossings of the signal , and show that it performs well even when the classical algorithm fails . we also address the filtering properties and convergence rate of the new algorithm versus the classical emd algorithm . these properties are then compared to those of the principal component algorithm ( pca ) . finally we apply this algorithm to the detection of gravity waves in the atmosphere . * keywords : * filtering , emd algorithm
|
the interplanetary space is permeated by the solar wind , a rarefied , magnetized plasma continuously expanding from the solar corona .the solar wind blows radially away from the sun , and extends up to about 100au , at supersonic and superalfvnic speed .measurements collected by spacecraft instruments during last decades have evidenced that low frequency fluctuations have power law spectra .this supports the study of fluctuations in the framework of magnetohydrodynamic ( mhd ) turbulence .the mhd nature of the turbulent fluctuations has recently been confirmed by more detailed analysis of the linear scaling of the mixed third order moment .one of the most interesting properties of solar wind turbulence is the intermittent character of the fluctuations of fields such as velocity , magnetic field , or the elsasser fields .intermittency is related to the non - homogeneous generation of energetic structures in the flow , due to the nonlinear transfer of energy across the scales as observed in geophysical flows and heliospheric plasmas , whose efficiency can be correlated with levels of cross - helicity or self - generated kinetic helicity . being ubiquitous in turbulence , intermittency plays a relevant role in the statistical description of the field fluctuations .the main manifestation of intermittency in fully developed turbulence is the scale dependent variation of the statistical properties of the field increments , customarily defined as for a unidimensional generic field at the scale .in particular , many studies have focused on scaling properties of the probability distribution functions ( pdfs ) of the field fluctuations , ) , and of the structure functions , defined as the moments of the distribution function of the field fluctuations ( here indicates an ensemble average ) .the intermittent ( i.e. non - homogeneous ) concentration of turbulent energy on small scale structures , as the energy is transferred through the scales , results in the enhancement of the pdfs tails , indicating that large amplitude fluctuations are more and more probable at smaller and smaller scale .correspondingly , the structure functions scaling exponents deviate from the linear prescription valid for scale invariant pdf . such deviation has been studied since early works on turbulence .methods based on wavelet transform or threshold techniques have been developed for the identification , description and characterization of the intermittent structures .on the other hand , models for intermittency try to reproduce the shape of the pdfs or the structure functions anomaly , allowing the quantitative evaluation of intermittency . in solar wind plasma, intermittency has been extensively characetrized in the recent years . as result of intermittent processes , small scale structures , like for example thin current sheets or tangential discontinuities ,have been identified in solar wind .their dependence on parameters such as solar activity level , heliocentric distance , heliolatitude and wind speed has been also pointed out .one of the models for the description of the increments pdfs was introduced by castaing , and successfully applied in several contexts . 
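the scale - dependent statistics discussed here reduce to a few lines of code once the increments are formed ; the sketch below estimates structure functions of a one - dimensional series for a set of time lags ( under the taylor hypothesis the lags can then be read as spatial scales ) . the synthetic input is only a stand - in for a measured field component .

```python
import numpy as np

def structure_functions(x, lags, orders=(1, 2, 3, 4)):
    """Moments of the increments |x(t + lag) - x(t)| for each lag (in samples)."""
    out = {}
    for lag in lags:
        dx = x[lag:] - x[:-lag]                  # field increments at this scale
        out[lag] = {p: np.mean(np.abs(dx) ** p) for p in orders}
    return out

# synthetic example standing in for a solar wind magnetic field component
rng = np.random.default_rng(2)
b = np.cumsum(rng.normal(size=200000))           # Brownian-like series, for illustration only
sf = structure_functions(b, lags=[2, 4, 8, 16, 32, 64])
```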
within the multifractal framework , for each scale , the energy transfer rate has non - homogeneous scaling properties , as for example the fractal dimension ( which is directly related to the cascade efficiency ) in different regions of space .the turbulent fields can be thus interpreted as a superposition of subsets , each characterized by a given fractal dimension , and with a typical energy transfer rate .each region can then be reasonably assumed to have the same distribution of the field fluctuations , with variable width ( depending on the cascade efficiency , and related with the local fractal dimension ) and weight ( depending on the fraction of space characterized by the same statistics ) .the castaing model pdf consists thus of the continuous superposition of such distributions , each contributing to the statistics with its appropriate weight .the latter is introduced through the distribution function of the widths .this leads , for each time scale , to the convolution : based on empirical large scale pdf shape , a gaussian parent distribution is normally used .it is known from turbulence studies that pdfs of fluctuations have to be skewed . indeed , symmetric pdfs would result in vanishing odd - order moments of the fluctuations , in contrast with experimental results .asymmetric pdfs are also necessary to satisfy the linear scaling of the ( non - vanishing ) third - order moment of the fluctuations , as required by theoretical results .thus , in order to account for this , a skewness parameter must also be included , so that \ , .\label{gaussian}\ ] ] the pdf of variances needs theoretical prescription .a log - normal ansatz has been often used , as conjectured in the framework of the multifractal cascade . \label{lognorm}\ ] ] such choice has been justified by assuming that the nonlinear energy transfer is the result of a multifractal fragmentation process , giving rise to a multiplicative hierarchy of energetic structures . by assuming random distribution of the multipliers ( namely , of the local efficiency of the cascade ) , the central limit theorem suggests a log - normal distribution of the local energy transfer rate .then , from dimensional considerations , the fluctuations variance can be expected to share the same statistical properties as the energy transfer rate , therefore giving the log - normal distribution ( [ lognorm ] ) . in equation ( [ lognorm ] ) , for , the log - normal pdf is a -function , so that the convolution ( [ convolution ] ) gives one gaussian of width the most probable value of . as increases , the convolution includes more and more values of , and the pdf tails are enhanced .therefore , the scaling of the parameter controls the shape of the pdf tails , and describes the deviation from the parent distribution , characterizing the intermittency in the inertial range .in fully developed turbulence , a power - law scaling is usually observed for .finally , a relationship can be established between the scaling exponent of and the multifractal properties of the flow . as briefly described above, the castaing model is based on hypotheses on the physical processes governing the turbulent cascade .the main assumption is the choice of the weights distribution , which needs appropriate theoretical modeling . 
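numerically , the model pdf is a one - dimensional integral over the spread of widths . the sketch below evaluates it with a symmetric gaussian parent and a log - normal distribution of widths ; the small skewness correction of the parent distribution mentioned above is omitted here for brevity , so this is an illustration of the convolution rather than the exact fitting function used on the data .

```python
import numpy as np

def castaing_pdf(dpsi, sigma0, lam, n_sigma=400):
    """Castaing-type PDF: Gaussian parent of width sigma, convolved with a
    log-normal distribution of widths (most probable width sigma0, spread lam).
    The skewness factor of the parent is omitted in this sketch."""
    ls = np.linspace(np.log(sigma0) - 5 * lam, np.log(sigma0) + 5 * lam, n_sigma)
    sigma = np.exp(ls)
    # log-normal weight written as a Gaussian in log(sigma)
    weight = np.exp(-(ls - np.log(sigma0)) ** 2 / (2 * lam ** 2)) / (np.sqrt(2 * np.pi) * lam)
    dpsi = np.atleast_1d(dpsi)[:, None]
    parent = np.exp(-dpsi ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return (weight * parent).sum(axis=1) * (ls[1] - ls[0])   # Riemann sum over log(sigma)

# as lam grows, the tails of the model PDF rise above the parent Gaussian
x = np.linspace(-10, 10, 201)
for lam in (0.05, 0.4, 0.8):
    p = castaing_pdf(x, sigma0=1.0, lam=lam)
    print(lam, p[100], p[-1])      # value at the centre and far in the tail
```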
in this paper , after verifying that the multifractal cascade framework applies to solar wind turbulence , we show that it is possible to describe solar wind intermittency without any hypothesis on the shape of such distribution , but rather using empirical weights . in section [ data ]we briefly introduce the data used for the analysis ; in section [ epsilon ] we estimate the empirical distribution function of the local energy dissipation rate ; section [ conditioned ] shows the conditioned analysis performed on the data in order to extract the empirical weights for the castaing model , the self - consistent castaing probability distribution functions are built and compared with the experimental pdfs .this work is based on the analysis of three different samples of _ in situ _ measurements of velocity , mass density , estimated using proton and particle , and magnetic field .the elsasser variables have also been evaluated from the time series .two samples were taken in the ecliptic wind by the helios 2 spacecraft during the first 4 months of 1976 , when the spacecraft orbited from on day 17 , to on day 108 .data resolution is seconds , and eleven fast or slow wind streams , each about hours long , have been extracted to avoid stream - stream interfaces and to ensure a better stationarity .as pointed out in the literature , fast and slow wind turbulence should be studied separately , because of the different plasma conditions .therefore , for our statistical analysis we built two distinct samples by putting together six streams for the fast wind ( which we name the _ fast _ sample , totalizing data points ) , and five streams for the slow wind ( hereafter the _ slow _ sample , consisting of data points ) .the third sample was recorded in the solar wind out of the ecliptic , by instruments on - board ulysses spacecraft , during the first eight months of 1996 .the spacecraft spanned distances in the range from to , and latitudes from about to .sampling resolution is minutes .we refer to this dataset as _ polar _ sample .all samples were taken at solar minima , when the solar wind is more steady , and free from disturbances of solar origin .we remind that , in order to study spacecraft time series , all scales are customarily transformed in the time lags through the bulk flow speed averaged over the entire data set .this is allowed by the taylor hypothesis , that is generally valid for solar wind fluctuations in the inertial range , and which we have tested in our data . in figure [ fig - pdfs ] , samples of the pdfs of standardized magnetic and velocity field fluctuations at different scalesare shown for fast and polar samples .the typical tail enhancement toward small scales appears evident for all cases .similar results are observed for the velocity and for the elsasser fields ( see e.g. ) .pdfs of magnetic field , velocity and elsasser fields fluctuations have been successfully reproduced using the castaing model given in equation ( [ convolution ] ) .in particular , the heavy tails are extremely well captured by the model , showing that the effect of intermittency has been well described .the fitting parameter shows a power - law scaling extended over about one decade . 
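As an informal illustration of the kind of PDFs just described, the sketch below builds standardized increment PDFs at a few time lags for a placeholder signal; the conversion from time lag to spatial scale through the bulk speed (Taylor hypothesis) is indicated only in a comment.

```python
# Sketch of standardized PDFs of increments at several time lags, showing how
# tail enhancement toward small scales would be inspected. Signal and lags are
# placeholders, not solar wind data.
import numpy as np

def standardized_pdf(signal, lag, bins=60):
    d = signal[lag:] - signal[:-lag]
    d = (d - d.mean()) / d.std()          # zero mean, unit variance
    hist, edges = np.histogram(d, bins=bins, range=(-8, 8), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

rng = np.random.default_rng(1)
b = np.cumsum(rng.standard_normal(200_000))   # stand-in for a field component
for lag in (1, 10, 100, 1000):                # in units of the sampling time
    x, p = standardized_pdf(b, lag)
    # under the Taylor hypothesis, scale ~ V_bulk * lag * dt for mean speed V_bulk
    print(lag, p.max())
```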
as mentioned above ,the scaling exponents are related to the fractal dimension of the most intermittent structures generated at the bottom of the cascade , therefore providing important physical information on the turbulent flow properties .the first step for the self - consistent characterization of the pdfs is to map the properties of the plasma energy transfer rate using the experimental time series . in solar wind turbulence , understanding the mechanism responsible for the dissipation ( and/or dispersion ) of the energy at the bottom of the nonlinear cascade is still an open issue .therefore , the actual expression of the dissipative ( and/or dispersive ) terms within mhd equations is not known .it should also be pointed out that , unlike ordinary turbulent flows , the solar wind plasma is only weakly collisional , so that molecular viscosity and resistivity can not be defined in a simple way , nor estimated directly from the measurements .these considerations show that it is not possible , to date , to measure the local energy dissipation rate in solar wind turbulence .however , it is possible to define proxies of the energy dissipation rate , that can be reasonably used to represent the statistical properties of the field . in this paper, we use a definition based on the third order moment scaling law for mhd , often referred to as politano - pouquet law ( pp ) .the pp law establishes , under given hypotheses ( stationarity , homogeneity , isotropy , incompressibility ) , the linear scaling of the mixed third order moment of the elsasser fields , in the right hand side of equation ( [ yaglom ] ) , is the mean energy transfer rate , estimated over the whole domain . by analogy , we define the `` local '' pseudo - energy transfer rate as : so that the local energy transfer rate at the scale reads . at a given scale ,each field increment can thus be associated with the local value of .since we are interested , in particular , in the small scale intermittent effects , from now on we will only use the resolution scale values of ( namely 81 seconds for helios 2 data and 8 minutes for ulysses data ) , which we will refer to as energy dissipation rate .figure [ fig - epsilon ] shows the variable , computed for fast and slow helios 2 streams and for the ulysses polar sample .differences between the three samples are evident , in particular for the ulysses dataset , probably because of the lower data resolution .for all cases , the field is highly irregular and inhomogeneous , with spikes of large dissipation alternated with quiet periods .the probability distribution functions of are shown in figure [ fig - pepsi ] for fast and slow wind , as obtained from helios 2 data , and for polar ulysses data .because of the inhomogeneity , pdfs have been computed using bins of variable width , by imposing a fixed number of data points in each bin ( for helios datasets , for ulysses data ) .the error bars are estimated as the counting ( poisson ) error on , and the three pdfs have been vertically shifted for clarity .log - normal fits of the distributions are overplotted in light grey , showing good agreement with the framework of a multiplicative cascade .alternatively , pdfs are even better reproduced ( with two to ten times smaller ) by a stretched exponential fit , where is a characteristic value of the energy dissipation rate , and is the parameter controlling the shape of the tails of the pdf . 
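Since the exact expressions are elided in this extract, the sketch below should be read as an assumption-laden illustration only: it uses one standard form of a local transfer-rate proxy consistent with the PP scaling (a mixed third-order combination of Elsasser increments divided by the scale), builds the PDF on equal-population bins as described above, and fits a stretched exponential; the shape parameter returned by the fit plays the role of the tail parameter discussed next.

```python
# Hedged sketch: a PP-style local energy-transfer proxy, an equal-count-bin
# PDF, and a stretched-exponential fit. The paper's exact proxy definition is
# not reproduced here; all data and names are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def local_transfer_proxy(zp, zm, lag, dt=1.0):
    """One standard proxy: -(3/4) |dz+-|^2 dz-+_L / scale, symmetrized."""
    dzp = zp[:, lag:] - zp[:, :-lag]       # rows are vector components
    dzm = zm[:, lag:] - zm[:, :-lag]
    ell = lag * dt
    ep = -0.75 * np.sum(dzp**2, axis=0) * dzm[0] / ell   # component 0 ~ radial
    em = -0.75 * np.sum(dzm**2, axis=0) * dzp[0] / ell
    return 0.5 * (ep + em)

def equal_count_pdf(x, per_bin=500):
    """PDF on variable-width bins holding a fixed number of points each."""
    xs = np.sort(x)
    edges = np.append(xs[::per_bin], xs[-1])
    counts, edges = np.histogram(x, bins=edges)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (widths * x.size)

def stretched_exp(e, A, e0, beta):
    return A * np.exp(-(np.abs(e) / e0) ** beta)

rng = np.random.default_rng(2)
zp = np.cumsum(rng.standard_normal((3, 100_000)), axis=1)   # placeholder z+
zm = np.cumsum(rng.standard_normal((3, 100_000)), axis=1)   # placeholder z-
eps = np.abs(local_transfer_proxy(zp, zm, lag=1))
c, p = equal_count_pdf(eps)
popt, _ = curve_fit(stretched_exp, c, p,
                    p0=(p.max(), np.median(eps), 1.0), maxfev=20000)
print("fitted shape parameter:", popt[2])
```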
in particular , indicates gaussian , exponential , and indicates heavy tailed , almost power - law distribution . in the present case , the shape parameters are indicated in figure [ fig - pepsi ] . both fast and slow wind measured by helios 2 show a strong deviation from gaussian ( ) , with presence of energy dissipation bursts that result in the heavy tails of the pdfs . on the contrary ,ulysses shows `` thinner '' tails , i.e. populated with less extreme events .this is to be expected , because of the different time resolution of the data . indeed , while ulysses sampling time sits in the middle of the inertial range , for helios 2 it is closer to its bottom .it should be recalled that stretched exponential pdfs were predicted for a variable generated as the result of a multiplicative process controlled by a few extreme events , in the framework of the extreme deviation theory .this supports once more the presence of a intermittent , multiplicative energy cascade in solar wind turbulence .estimated from the helios 2 data for fast wind ( blue circles ) and slow wind ( red squares ) , and from the ulysses polar data ( green triangles ) .fits with log - normal ( grey lines ) and with a stretched exponential function ( see legend ) are superposed , and the corresponding shape exponents are indicated .the three distributions have been vertically shifted for clarity.,width=377 ]in order to verify that the castaing model can properly be applied to solar wind turbulence , a conditioned analysis of the data was performed .the range of values of the energy dissipation rate was divided into bins of variable width ( with for fast , slow and polar samples , respectively ) , in order to separate different regions of the time series .the pdfs of field fluctuations , at the resolution time lag , were thus estimated separately for each bin , i.e. conditioned to the values of the energy dissipation rate .top - left panel of figure [ fig - conditioned ] shows , superimposed , all the ten conditioned pdfs of the resolution scale radial velocity field fluctuations for the polar wind dataset . conditioned pdfsno longer have the heavy tails observed at small time lags ( see figure [ fig - pdfs ] ) , and present roughly gaussian shape .the top - right panel of the same figure shows examples of the gaussian fits of the conditioned distributions . for clarity ,only four out of ten conditioning values of are plotted .this observation confirms the multifractal scenario , in which regions with the same energy dissipation rate are characterized by self - similar ( gaussian in this case ) pdfs of the field fluctuations , even at small scale .thus , the conditioned data lose intermittency , which indeed arises from the local fluctuations of the energy dissipation rate .top panels of figure [ fig - conditioned ] also qualitatively highlights the result of superimposing several gaussian distributions of variable width and amplitude , obtaining the heavy tails observed in the usual pdf .the use of the castaing distribution is therefore appropriate to describe solar wind intermittent turbulence . for the radial component of the velocity in the polar ulysses sample , for ten different values of .top - right panel : the gaussian fit of for the same case , only shown for four values of .central panels : pdfs of for the three dataset , for the radial component of velocity ( center - left panel ) and magnetic field ( center - right panel ) . 
fits with a stretched exponential law are superposed ( solid lines ) , and the fitting parameters are indicated .bottom panels : relationship between the energy dissipation rate and the variance for velocity ( bottom - left panel ) and magnetic field ( bottom - right panel ) fluctuations.,title="fig:",width=283 ] for each conditioned pdf , corresponding to a given set of constant dissipation rate , it is possible to obtain the probability density function of the standard deviations , where is the ( fixed ) number of points in each dissipation bin .this is done for each dataset by exploiting the correspondence between a given bin of energy dissipation rate and the standard deviation of the corresponding field fluctuations . in other words , for each field and for each value of , a value of is obtained through the gaussian fit of the conditioned pdfs .then , the probability is the fraction of data associated with that interval of values .this allows to establish an explicit relationship between the statistical properties of the energy dissipation rate and of the standard deviation , postulated in the castaing model .central panels of figure [ fig - conditioned ] shows the probability distribution functions of the standard deviation of the fields associated to the distributions of , along with log - normal and stretched exponential fits . in this case , the log - normal fit is not always able to capture the form of the distribution , while the stretched exponential fits are very good . for the latter ,the smaller exponents observed for the velocity fluctuations in all dataset , and suggesting a lower intermittency , agree with the higher level of intermittency of the magnetic field .it is also interesting to check directly from the data the validity of the assumption originally used in the castaing model , namely the phenomenological relationship between the energy dissipation rate and the variance of the fluctuations. this can be simply done by plotting _ versus _ , and verifying the power - law relation expected from a simple dimensional argument on the third orer scaling relation given by eq .( [ yaglom ] ) , resulting in . as can be seen in the bottom panels of figure[ fig - conditioned ] , the power - law relationship is very well verified , with exponents compatible with the expected value .the error bars are estimated as half the size of the energy dissipation rate bin .similar results apply to the other components of the fields and to the elsasser fields ( not shown ) .note that a similar analysis performed on ordinary fluid turbulence provided instead . with all this information at hands, the self - consistent castaing distribution ( [ convolution ] ) can be finally discretized as follows : equation ( [ selfconsistent ] ) only includes two free parameters : the normalization factor , and the skewness factor , while the information on the pdf shape is self - consistently included through the distribution .the pdfs of the fields fluctuations can now be compared with the self - consistent castaing model , by fitting the data with equation ( [ selfconsistent ] ) in order to adjust only for the two free parameters .these are however irrelevant for the tails curvature , i.e. 
for intermittency , so that there is no need to discuss them here .figure [ fig - self - castaing ] shows the pdfs of fluctuations , as in previous figure [ fig - pdfs ] ( markers ) , together with the fit with equation ( [ selfconsistent ] ) ( full line ) .the self - consistent model reproduces the shape of the pdf tails very satisfactorily in all cases . in order to test the goodness of the fit, a synthetic has been built by averaging over realizations of random distributions , generated for in the same interval as each solar wind dataset , $ ] .the fit of the data with the self - consistent castaing model , using the synthetic , is shown in figure [ fig - self - castaing ] as dotted line .it is evident that a superposition of gaussians with a random choice of the weigth distribution does not allow the correct reproduction of the data .quantitatively , the random gaussian superposition fits have values larger by a factor of ten with respect to the self - consistent castaing fit .in the attempt to understand the mechanism responsible for the generation of the highly intermittent turbulent fluctuations of solar wind fields , we have explored here the existence of a non - homogeneous energy cascade , a base hypothesis of several models .the intermittent solar wind field fluctuations , deeply investigated in the last decade , have often been described through the castaing pdf model . in this work ,we have evaluated empirically the weights distribution of the castaing model ( eq . ( [ convolution ] ) ) , rather than assuming an analytical prescription ( usually a log - normal distribution ) . for the sake of generality , we have selected three different datasets , consisting of one fast and one slow ecliptic wind samples ( helios 2 data ) , and one fast polar wind sample ( ulysses data ) . since it is not possible , to date , to measure the energy dissipation rate from the data ,we have used a proxy derived from the exact statistical scaling law for the mixed third order moment of the magnetohydrodynamic fluctuations .although such law is expected to apply only for homogeneous , isotropic , incompressible mhd turbulence , its validity in the solar wind time series has recently been established by several authors ( e.g. .therefore , the use of the proxy introduced here is appropriate . after estimating the local value of the energy dissipation rate , we have evaluated its statistical properties .the probability distribution functions are consistent with a log - normal function , predicted by the typical theoretical models used for the description of the turbulent fields as the result of a non - homogeneous , random multiplicative cascade .alternatively , a stretched exponential fit was also used for all datasets . stretched exponential distributions match the theoretical framework of the extreme deviations theory ( edt ) , where the statistical properties of the multiplicative cascade process are controlled by the most extreme intermittent events ( the heavy tails of the pdfs ) .the stretching parameter , obtained from the fit , can be used to describe quantitatively the degree of inhomogeneity , being related to presence of correlations or clustering in the dissipation field .the results obtained here ( see figure [ fig - pepsi ] ) confirm the stronger intermittency of slow wind with respect to fast and polar wind . starting from this observation , we have then studied the conditioned pdfs of the fields fluctuation , the conditioning parameter being the local energy dissipation rate . 
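A compact sketch of this conditioning-and-reconstruction procedure, on placeholder data, is given below: the proxy is split into equal-population classes, a Gaussian width is fitted to the increments in each class, and the empirical weights and widths are reassembled into a discrete Castaing superposition (skewness and normalization factors omitted).

```python
# Illustrative sketch (placeholder data) of the conditioned analysis and the
# self-consistent Castaing reconstruction described above.
import numpy as np

def conditioned_castaing(increments, epsilon, n_classes=10):
    order = np.argsort(epsilon)
    classes = np.array_split(order, n_classes)     # equal-population eps bins
    sigmas, weights = [], []
    for idx in classes:
        d = increments[idx]
        sigmas.append(d.std())                     # width of the conditioned PDF
        weights.append(len(idx) / increments.size) # empirical weight P(sigma_j)
    sigmas, weights = np.array(sigmas), np.array(weights)

    def model_pdf(x):
        g = np.exp(-x[:, None] ** 2 / (2 * sigmas[None, :] ** 2)) / (
            np.sqrt(2 * np.pi) * sigmas[None, :])
        return g @ weights                          # discrete superposition
    return sigmas, weights, model_pdf

rng = np.random.default_rng(3)
eps = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)        # placeholder proxy
dpsi = rng.standard_normal(100_000) * eps ** (1.0 / 3.0)      # sigma ~ eps^(1/3)
sig, w, pdf = conditioned_castaing(dpsi, eps)
x = np.linspace(-10, 10, 201)
print(pdf(x)[::50])
```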
as expected within the multifractal picture ( but never observed in the solar wind so far ) , conditioning results in loss of intermittency , so that self - similarity is restored when the inhomogeneity of the energy dissipation rate is removed .this result shows that the framework of the multifractal energy cascade applies to solar wind turbulence .upon observation of their gaussian shape , the conditioned pdfs have finally been used to build the castaing distribution self - consistently , through the empirical distribution of their standard deviations , .the model pdf drawn following this procedure fits very well the shape of the experimental pdfs , and in particular can capture the curvature of the tails .the self - consistent castaing model can therefore reproduce the intermittency of the fields .we remark that , the selection process for the conditioned pdfs being based on the dissipation rate , the relevant physical information necessary to describe the intermittent pdfs is the non - homogeneous distribution of the local energy dissipation rate .the role of the shape parameter , used to describe quantitatively the degree of intermittency , is now played by the stretching parameter of the distributions of the energy dissipation rate , not directly involved in the self - consistent castaing model . in conclusion , the stretched exponential distribution of the local energy dissipation rate , the gaussian shape of the conditioned pdfs , and the successful application of the self - consistent castaing model , all strongly support the picture of the multifractal energy cascade being at the origin of the intermittency of solar wind turbulence .the validity of such scenario holds for both fast , slow and polar samples , confirming that the turbulent cascade is always active in the solar wind .
|
the intermittent behavior of solar wind turbulent fluctuations has often been investigated through the modeling of their probability distribution functions ( pdfs ) . among others , the castaing model has successfully been used in the past . in this paper , the energy dissipation field of solar wind turbulence has been studied for fast , slow and polar wind samples recorded by helios 2 and ulysses spacecraft . the statistical description of the dissipation rate has then be used to remove intermittency through conditioning of the pdfs . based on such observation , a self - consistent , parameter - free castaing model is presented . the self - consistent model is tested against experimental pdfs , showing good agreement and supporting the picture of a multifractal energy cascade at the origin of solar wind intermittency .
|
in recent years , a number of papers have considered the feedback control of systems whose dynamics are governed by the laws of quantum mechanics rather than classical mechanics ; e.g. , see . in particular , the papers consider a framework of quantum systems defined in terms of a triple where is a scattering matrix , is a vector of coupling operators and is a hamiltonian operator .the papers consider the problem of absolute stability for a quantum system defined in terms of a triple in which the quantum system hamiltonian is decomposed as where is a known nominal hamiltonian and is a perturbation hamiltonian , which is contained in a specified set of hamiltonians .in particular the papers consider the case in which the nominal hamiltonian is a quadratic function of annihilation and creation operators and the coupling operator vector is a linear function of annihilation and creation operators .this case corresponds to a nominal linear quantum system ; e.g. , see . the results in have recently been extended to allow for uncertainties in the coupling operator .also , the results of have been applied to the robust stability analysis of a quantum system which consists of a josephson junction in a resonant cavity . in the paper , it is assumed that is contained in a set of non - quadratic perturbation hamiltonians corresponding to a sector bound on the nonlinearity . in this case, obtains a robust stability result in terms of a frequency domain condition .also , the paper restricts attention to quadratic perturbation hamiltonians . in this case , which corresponds to linear perturbed quantum systems ,a frequency domain robust stability condition is also obtained .an example considered in the paper involves the robust stability analysis of a quantum system consisting of a linearized optical parametric amplifier ( opa ) .optical parametric amplifiers are widely used in the field of experimental quantum optics ; e.g. , see .in particular , they can be used as optical squeezers which produce squeezed light which has a smaller noise variance in one quadrature than the standard quantum limit .this is at the expense of a larger noise variance in the other quadrature ; e.g. , see .such an opa can be produced by enclosing a second - order nonlinear optical medium in an optical cavity ; e.g. , see .thus , an opa is an inherently nonlinear quantum system .however , the paper only dealt with linear perturbed quantum systems and thus the results of this paper could only be used to analyze the robust stability of a linearized version of the opa .furthermore the results of on nonlinear perturbed quantum systems can not be directly applied to the opa system since the results of only deal with scalar nonlinearities but the nonlinearity in the opa model is dependent on two variables ; e.g. , see . in this paper, we extend the result of on the robust stability of nonlinear quantum systems to allow for non - quadratic perturbations in the hamiltonian which depend on multiple variables .this enables us to analyze the robust stability of the opa nonlinear quantum system .in this section , we describe the general class of quantum systems under consideration . 
as in the papers , we consider uncertain nonlinear open quantum systems defined by parameters where is the scattering matrix which is typically chosen as the identity matrix , l is the coupling operator and is the system hamiltonian operator which is assumed to be of the form m \left[\begin{array}{c}a \\ a^\#\end{array}\right]+f(z , z^*).\ ] ] here is a vector of annihilation operators on the underlying hilbert space and is the corresponding vector of creation operators . also , is a hermitian matrix of the form \ ] ] and , . in the casevectors of operators , the notation refers to the transpose of the vector of adjoint operators and in the case of matrices , this notation refers to the complex conjugate transpose of a matrix . in the casevectors of operators , the notation refers to the vector of adjoint operators and in the case of complex matrices , this notation refers to the complex conjugate matrix .also , the notation denotes the adjoint of an operator .the matrix is assumed to be known and defines the nominal quadratic part of the system hamiltonian .furthermore , we assume the uncertain non - quadratic part of the system hamiltonian is defined by a formal power series of the form which is assumed to converge in some suitable sense . here , , and ^t ] ; e.g. , see . to define the set of allowable perturbation hamiltonians , we first define the following formal partial derivatives : and for given constants , , , we consider the sector bound condition and the condition then we define the set of perturbation hamiltonians as follows : note that the condition ( [ sector4b ] ) effectively amounts to a global lipschitz condition on the quantum nonlinearity . as in , we will consider the following notion of robust mean square stability .[ d1 ] an uncertain open quantum system defined by where is of the form ( [ h1 ] ) , , and of the form ( [ l ] ) is said to be _ robustly mean square stable _ if there exist constants , and such that for any , ^\dagger \left[\begin{array}{c}a(t ) \\ a^\#(t)\end{array}\right ] \right>}\nonumber \\ & \leq & c_1e^{-c_2t}\left < \left[\begin{array}{c}a \\a^\#\end{array}\right]^\dagger \left[\begin{array}{c}a \\ a^\#\end{array}\right ] \right > + c_3~~\forall t \geq 0.\end{aligned}\ ] ] here ] ; e.g. , see . we will show that the following small gain condition is sufficient for the robust mean square stability of the nonlinear quantum system under consideration when : 1 . the matrix 2 . + where . ] denotes the commutator between two operators . in the case of a commutator between a scalar operator and a vector of operators , this notation denotes the corresponding vector of commutator operators .also , denotes the heisenberg evolution of the operator and denotes quantum expectation ; e.g. , see . we will consider quadratic `` lyapunov '' operators of the form \left[\begin{array}{c}a \\ a^\#\end{array}\right]\ ] ] where is a positive - definite hermitian matrix of the form .\ ] ] hence , we consider a set of non - negative self - adjoint operators defined as [ l4 ] given any , then \right ] = \left[z_i^*,[z_i^*,v]\right]^ * = -\tilde e_i\sigma jpj\tilde e_i^t,\ ] ] which are constants for ._ proof : _ the proof of this result follows via a straightforward but tedious calculation using ( [ ccr2 ] ) . 
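The two ingredients of the small-gain condition stated above can be checked numerically once the relevant matrices are in hand. The sketch below is generic and hedged: the matrices, input/output maps and bound are arbitrary placeholders (the model's actual expressions are elided in this extract), and the H-infinity norm is estimated by a simple frequency sweep rather than a dedicated solver.

```python
# Hedged sketch: check (i) that a system matrix is Hurwitz and (ii) that the
# H-infinity norm of C (sI - F)^{-1} B + D lies below a given bound. All
# matrices and the bound are placeholders, not the model's actual data.
import numpy as np

def is_hurwitz(F):
    return np.all(np.linalg.eigvals(F).real < 0)

def hinf_norm(F, B, C, D=None, n_freq=4000, wmax=1e3):
    """Estimate the H-infinity norm by sweeping the imaginary axis."""
    n = F.shape[0]
    D = np.zeros((C.shape[0], B.shape[1])) if D is None else D
    freqs = np.concatenate(([0.0], np.logspace(-3, np.log10(wmax), n_freq)))
    peak = 0.0
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - F, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

F = np.array([[-1.0, 0.5], [-0.5, -2.0]])   # placeholder system matrix
B = np.eye(2)
C = np.eye(2)
gamma_bound = 1.0                            # placeholder for the required bound
print(is_hurwitz(F), hinf_norm(F, B, C) < gamma_bound)
```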
[ lb ] given any , then & = & \sum_{i=1}^p[v , z_i]w_{1i}^ * -\sum_{i=1}^pw_{1i}[z_i^*,v]\nonumber \\ & & + \frac{1}{2}\sum_{i=1}^p\mu_i w_{2i}^*\nonumber \\ & & -\frac{1}{2}\sum_{i=1}^p w_{2i}\mu_i^*\end{aligned}\ ] ] where ^t ] and hence , [ z^\#,v ] = 4\left[\begin{array}{c}a \\ a^\#\end{array}\right]^\dagger pj\sigma \tilde e^t \tilde e^\ # \sigma jp \left[\begin{array}{c}a \\ a^\#\end{array}\right].\end{aligned}\ ] ] also , we can write ^\dagger \sigma \tilde e^t \tilde e^\ # \sigma\left[\begin{array}{c}a \\ a^\#\end{array}\right].\ ] ] hence using lemma [ l2 ] , we obtain m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & & + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l + [ v , z^t][z^\#,v ] + \frac{z^tz^\#}{\gamma^2 } \nonumber \\ & = & \left[\begin{array}{c}a \\ a^\#\end{array}\right]^\dagger\left(\begin{array}{c } f^\dagger p + p f\\ + 4 pj\sigma \tilde e^t \tilde e^\ # \sigma jp \\ + \frac{1 } { \gamma^2}\sigma \tilde e^t \tilde e^\ # \sigma\\ \end{array}\right)\left[\begin{array}{c}a \\ a^\#\end{array}\right]\nonumber \\ & & + \tr\left(pjn^\dagger\left[\begin{array}{cc}i & 0 \\ 0 & 0 \end{array}\right]nj\right)\end{aligned}\ ] ] where .we now observe that using the strict bounded real lemma , ( [ hurwitz1 ] ) and ( [ hinfbound1 ] ) imply that the matrix inequality will have a solution of the form ( [ pform ] ) ; e.g. , see .this matrix defines a corresponding operator as in ( [ quadv ] ) . from this, it follows using ( [ lyap_ineq3 ] ) that there exists a constant such that m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & & + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l + \sum_{i=1}^p[v , z_i][z_i^*,v ] \nonumber \\ & & + \frac{1 } { \gamma^2}\sum_{i=1}^p z_iz_i^ * + cv \leq \tilde \lambda .\nonumber \\\end{aligned}\ ] ] with \right ) \geq 0.\ ] ] also , it follows from lemma [ lb ] that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l}\nonumber \\ & = & -\imath[v , f(z , z^*)]-\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & & + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l\nonumber \\ & = & -\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & & + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l\nonumber \\ & & -\imath\sum_{i=1}^p[v , z_i]w_{1i}^*+\imath\sum_{i=1}^pw_{1i}[z_i^*,v]\nonumber \\ & & -\frac{1}{2}\imath\sum_{i=1}^p\mu_i w_{2i}^*+\frac{1}{2}\imath \sum_{i=1}^p w_{2i}\mu_i^*. \end{aligned}\ ] ] furthermore , ^ * = z_i^*v - vz_i^*=[z_i^*,v] ] , and } \geq 0 ] , , , .we now investigate whether this function satisfies the conditions ( [ sector4a ] ) and ( [ sector4b ] ) .first we calculate from this , we can immediately see that the conditions ( [ sector4a ] ) and ( [ sector4b ] ) will not be globally satisfied . in order to overcome this difficulty , we first note that any physical realization of a optical nonlinearity will not be exactly described by the model ( [ opa ] ) but rather will exhibit some saturation of the nonlinear effect . 
in order to represent this effect, we could assume that the true function describing the hamiltonian of the opa is such that the first two non - zero terms in its taylor series expansion ( [ h2nonquad ] ) correspond to the standard hamiltonian defined by ( [ opaf ] ) .furthermore , we could assume that the true function is such that the conditions ( [ sector4a ] ) and ( [ sector4b ] ) are satisfied for suitable values of the constants , , . herethe quantity will be proportional to the saturation limit .an alternative approach to dealing with the issue that the conditions ( [ sector4a ] ) and ( [ sector4b ] ) will not be globally satisfied by the function defined in ( [ opaf ] ) is to assume that these conditions only hold over some `` domain of attraction '' and then only conclude robust asymptotic stability within this domain of attraction .this approach requires a semi - classical interpretation of the function since formally the operators and are unbounded operators .however , it leads to results which are consistent with the known physical behavior of an opa in that it can become unstable and oscillate if the magnitudes of the driving fields are too large ; e.g. , see . in practice, the true physical situation will combine aspects of both solutions which we have mentioned but we will concentrate on the second approach involving a semi - classical `` domain of attraction '' . in order to calculate the region on which our theory can be applied , we note that for our opa model , the condition ( [ sector4a ] ) will be satisfied if we now consider the case in which . in this case , the condition ( [ domain_1 ] ) is equivalent to the condition in the case that , the left hand side of ( [ domain_1 ] ) is always negative and the right hand side of ( [ domain_1 ] ) is always positive .hence in this case , the condition ( [ domain_1 ] ) will always be satisfied .also , the condition ( [ sector4b ] ) will be satisfied if the conditions ( [ domain_2 ] ) and ( [ domain_3 ] ) define the region to which our theory can be applied in guaranteeing the robust mean square stability of the opa system .this region is represented diagrammatically in figure [ f2 ] .the constraints ( [ domain_2 ] ) and ( [ domain_3 ] ) can be interpreted as bounds on the average values of the internal cavity fields for which robust mean square stability can be guaranteed ; see also . ) and ( [ sector4b ] ) are satisfied . here . ,width=302 ] we now investigate the strict bounded real conditions ( [ hurwitz1 ] ) , ( [ hinfbound2 ] ) . for this system , it follows from the definition ( [ hurwitz1 ] ) that the matrix is given by \ ] ] which is hurwitz for all and .thus , the condition ( [ hurwitz1 ] ) is always satisfied .also , we calculate the transfer function matrix as .\ ] ] it is straightforward to show that this transfer function matrix has an norm of thus , for this system , the condition ( [ hinfbound2 ] ) is equivalent to the condition hence , using theorem [ t4 ] and lemma [ l3 ] , we can conclude that the opa system ( [ opa ] ) is robustly means square stable provided that the condition ( [ gamma_bound ] ) is satisfied and heisenberg evolution of the quantities and are such that the conditions ( [ domain_2 ] ) and ( [ domain_3 ] ) remain satisfied .note that in most experimental situations , ; e.g. 
, see .this means that if we then equate , we obtain which can be substituted into the right hand side of ( [ domain_2 ] ) to obtain an upper bound on the region for which the conditions ( [ sector4a ] ) and ( [ sector4b ] ) are satisfied .this region is defined by ( [ domain_3 ] ) and the inequality also , note that the region defined by ( [ domain_2 ] ) and ( [ domain_4 ] ) will only be an upper bound on a domain of attraction for the opa system . to find an actual domain of attraction for this system, we would need to find an invariant subset contained in the region defined by ( [ domain_2 ] ) and ( [ domain_4 ] ) .such an invariant set could be chosen to be an ellipsoidal region defined by the quadratic lyapunov function arising from the matrix solving ( [ qmi2 ] ) .in this paper , we have extended the robust stability result of to the case of non - quadratic perturbations to the hamiltonian which depend on multiple parameters .this led to a robust stability condition of the form of a multi - variable small gain condition .this condition was then applied the robust stability analysis of a nonlinear quantum system consisting of an opa and the stability region for this system was investigated .m. yanagisawa and h. kimura , `` transfer function approach to quantum control - part i : dynamics of quantum feedback systems , '' _ ieee transactions on automatic control _ ,48 , no . 12 , pp . 21072120 , 2003 .s. z. s. hassen and i. r. petersen , `` optimal amplitude quadrature control of an optical squeezer using an integral lqg approach , '' in _ proceedings of the ieee multi - conference on systems and control _ , yokohama , japan , 2010 .
|
this paper considers the problem of robust stability for a class of uncertain nonlinear quantum systems subject to unknown perturbations in the system hamiltonian . the case of a nominal linear quantum system is considered with non - quadratic perturbations to the system hamiltonian . the paper extends recent results on the robust stability of nonlinear quantum systems to allow for non - quadratic perturbations to the hamiltonian which depend on multiple parameters . a robust stability condition is given in terms of a strict bounded real condition . this result is then applied to the robust stability analysis of a nonlinear quantum system which is a model of an optical parametric amplifier .
|
football is probably the most popular sport in the world with around 265 million active players around the globe and even more people enjoy watching it .every year a lot of money is spent by football clubs in attempt to build a strong team by buying good players from their rivals .most of the data available on official football unions or tournament websites normally only addresses a specific match , tournament or season . in order to collect the data for top leagues in the last few seasons ,we need to look elsewhere .a very interesting website from this perspective is ` www.transfermarkt.co.uk ` .it contains all the major leagues including all the clubs , rankings , players , information about the players and also their estimated market value .+ in an attempt to analyse players and the connections between them we construct a large network of professional football players from different clubs in different leagues .we are particularly interested in the influence of the teammates on a football player and if it is possible to identify the best players , based on knowing with whom they play now and where they played in the past . using these analyseswe could be able to find out which players are the best according to different metrics . in a football player network, two players are connected to each other if they have ever played for the same club .such network can be represented by a bipartite graph consisting of clubs and players .every player is connected to all the clubs he has played for and through the nodes that represent clubs we are able to see which players played together for a specific club . for simpler analysiswe separate these problems and project the bipartite graph to a network constructed only from nodes representing football players .two nodes are connected to each other if they were ever teammates .this is an undirected network .apart from the analysis of players , we also want to identify the best springboard clubs that are the players entry point into the best football clubs in the world .because we do not include information about clubs in the first network , we construct a second network .the second network is a club transfer network .clubs from the top twenty leagues represent nodes which are connected if any player was ever transferred from one club to another .the direction of the edge points from the club that sold the player to the club that bought the player .+ preliminary analysis on an unweighted undirected player collaboration network shows that weighted networks are needed in order to extract information about the best players .we expect very well known football players to come on top when analysing a weighted player collaboration network . in order to identify springboard clubsa weighted directed club transfer network has to be constructed .weights of the edges are calculated using different equations that take into account multiple metrics .using those networks we identify the top players in the world of football and the top springboard clubs .as it has been pointed out in , football data is becoming more easily available in the past years since fifa has made more data regarding different matches available on their website .many authors took advantage of that and constructed different networks to perform network analysis and gather information from the networks . 
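The bipartite player-club construction and its projection described above can be prototyped directly with networkx; the toy data below are invented and only meant to show the projection step.

```python
# Toy sketch of the bipartite player-club graph and its projection onto a
# player collaboration network (two players linked if they shared a club).
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
players = ["player_a", "player_b", "player_c"]
clubs = ["club_x", "club_y"]
B.add_nodes_from(players, bipartite=0)
B.add_nodes_from(clubs, bipartite=1)
B.add_edges_from([("player_a", "club_x"), ("player_b", "club_x"),
                  ("player_b", "club_y"), ("player_c", "club_y")])

collab = bipartite.projected_graph(B, players)   # undirected player network
print(sorted(collab.edges()))   # a-b share club_x, b-c share club_y
```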
in the authors used some interesting approaches to reveal key players of a certain team , performing analysis on a passing network of a specific team .they showed we are able to identify different kinds of strategies of a team such as focusing passes on a single player or evenly distributing passes between all players in the team .they performed several analyses on a team passing network using very well known network analysis methods such as pagerank , betweenness centrality and closeness centrality .+ player contribution to a team was also analysed in .they used a variation of betweenness centrality of the player with regard to opponent s goal , which authors denoted as flow centrality .we use similar network theory methods , but we adapt them to test different theories . in dug a little deeper but followed the same idea .they only concentrated on one specific team and constructed more networks for the same match , introducing the time dimension .+ although there are various papers regarding football network analysis , the majority of football networks are only considering a certain match or tournament . in this paperwe construct a much larger network consisting of thousands of players .in other team sports , such as cricket , some authors have already tried to identify the best individuals among all the players that played over a certain time period .a very interesting networks considering sportsmen throughout several decades were analysed in . in authors attempted to find out who is the best player of tennis in history of this sport .we try to construct a somewhat similar network but since football is very different from tennis the networks still differ a lot .since the main difference is that football is a team sport , we can not just link players based on their matches . hereplayers are connected based on their affiliation to a club .in this paper we analyse a large set of football players throughout the past fifteen seasons . 
in order to collect this data we use the site ` www.transfermarkt.co.uk ` , which is becoming the leading portal when it comes to football players and information about them .several scripts are used to extract relevant data for different clubs and players .network is constructed from players out of 20 most valuable football leagues from year 2001 to 2016 .the leagues and their values are presented in table [ fig : leaguevalue ] .+ using the gathered data we constructed two separate networks , first one consisting of football players and the other consisting of football clubs .football player network is a player collaboration network where players are connected if they ever played together at the same club .it is an undirected weighted network consisting of 36,214 nodes and 1,412,232 edges .other basic network properties are shown in table [ fig : playersnetwork ] .club network is a directed transfer network between all the clubs in the top twenty world football leagues .nodes represent clubs and a club is connected to another club if a player was ever transferred from the first club to the second club .it is a directed weighted network consisting of 330 nodes and 12,841 edges .other basic network properties are shown in table [ fig : clubsnetwork ] .llcl & & * value * & + & & + & & + & & + & & + & & + llcl & & + & & + & & + & & + & & + & & + [ fig : playersdistribution ] * league * & * value [ ] * & & * league * & * value [ ] * + permier league ( eng ) & 3,01bn & & pro league ( bel ) & 354 m + la liga ( esp ) & 2,25bn & & primera division ( arg ) & 306 m + serie a ( ita ) & 1,79bn & & premier liga ( ukr ) & 299 m + 1 .bundesliga ( ger ) & 1,65bn & & super league(gre ) & 227 m + ligue 1 ( fra ) & 1,06bn & & super league ( swi ) & 175 m + super lig ( tur ) & 698 m & & mls ( usa ) & 162 m + premier liga ( rus ) & 638 m & & liga 1 ( rom ) & 118 m + serie a ( bra ) & 608 m & & 1 .hnl ( cro ) & 115 m + liga nos ( por ) & 574 m & & bundessliga ( aut ) & 110 m + eredivisie ( ned ) & 375 m & & premiership ( sco ) & 85 m + in order to reveal the best players in our network , we choose an appropriate method of determining node importance .since we wanted to identify the best players in the last fifteen seasons , we expected the most known and valued names of football to be at the top of the list . not to neglect younger players , we also separate players into age groups .we analysed each age group individually in order to identify the most perspective players .since our network is a collaboration network , we have to categorize the edges .the players that play with the best players are usually good themselves . players with a lower value may change a lot of clubs and change a lot of teammates in a couple of seasons , but this categorization penalises their edges . 
in general , player market value is a good identifier of the quality of a player .therefore we choose market value as a core property to calculate the edge weight .since our data spans over fifteen seasons , we have to take into account the inflation , so that good players that played in the past are not penalised .we gather average inflation rate from .the final formula for calculating the weight of a specific edge is symbols and are values of players that are connected by the edge , represents the seasons in which players played together and represents average inflation ratio per year for europe in the last 13 years .the equation is divided by 100000 , to obtain smaller numbers .to calculate which node is the most important , we choose one of the most popular node importance algorithms , pagerank .we calculate the pagerank score of every node in our weighted network . to identify the most perspective players , we separate players into age groups .the most perspective players have the highest score in their age groups . from the club transfer networkwe want to identify the springboard clubs .these are the clubs where younger players gather experience and are later sold to better or even the best clubs in the world . similar to the player collaboration network , this network has to be weighted as well .we are able to extract the number of transfers in both directions for all pairs of clubs but the absolute number does not provide the necessary information for springboard club identification .thus , we have to weight every edge , representing the number of transfers from one club to another , with a weight related to the importance of the destination club .the importance of the destination club is calculated using two different equations .one is based on average ranking of the destination club in the past fifteen seasons and the ranking of the league they play in , and the other one is based on the destination club value . both equations are stated and explained below . weight in the equation [ eq : clubsweightranking ] is calculated as a reciprocal value of destination club average ranking in the past fifteen seasons multiplied by the our predefined destination club league ranking .predefined league rankings can be found in table [ fig : leaguerankings ] and are defined for the purpose of this paper .weight in the equation [ eq : clubsweightvalue ] is calculated as destination club average value in the past fifteen seasons divided by to lower the weight values .+ to identify springboard clubs we have to choose a different method from the one we use for player collaboration network .the most important thing in this network are the transfer paths from less valuable to the most valuable clubs .a club is considered a springboard if it is involved in a lot of transfers to the most valuable clubs .thus , the betweenness centrality is the most suitable measure .we implement a fast betweenness algorithm discussed in .since our network is weighted we have to modify the proposed algorithm so it takes weights into account . the only difference from the proposed algorithm is calculation of path lengths where we do not add one for every hop but take weight into account .we have to take the reciprocal value of weight as in our network larger weight is better and we want to favour edges with larger weights . 
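A hedged sketch of the two centrality computations described above is given below on toy graphs. The precise edge-weight formula is elided in this extract, so the weights used here are arbitrary placeholders; the sketch only shows weighted PageRank on the collaboration network and betweenness centrality on the transfer network with reciprocal weights used as distances, as described in the text.

```python
# Toy sketch: weighted PageRank on a player collaboration graph and weighted
# betweenness centrality on a directed club transfer graph, with reciprocal
# weights as distances so heavier edges are favoured. Edge weights are made up.
import networkx as nx

collab = nx.Graph()
collab.add_weighted_edges_from([("p1", "p2", 120.0), ("p2", "p3", 40.0),
                                ("p1", "p3", 5.0), ("p3", "p4", 1.0)])
scores = nx.pagerank(collab, weight="weight")
print("top player:", max(scores, key=scores.get))

transfers = nx.DiGraph()
transfers.add_weighted_edges_from([("club_a", "club_b", 3.0),
                                   ("club_b", "club_c", 9.0),
                                   ("club_a", "club_c", 1.0),
                                   ("club_c", "club_d", 2.0)])
for u, v, d in transfers.edges(data=True):
    d["dist"] = 1.0 / d["weight"]            # distance = reciprocal of weight
springboard = nx.betweenness_centrality(transfers, weight="dist")
print("top springboard:", max(springboard, key=springboard.get))
```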
*league * & * ranking * & & * league * & * ranking * + la liga ( esp ) & 100 & & premier league ( ukr ) & 20 + premier league ( eng ) & 95 & & super league ( swi ) & 20 + serie a ( ita ) & 85 & & serie a ( bra ) & 20 + bundesliga ( ger ) & 75 & & super lig ( tur ) & 15 + ligue 1 ( fra ) & 50 & & primera division ( arg ) & 15 + primera liga ( por ) & 40 & & super league(gre ) & 13 + eredivisie ( ned ) & 40 & & liga 1 ( rom ) & 12 + pro league ( bel ) & 25 & & 1 .hnl ( cro ) & 10 + premier league(rus ) & 25 & & bundesliga ( aut ) & 10 + premiership ( sco ) & 20 & & mls ( usa ) & 5 +after running the analysis on the player collaboration network , we can show that the best player according to our analysis is cristiano ronaldo .he is followed by several other players that have played for several of the best clubs . by looking at the table [ fig : resultsplayers ] , where top 20 players identified by our algorithm and their scores are listed, we can see that the value of the player is not the only thing that affects the score of a player .players like beckham , ronaldinho , kak and keane , whose market value decreased a lot lately because of their age , but they played for a lot of important clubs in their career , have high scores .most players on the top 20 list are still active today and are playing in the best leagues .+ the most perspective players in each age group are listed in .when assessing player s perspectiveness , the most important factor besides his value and the values of his teammates is the player s age .since our network is an undirected network connecting two players , age can not be simply added to the weight equation . including age into weight equation would favour players that have valuable teammates and also players that have younger teammates , which is not desired .therefore , for identifying the most perspective players , the network can stay the same , we just need to interpret results differently .we divide players into different groups based on their age and compare only scores of players in the same groups . 
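As a toy illustration of the age-group comparison just described (invented values, hypothetical column names), scores can simply be grouped into age bins and compared within each bin:

```python
# Toy sketch: compare PageRank scores only within age groups.
import pandas as pd

df = pd.DataFrame({"player": ["p1", "p2", "p3", "p4"],
                   "age": [19, 21, 28, 31],
                   "pagerank": [0.002, 0.004, 0.010, 0.012]})
df["age_group"] = pd.cut(df["age"], bins=[15, 20, 25, 30, 40])
top_per_group = (df.sort_values("pagerank", ascending=False)
                   .groupby("age_group").head(1))
print(top_per_group[["age_group", "player", "pagerank"]])
```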
on average ,older players have higher scores , which is expected as they played more seasons , which results in higher degree .thus , the separation into age groups is beneficial .some of the most perspective players based on our algorithm already play for the best clubs and others , despite their young age , play an important role in their clubs .+ based on the results , we can conclude that pagerank is an appropriate algorithm for determining the best players in our weighted network ..player collaboration network pagerank results [ cols="^,^,^",options="header " , ] from the club transfer network analysis we can show that the best springboard club among the clubs in the top twenty leagues is standard liege .the analysis provides very good results , since the top 15 clubs list is lacking the most valuable and the best clubs in the world .top 15 clubs by betweenness centrality scores and their scores calculated on network using both weight equations are listed in table [ fig : resultsclubs ] .the results also show very slight difference between both proposed weight equations .the top two clubs are the same regardless of the weight and the third and the fourth switch positions if we change the weight calculation equation .all the clubs on the top 15 list are from less valuable leagues and these clubs normally buy younger players that are more affordable and sell the ones whose value rises above a certain level .this makes them a perfect springboard for younger and less experienced players .because of such transfer activity such clubs get high score according to betweenness centrality as they play an important role in the transfer paths from less valuable clubs to the best clubs . * club * & * score by value ( eq . [ eq : clubsweightvalue ] ) * & & * club * & * score by rank ( eq . [ eq : clubsweightranking ] ) * + standard liege & 0.013605 & & standard liege & 0.012823 + aek athens & 0.011217 & & aek athens & 0.012240 + sl benfica & 0.010937 & & sporting cp & 0.010424 + sporting cp & 0.010312 & & sl benfica & 0.010172 + skoda xanthi & 0.009605 & & as monaco & 0.009275 + dinamo bukarest & 0.008743 & & fc porto & 0.008988 + as monaco & 0.008704 & & rubin kazan & 0.008884 + dinamo zagreb & 0.008675 & & cfr cluj & 0.008681 + olympiacos pir .& 0.008553 & & skoda xanthi & 0.008638 + cfr cluj & 0.008542 & & dinamo bukarest & 0.008518 + steaua bucharest & 0.008180 & & olympiacos pir . &0.008397 + udinese calcio & 0.007899 & & rangers fc & 0.008216 + fc porto & 0.007889 & & dinamo zagreb & 0.008170 + celtic fc & 0.007849 & & iraklis thess . &0.007925 + petrolul ploiesti & 0.007794 & & red bull salzburg & 0.007907 +player collaboration network from the past fifteen seasons from the top twenty football leagues consists of over 36 thousand nodes and nearly 1.5 million edges . therefore , time and space consuming algorithms can prove too demanding to run on regular computers .weighted pagerank algorithm however was able to calculate the scores for all the players in a very reasonable time . with the pagerank algorithm and proper edge weight ,we are able to identify the top players from the period of last fifteen seasons .a very important factor in the weight equation is the inflation rate which ensures that older players that were never as valuable as the best players of the last seasons are also present on the top players list . +using the same network , we are also able to identify the most perspective football players by separating their pagerank scores into age groups . 
using this approach, we compare only players of similar age that have played for similar number of seasons .this ensures the same conditions for all the players in a specific age group .results highlight some young players that already play for the best football clubs and some young players from less known clubs , where they play an essential role .+ results from club transfer network analysis are very similar to initial hypothesis .we expect clubs from less valuable leagues to come on top .we are able to identify springboard clubs by using the data about player transfers from the past fifteen seasons by constructing a directed weighted network with adequate weights using the data we have on the club value or the club rankings in the past seasons . with the proposed network, we use a weighted betweenness centrality algorithm to reveal the best springboard clubs in the top football leagues in the world .our algorithm identifies some clubs from belgian , greek and portuguese leagues as the best springboard clubs .
|
we consider all players and clubs in top twenty world football leagues in the last fifteen seasons . the purpose of this paper is to reveal top football players and identify springboard clubs . to do that , we construct two separate weighted networks . player collaboration network consists of players , that are connected to each other if they ever played together at the same club . in directed club transfer network , clubs are connected if players were ever transferred from one club to another . to get meaningful results , we perform different network analysis methods on our networks . our approach based on pagerank reveals _ christiano ronaldo _ as the top player . using a variation of betweenness centrality , we identify _ standard liege _ as the best springboard club . football network , sports networks , network analysis , measures of centrality
|
regularization methods play important roles in many ill - posed inverse problems arising in science and engineering .examples include inverse problems considered in signal processing and image sciences such as image denoising , image impainting , image deconvolution , just to name a few .mathematically , a image restoration problem can be viewed as reconstructing a clean image from a degraded image based on the degradation relationship .it is challenging to reconstruct from as the problem is usually ill - posed due to the highly underdetermined constraints and possible noise .observations of natural image with prior information such as piecewise smoothness , shape edges , textures , repetitive patterns and sparse representations under certain transformations make regularization methods quite effective to handle image processing problems .successful methods include the total variation ( tv ) methods , nonlocal methods and wavelet tight frame methods and many others .moreover , regularization methods can also be considered in problems arising from data science .a typical example is semi - supervised learning , where tasks aim at labeling data from a small amount of labeled training data set .regularization methods such as the harmonic extension method have been considered to this type of ill - posed problem . in this paper, we consider a different regularization , called manifold based low - rank ( mlr ) regularization as a linearization of manifold dimension , which generalizes the global low - rank prior knowledge for linear objects to manifold - region - based locally low - rank for nonlinear objects .the idea of the mlr proposed in this paper is inspired by a recent method called the low - dimensional manifold model ( ldmm ) discussed in . using the image patchesdiscussed in nonlocal methods , the ldmm interprets image patches as a point cloud sampled in a low - dimensional manifold embedded in a high dimensional ambient space , which provides a new way of regularization by minimizing the dimension of the corresponding image patch manifold .this can be explained as a natural extension of the idea of low - rank regularization for linear objects to data with more complicated structures .moreover , the authors in elegantly find that the point - wisely defined manifold dimension can be computed as a dirichlet energy of the coordinate functions on the manifold , whose corresponding boundary value problem can be further solved by a point integral method proposed in .the ldmm performs very well in image inpainting and super - resolution .this model is later considered in collaborative ranking problems .based on weighted graph laplacian ( wgl ) , an improvement of ldmm called ldmm+wgl is proposed more recently in . [ cols="^,^ " , ]in this paper , we propose a manifold based low - rank regularization method for image restoration and semi - supervised learning .the proposed regularization can be viewed as a point - wise linearization of the manifold dimension , which generalize the concept of low - rank regularization for linear objects as a concept of manifold based low - rank for nonlinear objects . 
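to make this locally low-rank idea concrete, here is a minimal numerical sketch, assuming knn patch neighborhoods in euclidean distance and a nuclear-norm penalty on each local patch matrix; the paper's actual mlr regularizer and its optimization are not reproduced here.

```python
# minimal sketch of a "locally low-rank" penalty on image patches; parameters
# (patch size, stride, k) and the euclidean knn construction are assumptions.
import numpy as np

def extract_patches(img, p=8, stride=4):
    h, w = img.shape
    patches = [img[i:i + p, j:j + p].ravel()
               for i in range(0, h - p + 1, stride)
               for j in range(0, w - p + 1, stride)]
    return np.array(patches)                     # shape (n_patches, p * p)

def local_nuclear_norms(patches, k=20):
    """nuclear norm of the matrix formed by each patch's k nearest patches."""
    # dense pairwise distances are fine for a sketch; use a kd-tree for large images
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]          # knn in euclidean distance
    norms = []
    for nbrs in idx:
        local = patches[nbrs] - patches[nbrs].mean(axis=0)   # centered local matrix
        s = np.linalg.svd(local, compute_uv=False)
        norms.append(s.sum())                    # nuclear norm: convex surrogate of local rank
    return np.array(norms)
```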
using the proposed regularization, we investigate new methods for image inpainting, image super-resolution and x-ray ct image reconstruction. we further extend this method to a general data analysis problem, semi-supervised learning. intensive numerical experiments demonstrate that the proposed mlr method is comparable to, or even outperforms, existing wavelet based models and pde based models. several directions will be investigated in our future work. for instance, the current method can be adapted to handle images with noisy input. it is also an important problem to explore a better way to pick the "local regions" or the manifold representation. for example, for semi-supervised learning, the left image in figure [ fig : minstlocalsimilar ] shows that the knn obtained by euclidean distance may still include some ambiguity. in particular, some knns may have a local rank as high as 7 or 8, which reduces the reliability of the local low-rank regularization. therefore, developing a data-driven approach to a non-euclidean geometry for mlr will be a very interesting direction to investigate in our future work. we thank prof. stanley osher, prof. zuoqiang shi and mr. wei zhu for kindly sharing their valuable comments and the codes of both ldmm and ldmm+wgl for comparison.
|
low-rank structures play an important role in recent advances on many problems in image science and data science. as a natural extension of low-rank structures to data with nonlinear structures, the concept of a low-dimensional manifold structure has been considered in many data processing problems. inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of the manifold dimension. this regularization is less restrictive than the global low-rank regularization, and thus enjoys more flexibility in handling data with nonlinear structures. as applications, we apply the proposed regularization to classical inverse problems in image science and data science, including image inpainting, image super-resolution, x-ray computed tomography ( ct ) image reconstruction and semi-supervised learning. we conduct intensive numerical experiments on several image restoration problems and on a semi-supervised learning problem of classifying handwritten digits using the mnist data. our numerical tests demonstrate the effectiveness of the proposed methods and illustrate that the new regularization methods produce outstanding results in comparison with many existing methods.
|
corruption influences important aspects of social and economic life .the level of corruption in a given country is widely believed to be an important factor to consider when projecting economic growth , estimating the effectiveness of the government administration , making decisions for strategic investments , and forming international policies .the relation between corruption level and key parameters of economic performance is largely qualitative .corruption has become increasingly important with the globalization of the international economic and political relations between countries , which has led various governmental and non - governmental organizations to search for adequate measures to quantify levels of corruption .systematic studies of corruption have been hampered because of the complexity and secretive nature of corruption , making it difficult to quantify .there have been concerted efforts to introduce quantitative measures suitable for describing levels of corruption across diverse countries .however , a specific functional dependence between quantitative measures of corruption and economic performance has not been established .previous studies have suggested a negative association between corruption level and country wealth . there is active debate concerning the relation between corruption level and economic growth .some earlier studies suggest that corruption may help the most efficient firms bypass bureaucratic obstacles and rigid laws leading to a positive effect on economic growth , while more recent works do not find a significant negative dependence between corruption and growth .further , studies of net flow of foreign investment report conflicting results .some studies find no significant correlation between inward foreign investment and corruption level in host countries , while others indicate a negative association between corruption and foreign investments .this debate reflects the inherent complexity of the problem as countries in the world vary dramatically in their social and economic development .thus , an open question remains whether there is a general functional relation between corruption level and key aspects of the economic performance of different countries .we develop and test the hypothesis that there may be a power - law dependence between corruption level and economic performance which holds across diverse countries regardless of differences in specific country characteristics such as country wealth ( defined in our paper as gross domestic product per capita ) or foreign direct investment .recent studies show that diverse social and economic systems exhibit scale invariant behavior e.g. , size ranking and growth of firms , universities , urban centers , countries and even people s personal fortunes follow a power law over a broad range of scales .since countries in the world greatly differ in their wealth and foreign investments , we test the possibility that there may be an underlying organization , such that the cross - country relations between corruption level and country wealth , and corruption level and foreign investments exhibit a significant negative correlation characterized by scale - invariant properties over multiple scales , and thus they can be described by power laws .specifically , we test if this scale - invariant behavior remains stable over different time periods , as well as its validity for different subgroups of countries . 
finally , we demonstrate a strong correlation between corruption level and past long - term economic growth .we analyze the corruption perceptions index ( cpi ) introduced by _ transparency international _ , a global civil organization supported by a wide network of government agencies , developmental organizations , foundations , public institutions , the private sector , and individuals .the cpi is a composite index based on independent surveys of business people and on assessments of corruption in different countries provided by more than ten independent institutions around the world , including the _ world economic forum _ , _ united nations economic commission for africa _ , the _ economist intelligence unit _ , the _ international institute for management development _ .the cpi spans 10-year period 1996 - 2005 .the different surveys and assessments use diverse sampling frames and different methodologies . some of the institutions consult a panel of experts to assess the level of corruption , while others , such as the _ international institute for management development _ and the _ political and economic risk consultancy _ , turn to elite businessmen and businesswomen from different industries .further , certain institutions gather information about the perceptions of corruption from _ residents _ with respect to the performance of their home countries , while other institutions survey the perceptions of _ non - residents _ in regard to foreign countries or specifically in regard to neighboring countries .all sources employ a homogeneous definition of corruption as the misuse of public power for private benefit , such as bribing public officials , kickbacks in public procurement , or embezzlement of public funds .each of these sources also assesses the `` extent '' of corruption among public officials and politicians in different countries .transparency international uses non - parametric statistics for standardizing the data and for determining the precision of the scores . while there is a certain subjectivity in people s perceptions of corruption , the large number of independent surveys and assessments based on different methodologies averages out most of the bias .the cpi ranges from 0 ( highly corrupt ) to 10 ( highly transparent ) .we also analyze a different measure of corruption , the control of corruption index ( cci ) provided by the _ world bank _ .the cci ranges from 2.5 to 2.5 , with positive numbers indicating low levels of corruption . as a measure of country wealth , we use the _ gdp _ , defined to be the annual nominal gross domestic product per capita in current prices in u.s .dollars , provided by the _ international monetary fund _ ( imf ) over the 26-year period 1980 - 2005 . as a measure of foreign direct investment we use annual data from the _ bureau of economic analysis _ of the united states ( u.s . ) government , which represents the direct investment received by different countries from the u.s . over the period 2000 - 2004 .these data are appropriate for our study since ( i ) the u.s .has been the dominant source of foreign investment in the past decades and ( ii ) the 1977 foreign corrupt practices act ( fcpa ) holds u.s. companies legally liable for bribing foreign government officials , which makes the u.s . 
a source country which penalizes its multinational companies for corruption practices .to test if there is a common functional dependence between corruption level and country wealth , we plot the cpi versus _ gdp _ for different countries [ fig .[ fig.1](a - e ) ] .we find a positive correlation between cpi and country wealth , which can be well approximated by a power law where , indicating that richer countries are less corrupt .most countries fall close to the power - law fitting line shown in fig .[ fig.1 ] , consistent with specific functional relation between corruption and country wealth even for countries characterized by levels of wealth ranging over a factor of .this finding in eq .( 1 ) indicates that the relative corruption level between two countries should be considered not only in terms of cpi values but also in the context of country wealth .for example , two countries with a large difference in their _ gdp _ on average will not have the same level of corruption , as our results quantify the degree to which poorer countries with lower _ gdp _ have higher levels of corruption . the quantitative relation between cpi and _ gdp _ for all countries in the world represented by the power - law fitting curves in fig .1 indicates where is the `` expected '' level of corruption for a given level of wealth .a country above ( or below ) the fitting line is less ( or more ) corrupt than expected for its level of wealth .for example , comparing the relative corruption level of two countries with similar _ gdp _ such as bulgaria and romania , one can assess that bulgaria is less corrupt than romania [ fig .3 ] . depending whether a specific country is above ( e.g. , bulgaria ) or below ( e.g. , romania ) the power - law fit, one can assess if this country is less ( or more ) corrupt relative to the average level of corruption corresponding to the wealth of this country .moreover , the quantitative dependence we find in eq .( 1 ) allows us to compare the relative levels of corruption between two countries which belong to two different wealth brackets .specifically , two countries with a very different _ gdp _ should not be compared only by the value of their cpi , but also by their relative distances from the power - law fitting line which indicates the expected level of corruption .for example , bulgaria and slovenia differ significantly in their wealth ( slovenia has times higher _ gdp _ ) , but both countries are at equal distances above the fitting line , indicating ( i ) that both countries are less corrupt than the corruption level expected for their corresponding wealth and ( ii ) that the relative level of corruption of slovenia within the group of countries falling in the same _ gdp _ bracket as slovenia is similar to the relative corruption level of bulgaria within the group of countries falling in the same _ gdp _ bracket as bulgaria [ fig .3 ] . to testhow robust is the power - law dependence between corruption and country wealth , we analyze groups containing different numbers of countries , and we find that eq .( 1 ) holds , with similar values of [ fig .[ fig.1](a - e ) ] . 
averaging the power - law exponent for different years and for different number of countrieswe find .27 , where =0.02 is the standard deviation .for the cpi and _ gdp _ data we find an average correlation coefficient of 0.86 .we also note that the inverse relation of _ gdp _ as a function of cpi is characterized by an exponent which is not equal to 1/ as one might expect , since the correlation coefficient of the data fit is less than 1 .next , we analyze data comprising the same set of countries for different years [ fig . 2 ] , and we find that the power - law dependence of eq .( 1 ) remains stable in time over periods shorter than a decade , with similar and slightly decreasing values for [ fig .[ fig.1 ] and fig . 2 ] .similar results we obtain also for the period 1996 - 2000 ( not shown in the figures as available data cover much smaller number of countries for that period ) . given the facts that ( i ) the number of countries we analyze changes from 90 to 153 , and ( ii ) that the time horizon of 5 - 6 years we consider could be sufficient for significant changes in both corruption level and wealth ( e.g. , the case of eastern european countries ) , our finding of a power - law relationship in eq .( 1 ) is consistent with a universal dependence between _ gdp _ and cpi across diverse countries .we note that the power - law relation in eq .( 1 ) holds when _ gdp _ is calculated both as current prices in us dollars [ fig . 1 and fig .2 ] , as well as the value based on purchasing power parity [ fig . 4 ] .further , eq . ( 1 ) implies that lowering the corruption level of a country would lead to an increase in its _ gdp _ and vice versa , for a country with_ gdp _ an increase in cpi of 0.25 units would lead to increase in the _ gdp _ of approximately [ fig .[ fig.1 ] and fig . 2 ] . to confirm that our findings do not depend on the specific choice of the measure of corruption , we repeat our analysis for a different index , the cci . as the cciis defined in the interval [ 2.5 , 2.5 ] we use a linear transformation to obtain the _ adjusted _ cci , , so that both and cpi are defined in the same interval from 0 to 10 .we find that also exhibits a power - law behavior as a function of _ gdp _ with a similar value of the power - law exponent as obtained for cpi [ fig .so , the specific interval in which the corruption index is defined does not affect the nature of our findings .we note that there is no artificially imposed scale on the values of the cpi or cci index for different countries .while the upper and lower bounds for the cpi or cci index are indeed pre - determined , the intrinsic relative relation between the index values for different countries is inherent to the data .there is no logarithmic scale artificially imposed on the index values of each country ( see details on the cpi and cci methodology in ) .the fact that we obtain practically identical results ( power - law dependence with similar values of the exponent ) for two independent indices cpi and cci , which are provided by different institutions and are calculated using different methodologies , indicates that the quantitative relation of eq .( 1 ) is not an artifact of subjective evaluation of corruption . 
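for concreteness, a hedged sketch of the log-log least-squares fit described above is given below; the country data are placeholders that would have to be supplied, and the fitted exponent is simply the slope in log-log coordinates.

```python
# sketch of fitting the power law cpi = a * gdp**mu by least squares in
# log-log space; gdp and cpi are per-country arrays supplied by the user.
import numpy as np

def fit_power_law(gdp, cpi):
    """least-squares fit of cpi = a * gdp**mu in log-log coordinates."""
    x = np.log10(np.asarray(gdp, dtype=float))
    y = np.log10(np.asarray(cpi, dtype=float))
    mu, log_a = np.polyfit(x, y, 1)              # slope = exponent mu
    r = np.corrcoef(x, y)[0, 1]                  # correlation in log-log space
    return mu, 10.0 ** log_a, r

def expected_cpi(gdp, mu, a):
    """countries above (below) this curve are less (more) corrupt than expected."""
    return a * np.asarray(gdp, dtype=float) ** mu
```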
in summary ,our empirical results indicate that the power - law relation between corruption and _ gdp _ across countries does not depend on the specific subset of chosen countries ( provided they span a broad range of _ gdp _ ) , does not depend on the specific measure of corruption ( cpi and cci ) , and does not change significantly over time horizons shorter than a decade .we next rank countries by their _ gdp _ and by their cpi .we find that _ gdp _ versus rank exhibits an exponential behavior for countries with rank larger than 30 , and a pronounced crossover to a power - law behavior for the wealthiest 30 countries [ fig .[ fig.2 ] ] .we further find that the shape of _ gdp _ versus rank curve remains unchanged for different years , and that increasing the number of countries we consider only extends the range of the exponential tail .our findings for the shape of the _ gdp _ versus rank curve differ from earlier reports .we find that the cpi versus rank curve exhibits a behavior similarly to that of the _ gdp _ versus rank curve , with a crossover from a power law to an exponential tail for countries with rank larger than 30 .the shape of the cpi versus rank curve also remains unchanged when we repeat the analysis for different years [ fig .[ fig.2b ] ] .we find that the ranking of countries based on _ gdp _ practically matches the ranking based on the cpi index .this is evidence of a strong and positive correlation between the ranking of wealth and the ranking of corruption . since the _ gdp _ rank is an unambiguous result of an _ objective _ quantitative measure , the evidence of a strong correlation of the cpi rank with the _ gdp _ rank we observe in fig . [ fig.2 ] and fig .[ fig.2b ] indicates that the cpi values are not _ subjective _ , and that our finding of a power - law relation between cpi and _ gdp _ in fig . 1 and fig .2 is not an artifact of an arbitrary scale imposed on the cpi or on the cci .further , we compare the values of the decay parameters and characterizing the exponential behavior of the cpi and _ gdp _ rank curves , and where and index the rank order of cpi and _ gdp _ respectively .we find that for each year the ratio / reproduces the value of the power - law exponent defined in eq .( 1 ) for the same year an insightful result since it would hold only when is similar to . indeed , only when we obtain from eq .( 2 ) and eq .( 3 ) the relation between log(cpi ) and log(_gdp _ ) , combining eq .( 1 ) and eq . ( 4 ), we see that thus , for each year the power - law dependence between cpi and _ gdp _ in eq .( 1 ) is directly related to the exponential behavior of the cpi and _ gdp _ versus rank [ eq .( 2 ) and eq . ( 3 ) ] .we note that this relation does not hold for the top 30 wealthiest countries , for which there is an enhanced economic interaction in a globalization sense , perhaps leading to similarities in development patterns and overall decrease in the _ gdp _ growth difference we next investigate how the corruption level relates to foreign direct investment .we consider the amount of inward investments received by different countries from the united states ( u.s . 
) .investments originating from the u.s .are sensitive to corruption , since u.s .legislation holds american investors in other countries liable for corruption practices .we find a strong dependence of the amount of u.s .direct investments in a given country on the corruption level in that country [ fig .[ fig.3 ] ] .specifically , we find that the functional dependence between u.s .direct investments per capita , _i _ , and the corruption levels across countries exhibits scale - invariant behavior characterized by a power law ranging over at least a factor of [ fig .[ fig.3 ] ] we find that less corrupt countries have received more u.s .investment per capita , and that eq .( 6 ) also holds for different years .in particular , we find that groups of countries from different continents , which differ in both _ gdp _ and average cpi , are characterized by different values of [ fig .[ fig.3 ] ] .we obtain similar results when repeating our analysis for the cci , suggesting that the power - law relation in eq .( 6 ) between corruption level and foreign direct investment per capita does not depend on the specific measure of corruption used .we also note that the 1977 foreign corrupt practices act only precludes american firms from entering corruption deals , but does not dictate in which country and how much money the american firms should invest . therefore , the statistical regularities we find in fig. [ fig.3 ] can not arise from legislatory measures against foreign corruption .finally , we investigate whether there is a relation between corruption level and long - term growth rate .since the cpi reflects the quality of governing and administration in a given country , which traditionally requires considerable time to change , we hypothesize that there may be relation between the current corruption level of a country and its growth rate over a wide range of time horizons . to test this hypothesiswe estimate the long - term growth rate for each country as the slope of the least square fit to the plot of log(_gdp _ ) versus year over the past several decades , where the _ gdp _ is taken as constant prices in national currency [ fig .we divide all countries into four groups according to the world bank classification based on _ gdp _ .we find a strong positive dependence between country group average of cpi and the group average long - term growth rate , showing that less corrupt countries exhibit significant economic growth while more corrupt countries display insignificant growth rates ( or even display negative growth rates ) [ fig .[ fig.4 ] ] . repeating our analysis for different time horizons ( 1990 - 2005 ; 1980 - 2005 )we find similar relations between the cpi and the long - term growth , indicating a link between corruption and economic growth . in summary, the functional relations we report here can have implications when determining the relative level of corruption between countries , and for quantifying the impact of corruption when planning foreign investments and economic growth .these quantitative relations may further facilitate current studies on spread of corruption across social networks , the emergence of endogenous transitions from one level of corruption to another through cascades of agent - based micro - level interactions , as well as when considering corruption in the context of certain cultural norms .acknowledgments : we thank f. liljeros for valuable suggestions and discussions , and we thank d. schmitt and f. pammolli for helpful comments . 
we also thank merck foundation , nsf , and nih for financial support .tanzi , v. davoodi , h. r. ( 2000 ) corruption , growth , and public finance ._ working paper of the international monetary fund _ , fiscal affairs department .leff , n. h. ( 1964 ) economic development through bureaucratic corruption ._ american behavioral scientist _ 82 : 337 - 341 .huntington , s. p. ( 1968 ) _ political order in changing societies _( yale university press , new haven ) .wheeler , d. mody , a. ( 1992 ) international investment location decisions : the case of u.s . firms . _ journal of international economics _ 33 : 57 - 76 .hines , j. ( 1995 ) forbidden payment : foreign bribery and american business after 1977 . _ nber working paper 5266_. wei , s. j. ( 2000 ) how taxing is corruption on international investors . _the review of economics and statistics _ 82 : 1 - 11 .kaufmann , d. _ et al_. ( 2003 ) governance matters iii : governance indicators for 1996 - 2002 ._ world bank policy research working paper _ , 3106 .knack , s. keefer , p. ( 1995 )institutions and economic performance : cross country tests using alternative institutional measures . _economics and politics _ 7 : 207 - 27 .treisman , d .( 2000 ) _ journal of public economics _ 76 : 399 - 457. international country risk guide s corruption indicator published by political risk services .data are available at . the corruption perceptions index ( cpi )is published by transparency international .data are available at . the control of corruption index ( cci ) published by the world bank .data are available at .bardhan , p. ( 1997 ) _ journal of economic literature _ 35 : 1320 - 1346 .lambsdorff , j. g. ( 1999 ) corruption in empirical research - a review ._ transparency international working paper_. schneider , f. enste , d.h .( 2000 ) _ journal of economic literature _ 38 : 77 - 114 .makse , h. a. _ et al_. ( 1995 ) modelling urban growth patterns ._ nature _ 377 : 608 - 612 .axtell , r. l. ( 2001 ) zipf distribution of u.s .firm sizes ._ science _ 293 : 1818 - 1820 .stanley , m. h. r. _ et al_. ( 1996 ) scaling behavior in the growth of companies . _nature _ 379 : 804 - 806 .lee , y. _ et al_. ( 1998 ) universal features in the growth dynamics of complex organizations ._ 81 : 3275 - 3278 .fu , d. _ et al_. ( 2005 ) the growth of business firms : theoretical framework and empirical evidence ._ 102 : 18801 - 18806 .plerou , v. _ et al_. ( 1999 ) similarities between the growth dynamics of university research and of competitive economic activities ._ nature _ 400 : 433 - 437 .ivanov , p. ch ._ et al_. ( 2004 ) common scaling patterns in intertrade times of us stocks ._ physical review e _69(5 ) : 056107 .di guilmi , c. _ et al_. ( 2003 ) power law scaling in the world income distribution ._ economics bulletin _ 15 : 1 - 7 .iwahashi , r. machikita , t. ( 2004 ) a new empirical regularity in world income distribution dynamics , 1960 - 2001 ._ economics bulletin _ 6 : 1 - 15 .miskiewicz j , ausloos m. correlations between the most developed ( g7 ) countries .a moving average window size optimisation .( 2005 ) _ acta physica polonica b _ 36 ( 8) : 2477 - 2486 .miskiewicz j , ausloos m. an attempt to observe economy globalization : the cross correlation distance evolution of the top 19 gdp s . (2006 ) _ international journal of modern physics c _ 17 ( 3 ) : 317 - 331 .blanchard , ph ._ et al_. ( 2005 ) the epidemics of corruption . _ arxiv.org / abs / physics/0505031_. hammond , r. 
( 2000 ) endogenous transition dynamics in corruption : an agent - based computer model ._ csed working paper no .19_. situngkir , h. ( 2004 ) money - scape : a generic agent - based model of corruption ._ computational economics_. fisman , r. miguel , e. ( 2006 ) cultures of corruption : evidence from diplomatic parking tickets ._ nber working paper no .
|
we report quantitative relations between corruption level and economic factors, such as country wealth and foreign investment per capita, which are characterized by a power law spanning multiple scales of wealth and investment per capita. these relations hold for diverse countries, and also remain stable over different time periods. we also observe a negative correlation between the level of corruption and long-term economic growth. we find similar results for two independent indices of corruption, suggesting that the relation between corruption and wealth does not depend on the specific measure of corruption. the functional relations we report have implications when assessing the relative level of corruption for two countries with comparable wealth, and for quantifying the impact of corruption on economic growth and foreign investments.
|
in view of the rapid development of the experimental realization of quantum information processing ( qip ) by using different physical systems , it is an urgent obligation to find a proper strategy for detection of entanglement .one of the most commonly used systems in the study of physical realizations of qip is an ensemble system and the best - known technology therein is nuclear magnetic resonance ( nmr ) .the ensemble system of nmr has been employed for implementation of even relatively complicated quantum algorithms . however , nmr has been facing difficulty with regard to the existence of entanglement .the complication actually arises because of somewhat confusing ensemble behavior of the nmr system .the confusion manifests itself when macroscopic quantities , such as results of nmr measurement involving an average over a large number of molecules , are used to detect microscopic properties , such as entanglement between pairs of nuclear spins inside each molecule .in fact , it has been shown that , for some particular case of nmr implementation of quantum nonlocal algorithms , the apparent nonlocal behavior of the highly mixed states in nmr is due to a large number of molecules involved in the ensemble system .hence , highly mixed states of nmr are separable and can not be used for immaculate implementation of quantum nonlocal algorithms for which entanglement is believed to play an essential prerequisite role . on the other hand , ensemble quantum computing( qc ) pertains some inevitable advantages .it is particularly workable since spin manipulation is performed easily by applying corresponding pulses .in addition , ensemble qc , such as nmr , is supported by a long term research in the area of spectroscopy .thus , it would be unfair and also inefficient to totally ignore ensemble qc .we have been involved in realization of qip by means of electron nuclear double resonance spectroscopy ( endor ) .although this system should be taken as an ensemble system at the moment , it has been profoundly evaluated for more elaborated nonlocal qip through experimental studies .however , after applying entangling operations in an experiment , it might still be premature to claim that the state is properly entangled enough for implementation of nonlocal qip .at least some qualitative detection of general multipartite entanglement for a particular system of interest should be examined in advance .detection of a general multipartite entanglement is one of the most challenging problems for an experimental study in qip .there are several approaches in this context .one may first employ a full state tomography with which the complete density matrix would be obtained .then , direct application of an existing entanglement measure would be evaluated on the quantum state in order to extract information about the entanglement of the state .this is , however , a bit too general to be efficient .the density matrix of the quantum state includes far more information than necessary for an entanglement estimation only .furthermore , it is difficult to find a sufficient condition of entanglement for a given state in general if the dimension of the state is neither nor for which the peres - horodecki criterion would be applicable .detection of entanglement of a quantum state through violation of the bell inequalities also should not be taken to be perfect since there are entangled states that do not violate any known bell inequalities .still one may try the existing approaches that work for a state that is 
totally unknown in advance .however , this may not be the most proper choice for our system of interest since as long as we are working in an experiment , we have definitely some knowledge about the state , i.e. , through the prepared initial state or applied operations .if this is the case , it would be more appropriate to define an entanglement detector measure that is easily workable with less experimental effort by taking advantage of the available information on the state .we have studied the concept of entanglement witness ( ew ) with the motivation to introduce an entanglement detection applicable for a particular system of interest .the ew is an observable which has non - negative expectation values for all separable states .therefore , detection of a negative value indicates the entanglement of the state .in addition to the very fast development in the theory of ews for different classes of states , in experiments also detection of entanglement by the use of witness operators has attracted special attention . for physical systems in thermodynamical limits , witness operatorsare developed .in addition , ews are generated for detecting entanglement of mixed states that are close to a given pure state .ews are used for characterizing different classes of a multipartite quantum state .after determination of the ew for a particular expected state in an experiment , the important task is to decompose it into local operators that are easily measurable in the given physical system .this approach for detection of entanglement is operationally possible and , for most cases , is a simple method . in refs . detection of multipartite entanglement with few local measurements is studied . throughout the study on this issue , it has been more appreciated to operationally simplify the entanglement detection process by modifying the required observables for the particular working system and/or decreasing the number of local measurements . 
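as an aside, the decomposition of a witness into local operators mentioned above amounts to expanding it in the tensor-product pauli basis; the short sketch below does this numerically for the commonly used projector witness w = i/2 - |ghz><ghz| on three qubits (the witnesses actually used in the text appear later, so this operator is only an example, not the paper's construction).

```python
# hedged sketch: pauli-basis expansion of a three-qubit witness operator,
# giving the local measurement terms; the witness chosen here is an example.
import itertools
import numpy as np

paulis = {"i": np.eye(2), "x": np.array([[0, 1], [1, 0]]),
          "y": np.array([[0, -1j], [1j, 0]]), "z": np.array([[1, 0], [0, -1]])}

def pauli_decomposition(w, n):
    """returns {label: coefficient} such that w = sum_label coeff * sigma_label."""
    terms = {}
    for label in itertools.product("ixyz", repeat=n):
        op = paulis[label[0]]
        for s in label[1:]:
            op = np.kron(op, paulis[s])
        c = np.trace(w @ op).real / 2 ** n       # tr(p p') = 2^n * delta
        if abs(c) > 1e-12:
            terms["".join(label)] = c
    return terms

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
w_ghz = 0.5 * np.eye(8) - np.outer(ghz, ghz)     # example ghz-class witness
print(pauli_decomposition(w_ghz, 3))             # a handful of local pauli terms
```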
generally speaking ,previously introduced methods for detection of entanglement by the use of ews require several projective measurements on the copies of a state in order to extract the outcome .however , it is somehow operationally difficult to prepare several copies of a quantum state to be measured for detection of entanglement .therefore , it is more advantageous if an ew works just with a single run experiment .also , the detection process would be better to be specially modified for the particular working system .for instance , if the physical system of interest is an ensemble system , then the available measurements are ensemble average measurements .we work with ensemble systems .therefore , ensemble average measurements should be employed here , in contrast with the projective measurements widely used in the previous works .therefore , in this work , we propose a proper single - experiment - detectable ( sed ) ew for the ensemble system qc .the most significant result of our work is that our schemes require only a single - run experiment to detect entanglement by using nondestructive ensemble average measurements , which allow us to measure several non - commuting operators simultaneously .a well known example of nondestructive measurement is the free induction decay measurement that is frequently used in nmr quantum computing .this paper is organized as follows .first in the following section we will briefly summarize the concept of the ew and will give a short review particularly for multipartite ews .then , in the third section we will introduce and prove a method with which the conventional ew can be transformed into a collection of separate but simultaneous measurements of individual local systems of the ensemble qc .the analysis of this section is based on the assumption that the density matrix is diagonal after application of the disentangling operations introduced there . in the subsequent section , a different sed ew , which works without any assumption on the density matrix ,is introduced . as a drawback, however , we have to introduce an ancillary qubit . in sec .[ five ] we will extend discussions on our scheme , study the complexity of the corresponding quantum circuits for sed ew , and give some remarks on the behavior of this scheme under generally existing noise in the system .a density matrix is entangled if and only if there exists a hermitian operator , called an entanglement witness ( ew ) , such that where denotes the set of separable states .therefore , it is concluded that a state is entangled if some negative value is obtained in measurement of an ew .there are several methods to construct an ew . for the casein which a density matrix has a negative eigenvalue when partially transposed , the construction of the ew is very simple .the partially transposed projector onto the eigenvector corresponding to the negative eigenvalue of the partial transpose of the state is an ew . one important and notable point about ews appears when dealing with the multipartite case .ews can be used to detect different kinds of multipartite entanglement by defining the space in eq .( [ define ] ) to be the set of states that does not have a special kind of entanglement to be detected .let us give an example . 
while all the entanglements for two qubits are actually equivalent to each other , for three qubits there are two classes of genuine pure tripartite entangled states .the entangled states from different classes can not be transformed into each other by local operations and classical communications ( locc ) .one class is the greenberger - horne - zeilinger ( ghz ) class that includes entangled states locc - equivalent to this classification can be extended to more general tripartite mixed states .we remind the reader that a tripartite ( and generally any multipartite ) state is separable if it can be written as a convex combination of fully separable states .however , a state may not be fully separable but biseparable , i.e. , it can be written as a convex combination of biseparable pure states . if a tripartite state is neither fully separable nor biseparable then it is entangled , in either the ghz class or the w class . corresponding to each class of entangled states ,there are ews already known .needless to say , these entanglement witnesses are not sufficient to decide if the system possesses a genuine multipartite entanglement .for instance , for the ghz state , the ew is while for the w state , it is where is the identity operator .the ew that detects a genuine tripartite entanglement of a pure state , and of states close to , is given by where and denotes the set of biseparable states .then for all biseparable states and . the coefficient in eq .( [ alpha ] ) is determined by using the schmidt decomposition . for an ensemble system, it is desirable to find an observable that can be performed by a small number of experiments to satisfy operational requirements .any ew that is defined for the state , which is the most expected state after the applied operations and is close to the experimentally realized state , should be decomposed into a linear combination of individual polarization operators , in order to make it locally measurable . in this paper, we introduce a new method with which an ew for multipartite states can be measured just with a single - run experiment .for starting up an experiment , the state is supposed to be initialized to a simple fiducial state , such as , for an -qubit system . from now on we will drop the subscripts to simplify notation , unless otherwise stated .however , in an ensemble qc , the initial state is prepared in the form of a pseudopure state as follows where characterizes the fraction of the state .it is important to note here that the confusion regarding the concept of entanglement in an ensemble system is intrinsically apart from the concept of pseudopure state as an initial state .a pseudopure state is used for making the required input state for qc .improvement of experimental conditions above some definite threshold is required to realize an entangled state experimentally .in other words , the pseudopure state still can be used for realization of a genuine entangled state if the experimentally required conditions are all satisfied . here, we do not enter into the discussion of the very large required number of steps for making a pseudopure state as we suppose that , anyhow , we are given some prepared input state for which the status of entanglement should be studied . 
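to make the pseudopure-state discussion concrete, the short numerical check below evaluates a projector-type ghz witness on a pseudopure ghz state as a function of the fraction; the normalization w = i/2 - |ghz><ghz| is an assumption of the sketch and may differ from the witness used in the text, but it illustrates why a rather large fraction is needed before the state is certified as entangled.

```python
# hedged numerical check (witness normalization w = i/2 - |ghz><ghz| assumed):
# the witness expectation on rho(e) = (1 - e) i/2^n + e |ghz><ghz| only turns
# negative above a fairly large pseudopure fraction e.
import numpy as np

n = 3
dim = 2 ** n
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
proj = np.outer(ghz, ghz)
witness = 0.5 * np.eye(dim) - proj

for e in np.linspace(0.0, 1.0, 11):
    rho = (1 - e) * np.eye(dim) / dim + e * proj   # pseudopure ghz state
    print(f"epsilon = {e:.1f}   tr(w rho) = {np.trace(witness @ rho).real:+.3f}")
# the sign change near e = 3/7 ~ 0.43 is the detection threshold for this witness
```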
in order to produce a particular entangled state ,the corresponding entangling operation is applied on a state to yield the corresponding pseudopure state is note that , even though the entangling operation is applied , may or may not be entangled , and our task is to detect entanglement of in order to examine whether is applicable for some quantum nonlocal algorithm , for example .the corresponding conventional ew , eq .( [ ew ] ) , for is where is determined properly so that we do not get a negative value for separable states . the entanglement witness detects as entangled if where for instance , if a state is the experimentally achieved pseudopure state for the state eq .( [ ghz ] ) , then is .this large value of is in accordance with the results of studies on entanglement of pseudopure states which are indeed mixed states .one may still work with entanglement for an ensemble system without being engaged in the concept of a pseudopure state .however , if the state should be used for a quantum computation it is a formidable task to prepare a pseudopure state that works exactly as a pure state under unitary operations for qc . generally speaking ,the expectation value of the observable is given after several measurements on copies of state . if then is entangled . in this work ,we introduce a strategy with which the ew can be detected in a single run - measurement .the most usual and convenient measurement in an ensemble qip is the spin magnetization of the spin .then , we find the proper -partite ew , which we call , that satisfies the following equation and is detectable in a single run measurement as long as is written as the unitary operator should be appropriately defined in addition to the coefficients . it should be noted that eq .( [ equality ] ) is state dependent .now we will show that there exists some unitary operator in addition to the set of coefficients such that eq .( [ equality ] ) is satisfied .we also give an explicit example of the solution .the unitary operator includes the inverse entangling operation and that is introduced for eq .( [ equality ] ) to be satisfied as in fig .[ figone ] .it is possible to use a unitary transformation and individual polarizations of output qubits to find the value of .it is assumed that is diagonal .see the text for details . ]suppose , the state after applying the inverse entangling operation , is a diagonal matrix with a possible classical correlation .although this assumption is often satisfied , it can be dropped for a general proof if an ancillary qubit is added to the quantum circuit . indeed , for this case the only required measurement would be on the spin magnetization of the ancillary qubit .this is discussed in the next section . in the present case ,out task is to find some proper unitary transformation and coefficients such that eq .( [ equality ] ) holds under the condition that is diagonal .we first prove by induction that eq .( [ equality ] ) is satisfied for any and later give an explicit example of tripartite states . by setting and , we find for the left and right - hand sides ( lhs and rhs ) of eq .( [ equality ] ) .\end{aligned}\ ] ] consider the case where and let us denote the matrix by . by inspecting eqs .( [ lhs ] ) and ( [ rhs ] ) , we immediately notice that there is a solution in which the element of is while all the other diagonal elements vanish .note that the off - diagonal elements are arbitrary thanks to the assumed diagonal form of .typically , we can take , , and then , as promised . 
therefore , the equality eq .( [ equality ] ) is satisfied for the case .we will take advantage of this typical and the corresponding parameters in the following proof .now we use mathematical induction .suppose that transforms to .then , we show that with a class of parameters , the unitary transformation maps to .the unitary operator is a permutation operation that comprises transpositions : , , , .this set of transpositions may be written in terms of ket vectors in the binary system : , , , , or equivalently qubit .we also defined .this unitary transformation is easy to implement . by noting the equation we find that a single hadamard gate acting on the qubit and a single c gate with zero - conditional - control qubits and the target qubit do the job .now we give the details of the proof .suppose that for qubits we have the unitary operator and coefficients and that satisfy eq .( [ equality ] ) so that here also the off - diagonal terms are not important .then , for qubits , we have the equation \otimes{v'}_n^\dagger\nonumber\\ & = & \frac{1}{2 } \left(\begin{array}{cc } a_n&0\\ 0&a_n \end{array}\right)\\ & { } & + a_{n+1}\ ; { \rm diag}(1_1,\ldots,1_{2^n},-1_{2^n+1},\ldots,-1_{2^{n+1}}).\notag\end{aligned}\ ] ] this should be transformed to the form : in order for eq .( [ equality ] ) to be satisfied . to this end, we apply some unitary transformations to .first , we use the permutation .then we have it should be noted that the off - diagonal terms of eq .( [ bn1 ] ) have been also transformed under the application of . under this transformation ,the following permutations of the matrix elements , in decimal notation , take place . for and , for , the off - diagonal elements of the diagonal blocks of are replaced with elements that are .now , the diagonal blocks in are , from the upper left to the lower right. then we set and apply to .all the diagonal blocks but the first one transform to since .then we obtain therefore , the unitary transformation and coefficients , , and satisfy the equality eq .( [ equality ] ) for all under the imposed condition .the following example with will clarify the above proof .for , we have furthermore , by applying the permutation operator , we obtain this permutation is realized by two transpositions and with respect to the row labels ( numbered from 1 to 8) , and the same transpositions with respect to the column labels . with ket and bra labels ,these are transpositions , , , and .therefore , the off - diagonal terms , that is the second term in the right - hand side of eq .( [ foradd ] ) , are written as the matrix note that all the off - diagonal elements of the diagonal blocks of this matrix have disappeared .adding up the first and third terms of the right - hand side of eq .( [ foradd ] ) and substituting , all the diagonal blocks are mapped to , except the first diagonal block that remains as .then we use the block - diagonal unitary transformation where is the hadamard operation .then this leads to thus we found that the unitary transformation satisfies .this shows that the equality eq .( [ equality ] ) holds for .in this section we will show how to detect multipartite entanglement for a general case , without imposing the condition that the state density matrix be diagonal after the disentangling operation .for this idea , we use a single uninitialized ancillary qubit . 
consider a single ancillary qubit initially in a thermal equilibrium state : }_{\rm in } = p{|0\rangle}{\langle 0|}+(1-p){|1\rangle}{\langle 1|}^n^n ] of the reduced density operator of the ancillary qubit before the measurement is \notag\\ & { } & + [ 1-\tilde p(0\ldots0)][p{|0\rangle}{\langle 0|}+(1-p){|1\rangle}{\langle 1|}]\nonumber\\ & { } & \hspace{-0.8cm}=[\tilde p(0\ldots0)(1-p)+p(1-\tilde p(0\ldots0))]{|0\rangle}{\langle 0|}\notag\\ & { } & + [ \tildep(0\ldots0)p+(1-p)(1-\tilde p(0\ldots0))]{|1\rangle}{\langle 1|},\end{aligned}\ ] ] where use has been made of the identity .thus we have }_{\rm out}z=(1 - 2p)[2\tilde p(0\ldots0)-1].\ ] ] this leads to }_{\rm out}z.\ ] ] consequently , }_{\rm out}z.\ ] ] thus , the value of can be found by using the initial polarization }_{\rm in}z=2p-1 ] of the ancillary qubit . extending this method ,it is easy to construct a quantum circuit of concatenated ews to discriminate different types of multipartite entanglement from each other .suppose we want to measure types of multipartite entanglement with a set of ews , where .then we can test all of using measurements as shown in fig .[ figancillawmconcat ] .for example , tripartite entanglement can be differently written in the form of the ghz state or the w state . for a general tripartite state can be checked by applying disentangling operations for the ghz state ( ) and the w state ( ) .this is shown in fig .[ figancillawmconcat ] if substitutions are made as and and .the entanglement of the state would be detected by measurements of the ancillary qubit . in order to realize the quantum circuit fig .[ figancillawmconcat ] with a system such as nmr , one needs to make several free - induction ( nondecay ) measurements in a single run of a nmr experiment . in a conventional nmr experiment , however , generally one free - induction decay ( fid ) measurement is made at a particular point where the experiment is ceased .indeed , there have been experiments involving multiple fid measurements , such as the cory-48 pulse sequence .although it is not of particular interest to nmr researchers , in principle it is possible to take an instant free - induction measurement .one possible way is to apply a pulse and take signals of the precession for some microseconds duration ( that is , small enough to keep coherence ) , and then apply a pulse .thus , it is a realistic idea to have multiple free - induction measurements in a single run of an experiment .the number of two - qubit quantum gates to compose is .this is clear from the quantum circuit of in fig .[ figvp ] .let us write the number of basic gates to compose by .then , from which we obtain .here we used the fact that an in - place c gate ( is a unitary matrix ) is composed of two - qubit gates ( see , _ e.g. _ , ref . , p.184 ) .quantum circuit to compose . ]if we can use a highly selective pulse , the gate c may be performed in a single step , although this usually takes a long time of order .in addition to the circuit complexity to compose , the circuit complexity for the inverse of an entangling operation should be considered as well . usually , this circuit complexity is less than .for example , a quantum circuit to generate a ghz - like state from some pure initial state can be composed of several not gates , one hadamard gate , and controled - not ( cnot ) gates .consequently the total circuit complexity is usually . in casean ancillary qubit is employed , the dominant circuit complexity is due to the c gate . 
on the assumption that the internal circuit of disentangling operations ( here , may be ghz , w , etc . )have circuit complexities on the order of , the total circuit complexity of the quantum circuit of fig .[ figancillawmconcat ] is .although the proposed methods enable a single experiment to detect entanglement without copies of states , the size of quantum circuits used in the measurement process is considerably larger than that for usual entangling operations .thus noise ( namely , probabilistic errors ) in quantum gates can skew the result of entanglement detection with a higher probability than conventional entanglement detection using multiple copies of a state , assuming that error during preparation of copies of a state is negligible . in a simple model , we assume that a quantum gate acting on the target block of qubits suffers from a noise such that a desired unitary transformation takes place with success probability ; otherwise , a reduced density operator acting on becomes a maximally mixed state with failure probability .the superoperator of this noise is }/2^{\mathrm{len}(\mathbf{t})},\end{aligned}\ ] ] where is the number of qubits in ; }/2^{\mathrm{len}(\mathbf{t})} ] .we set the success probability as for single - qubit gates and for cnot gates .noise is assumed to exist in individual quantum gates including the entangling operation .we investigate the output in eq .( [ equality ] ) under the above noise .we choose the conventional entanglement witness for the class of the ghz state given in eq .( [ convewghz ] ) and decomposition into sed ew using the method that we have introduced in sec . 3 .numerical results of outputs of entanglement witnesses are plotted against and in fig .[ ewcomparison ] .as illustrated in the figure , positive values are returned in the range of in which the noise - free entanglement witness ( i.e. , the case of ) returns negative values .a sed ew is more fragile against noise than a conventional one in the sense that the range of in which negative values are returned is small .nevertheless , it is important that we never obtain a negative value for a separable state even under noise in this example . for a general case ,a mathematical proof for returning non - negative values for all separable states is not easy because it is strongly dependent on the structure of the quantum circuit that has been used for the scheme introduced in this paper .one way to improve the robustness is to increase the value of .recently , a high gate fidelity was reported by using a classical numerical optimization in nmr quantum computing ( _ e.g. _ ref . ) and this technique may be applicable for this purpose. this technique can also reduce the number of pulses needed to implement a large quantum gate .thus it seems a possible way to make proposed entanglement detection methods practical in the future .we proposed two schemes to reconstruct an entanglement witness into nonlocal operations and local measurements so that a single experiment without copies of a state is sufficient . in one scheme ,an ancillary qubit is not required but the quantum state must satisfy some condition , while an uninitialized ancillary qubit is required in the other scheme where no condition is imposed on the quantum state .computational complexities and noise behavior have been discussed .we would like to thank masato koashi for pointing out an error in the description of concatenated ews in the draft and kazuyuki takeda for discussions .r.r . 
is grateful to vlatko vedral for helpful discussions .is supported by the jsps .m.n . would like to thank mext for partial support ( grant no .13135215 ) .is supported by crest of japan science and technology agency .
|
in this paper we provide an operational method to detect multipartite entanglement in ensemble-based quantum computing. this method is based on the concept of the entanglement witness. we decompose the entanglement witness for each class of multipartite entanglement into nonlocal operations in addition to local measurements. individual single-qubit measurements are performed simultaneously; hence complete detection of entanglement is performed in a single-run experiment. in this sense, our scheme is superior to the generally used entanglement witnesses that require a number of experiments and the preparation of copies of a quantum state for the detection of entanglement.
|
nanofluidics refers to the study of the transport of ions and/or molecules in confined solutions as well as fluid flow through or past structures with one or more characteristic nanometer dimensions .the dramatic advances in microfluidics in the 1990s and the introduction of nanoscience , nanotechnology and atomic fabrication in recent years have given its own name to nanofluidics .nanofluidic systems have been extensively exploited for molecule separation and detection , nanosensing , elucidation of complex fluid behavior and for the discovery of new physical phenomena that are not observed or less influential in macrofluidic or microfluidic systems .some of such phenomena include double - layer overlap , ion permitivity , diffusion , ion - current rectification , surface charge effect and entropic forces .one major feature of a nanofluidic system is its structural characteristic .nanofluidic structures can be classified into nanopores and nanochannels and , in fact , these two terms are exchangeable in many cases .a nanopore has comparatively short length formed perpendicularly through various materials , such as a bipore consisting of proteins , i.e. , -hemolysin and a solid - state pore .an example of solid - state pore is a set of nanopores in a silicon nitride membrane which enables the detection of folding behaviors of a single double - stranded dna . on the other hand, a nanochannel has relatively larger dimensions of depth and width , usually fabricated in a planar format , and is often equipped with other sophisticated devices to control or influence the transport inside the channel .for instance , perry _ et al . _ demonstrates the rectifying effect of a funnel - shape nanochannel based on different movements of counterions at its tip and base .a nano - scaled channel usually has either a cylindrical or a conical geometry . in a cylindrical channel ,the flow direction does not influence on current , but surface charges and applied external voltage alter the flux of ions with opposite sign charges .however , the difference in the size of pores in a conical channel brings different ionic conductance patterns depending on the flow direction .the other major feature of a nanofluidic system is its interactions .it is the interaction at nanoscale that distinguishes a nanofluidic system from an ordinary fluid system .certainly , most interactions are directly inherited from the chemical and physical properties of the nanostructure , such as the geometric confinement , steric effect , polarization and charge .some other interactions are controlled by flow conditions , i.e. , ion composition and concentration , and applied external fields .therefore , the interactions of a nanofluidic system is determined by its structure and flow conditions .the function of a nanofluidic system is in turn determined by all the interactions .usually , most nanofluidic systems do not involve any chemical reactions . in this case, steric effects , van der waals interactions and electrostatic interactions are pivoting factors .therefore , in nanofluidic systems , microscopic interactions dominate the flow behavior , while in macroscopic flows and some microfluidics , continuum fluid mechanics governs and microscopic effects are often negligible .typically , microscopic and macroscopic behaviors co - exist in a microfluidic system .characteristic length scales , such as reynolds number , biot number and nusselt number , are important to the macroscopic fluid flows . 
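as a concrete example of such macroscopic dimensionless groups (stated here in its standard textbook form rather than taken from the original text), the reynolds number compares inertial to viscous forces,
\[
\mathrm{Re} \;=\; \frac{\rho\,u\,L}{\mu},
\]
where \(\rho\) is the fluid density, \(u\) a characteristic velocity, \(L\) a characteristic length and \(\mu\) the dynamic viscosity; in a nanochannel \(L\) is so small that \(\mathrm{Re}\ll 1\), so inertia is negligible and such continuum numbers lose their dominant role.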
for most nanofluidic systems ,one of most important characteristic length scales is the debye length , where is the dielectric constant of the solvent , is the permittivity of vacuum , is the boltzmann constant , is the absolute temperature , and and are , respectively , the bulk ion concentration and the charge of ion species .the debye length describes the thickness ( or , precisely , reduction ) of electrical double layer ( edl ) .essentially , ionic fluid behaves like a microscopic flow within the edl region , while acts as a macroscopic flow far beyond the debye length . by the gouy - chapman - stern model , the edlis divided into three parts : the inner helmholtz plane , outer helmholtz plane and diffuse layer .while the inner helmholtz plane consists of non - hydrated coions and counterions that are attached to the channel surface , the outer helmholtz plane contains hydrated or partially hydrated counterions .moreover , the part between the inner and outer helmholtz planes is called the stern layer . notethat the edl applies not only to the layer near the nanochannel , but also the layer around a charged biomolecule in the flow .consequently , many microfluidic devices with quite large channel dimensions exhibit microscopic flow characteristic when the fluid consists of large macromolecules and solvent . the possible deformation , aggregation , folding and unfolding of the macromolecules in the fluidic systemmake the fluid flow behavior complex and intriguing .nevertheless , for rigid macromolecules , the effective channel dimensions can be estimated by subtracting the macromolecular dimension from the physical dimension of the channel .the resulting system may be approximated by simple ions for most analysis .specifically , the charge on the wall surface derives electrostatic interactions and electrokinetic effects when ions in a solution are sufficiently close to channel wall .since the surface - to - volume ratio is exceptionally high in a nanoscale channel , surface charges induce a unique electrostatic screening region , i.e. , edl .in fact , it attracts ions charged oppositely ( counterions ) and repels ions having the same charge ( coions ) to sustain the electroneutrality of an aqueous solution confined in a channel .physically , the edl region only contains bound or mobile counterions and typically covers the nano - sized pore of a channel .therefore , the oppositely charged ions mainly constitute the electrical current through a micro- or nano - channel .the rectification of ionic current , which is one of the distinct transport properties of nanofluidic channels , can further elucidate the flow pattern and formation of the fluid through a nanochannel .this phenomenon usually occurs when surface charge distribution , applied electric field , bulk concentration and/or channel geometry are properly manipulated along the channel axis .conducted experiments to present ion - enrichment and ion - depletion effects on nanochannels to show that the rectification begins with these two effects . 
in their design ,an applied field gave rise to accumulation of all ions at the cathode and absence of all ions at the anode of the channels .ion selectivity is another important feature which enables nano - sized channels to work as an ionic filter .it is defined as the ratio of the difference between currents of cations and anions to the total current delivered by both ions .vlassiouk and his colleagues examined the ion selectivity of single nanometer channels under various conditions including channel dimension , buffer concentration and applied voltage .nanofluidics has been extensively studied in chemistry , physics , biology , material science , and many areas of engineering .the primary purpose of most studies is to separate and/or detect biological substances in a complex solution .a variety of nanofluidic devices have been produced using extraordinary transport behaviors caused by steric restriction , polarization and electrokinetic principles .for instance , a nanofluidic diode is an outstanding tool to take the advantage of the rectifying effect of ionic current through a nanochannel .the nanofluidic diodes have been developed to govern the flow inside the channel by breaking the symmetry in channel geometry , surface charge arrangement and bulk concentration under the influence of applied voltage .additionally , the design and fabrication of nanofluidics for molecular biology applications is a new interdisciplinary field that makes use of precise control and manipulation of fluids at submicrometer and nanometer scales to study the behavior of molecular and biological systems . because of the microscopic interactions , fluids confined at the nanometer scale can exhibit physical behaviors which are not observed or insignificant in larger scales .when the characteristic length scale of the fluid coincides with the length scale of the biomolecule and the scale of the debye length , nanofluidic devices can be employed for a variety of interesting basic measurements such as molecular diffusion coefficients , enzyme reaction rates , ph values , and chemical binding affinities .micro- and nanofluidic techniques have been instrumented for polymerase chain reaction ( pcr ) amplifications , macromolecule accumulator , electrokinetics , biomaterial separation , membrane protein crystallization , and micro - scale gas chromatography .nanofluidic dynamic arrays have also been devised for high - throughput single nucleotide polymorphism genotyping .nanofluidic devices have also been engineered for electronic circuits , local charge inversion , and photonic crystal circuits .microchannels and micropores have been utilized for cell manipulation , cell separation , and cell patterning .efforts are given to accomplish all steps , including separation , detection and characterization , on a single microchip .despite of rapid development in nanotechnology , the design and fabrication of nanofluidic systems are essentially empirical at present . 
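for reference, the ion selectivity mentioned earlier in this section is usually written in the standard form (not a formula quoted from the paper)
\[
S \;=\; \frac{I_{+}-I_{-}}{I_{+}+I_{-}},
\]
where \(I_{+}\) and \(I_{-}\) are the currents carried by cations and anions, so that \(S=\pm 1\) corresponds to a perfectly cation- or anion-selective channel and \(S=0\) to no selectivity. likewise, to make the debye screening length introduced above concrete, the minimal sketch below evaluates it for a symmetric 1:1 electrolyte such as kcl; the function name, default parameters and sample concentrations are illustrative choices and are not taken from the original work.

```python
import math

def debye_length(c_bulk_mol_per_l, eps_r=80.0, temperature=298.15, z=1):
    """debye screening length (metres) of a symmetric z:z electrolyte.

    c_bulk_mol_per_l : bulk salt concentration in mol/l
    eps_r            : relative dielectric constant of the solvent (water ~ 80)
    temperature      : absolute temperature in kelvin
    z                : ion valence (1 for kcl)
    """
    e = 1.602176634e-19        # elementary charge, C
    kb = 1.380649e-23          # boltzmann constant, J/K
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    na = 6.02214076e23         # avogadro's number, 1/mol

    n = c_bulk_mol_per_l * 1000.0 * na          # number density, 1/m^3
    # lambda_D = sqrt( eps_r*eps0*kB*T / sum_i n_i q_i^2 ); for a z:z salt the sum is 2 n (z e)^2
    return math.sqrt(eps_r * eps0 * kb * temperature / (2.0 * n * (z * e) ** 2))

if __name__ == "__main__":
    for c in (0.001, 0.01, 0.1, 1.0):           # mol/l
        print("c = %.3f M  ->  lambda_D ~ %.2f nm" % (c, debye_length(c) * 1e9))
```

at 0.1 m the screening length is roughly 1 nm, while at 1 mm it approaches 10 nm, which is why the double layers of a nanometre-scale channel overlap at low salt but not at high salt.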
since nanofluidic device prototyping and fabrication are technically challenging and financially expensive , it is desirable to further advance the field by mathematical / theoretical modeling and simulation .the modeling and simulation of nanofluidic systems are of enormous importance and have been a growing field of research in the past decade .when the width of a channel is less than 5 nm , the transport analysis requires the discreteness of substances and , in particular , molecular dynamics ( md ) is a useful tool in this respect .typically , the md determines the motion of each atom in a system using the newton s classical equations of motion .a simplified model is brownian dynamics ( bd ) , in which the solvent water molecules are treated implicitly , so this method costs less computationally than the md and is able to reach the time scale of physical transport .the bd describes the motion of each ion under frictional , stochastic and systematic forces by means of langevin equation .further reduction in the computational cost leads to the poisson - nernst - planck ( pnp ) theory , which is the most renowned model for charge transport .the pnp model describes the solvent water molecule as a dielectric continuum , treats ion species by continuum density distributions and , in principle , retains the discrete atomic detail and/or charge distribution of the channel or pore .the performance of the pb model and the pnp model for the streaming current in silica nanofluidic channels was compared .the brownian dynamics of ions in the nanopore channel was combined with the continuum pnp model for regions away from the nanopore channel .the reader is referred to the literature for a comprehensive discussion of the pnp theory .a further simplified model is the lippmann - young equation , which is able to predict the liquid - solid interface contact angle and interface morphology under an external electric field .most microfluidic systems involve fluid flow .if the fluid flow through a microfluidic pore or channel is also a concern in the theoretical modeling , coupled pnp and the navier - stokes ( ns ) equations can be utilized .these models are able to provide a more detailed description of the fluid flow away from the microscale pore or channel , i.e. , beyond the debye screening length . 
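for reference, the widely used form of the pnp system referred to above couples the poisson equation for the electrostatic potential to one nernst - planck equation per ionic species; the sketch below is the standard textbook form, not the generalized equations derived later in the paper:
\[
-\nabla\cdot\big(\epsilon(\mathbf{r})\,\nabla\phi\big) \;=\; \rho_{\mathrm{fixed}}(\mathbf{r}) + \sum_{i} q_i\,c_i(\mathbf{r}),
\qquad
\frac{\partial c_i}{\partial t} \;=\; \nabla\cdot\Big[D_i\Big(\nabla c_i + \frac{q_i\,c_i}{k_B T}\,\nabla\phi\Big)\Big],
\]
where \(\phi\) is the electrostatic potential, \(c_i\), \(q_i\) and \(D_i\) are the concentration, charge and diffusion coefficient of the \(i\)-th species, \(\epsilon\) the position-dependent permittivity and \(\rho_{\mathrm{fixed}}\) the fixed (e.g., atomic) charge density of the channel; the steady-state problem sets the time derivative to zero, and coupling to fluid flow adds a convective term \(-\,\mathbf{u}\cdot\nabla c_i\) supplied by the navier - stokes equations.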
recently, a variety of differential geometry based multiscale models were introduced for charge transport .the differential geometry theory of surface provides a natural means to separate the microscopic domain of biomolecules from the macroscopic domain of solvent so that appropriate physical laws are applied to appropriate domains .our variational formulation is able to efficiently bridge macro - micro scales and synergically couple macro - micro domains .one class of our multiscale models is the combination of laplace - beltrami equation and poisson - kohn - sham equations for proton transport .another class of our multiscale models utilizes laplace - beltrami equation and generalized pnp equations for the dynamics and transport of ion channels and transmembrane transportors .the other class of our multiscale models alternate the md and continuum elasticity ( ce ) descriptions of the solute molecule , as well as continuum fluid mechanics formulation of the solvent .we have proposed the theory of continuum elasticity with atomic rigidity ( cewar ) to treat the shear modulus as a continuous function of atomic rigidity so that the dynamics complexity of a macromolecular system is separated from its static complexity . as a consequence ,the time - consuming dynamics is approximated by using the continuum elasticity theory , while the less time - consuming static analysis is carried out with an atomic description .efficient geometric modeling strategies associated with differential geometry based multiscale models have been developed in both lagrangian eulerian and eulerian representations .nevertheless , in nanofluidic modeling , computation and analysis , there are many standing theoretical and technical problems . for example , nanofluidic processes may induce structural modifications and even chemical reactions , which are not described in the present nanofluidic simulations . additionally ,although the pnp model can incorporate atomic charge details in its pore or channel description , which is vital to channel gating and fluid behavior , atomic charge details beyond the coarse description of surface charges are usually neglected in most nanofluidic simulations . moreover , as discussed earlier , stern layer and ion steric effect are significant for the edl , and are not appropriately described in the conventional pnp model .furthermore , nanofluidic simulations have been hardly performed in 3d realistic settings with physical parameters .consequently , results can only be used for qualitative ( i.e. , phenomenological ) comparison and not for quantitative prediction . finally , the material interface induced jump conditions in the poisson equation are seldom enforced in nanofluidic simulations with realistic geometries . 
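the interface jump conditions mentioned in the last sentence are, in the standard dielectric setting with no free interfacial charge (a textbook statement, not the paper's specific formulation), the continuity of the potential and of the normal component of the displacement across the solid - fluid interface \(\Gamma\):
\[
[\phi]_{\Gamma} \;=\; \phi^{+}-\phi^{-} \;=\; 0,
\qquad
[\epsilon\,\partial_{n}\phi]_{\Gamma} \;=\; \epsilon^{+}\partial_{n}\phi^{+}-\epsilon^{-}\partial_{n}\phi^{-} \;=\; 0,
\]
where the superscripts denote the limiting values on the two sides of \(\Gamma\) and \(\partial_{n}\) is the normal derivative; a surface charge density, if present, appears on the right-hand side of the second condition, and enforcing these conditions accurately across an irregular interface is precisely what interface schemes such as the mib method are designed for.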
therefore , it is imperative to address these issues in the current nanofluidic modeling and simulation .the objective of the present work is to model and analyze realistic nanofluidic channels with atomic charge details and introduce second - order convergent numerical methods for nanofluidic problems .we present a new variational derivation of the governing pnp type of models without utilizing the differential geometry formalism of solvent - solute interfaces .as such a domain characteristic function is introduced to represent the given solid - fluid interface .additionally , we investigate the impact of atomic charge distribution to the fluid behavior of a few 3d nanoscale channels .we demonstrate that atomic charges give rise specific and efficient control of nanochannel flows .moreover , we develop a second - order convergent numerical method for solving the pnp equations with complex nanochannel geometry and singular charges . furthermore , the change of the distribution in atomic charge distribution is orchestrated with the variation of applied external voltage and bulk ion concentration to understand nanofluidic currents .therefore , we are able to elucidate quantitatively the transport phenomena of three types of nano - scaled channels , including a negatively charged channel , a bipolar channel and a double - well channel .these flow phenomena are analyzed in terms of electrostatic potential profiles , ion concentration distributions and current - voltage characteristics . to ensure computational accuracy and efficiency for nanofluidic systems , we construct a second order convergent method to solve poisson - nernst - planck equations with dielectric interface and singular charge sources in 3d realistic settings .the rest of this paper is organized as follows .section [ theory ] is devoted to a new variational derivation of pnp type of models using a domain characteristic function for nanofluidic simulations . in section [ computation ] , we develop a dirichlet to neumann mapping for dealing with charge singularities and the matched and boundary interface ( mib ) method for material interfaces .these methods are employed to compute the pnp equations with 3d irregular channel geometries and singular charges .section [ validation ] is devoted to validate the present pnp calculation with synthetic nanoscale channels .we first test a cylindrical nanochannel with one charged atom at the middle of the channel and then examine the channel with eight atomic charges that are placed around the channel . since pnp equations admit no analytical solution in general , we design analytical solutions for a modified pnp system which has the same mathematical characteristic as the pnp system . in section[ result ] , we investigate the atomic scale control and regulation of cylindrical nanofluidic systems .three nanofluidic channels , a negatively charged channel , a bipolar channel and a double - well channel , are studied in terms of electrostatic potential profile , ion concentration distribution and current .finally , this paper ends with concluding remarks .unlike the charge and material transport in biomolecular systems , the charge and material transport in nanofluidic systems induces a negligible reconstruction of the solid - fluid interface compared to the system scale .therefore , instead of using our earlier differential geometric based multiscale models which allow the modification of the solvent - solute interface , we adopt a fixed solid - fluid interface in the present work . 
to this end, we introduce a domain characteristic function in our variation formulation .let us consider a total computational domain .we denote and respectively the microscopic channel domain and the solution domain .interface separates and so that .we introduce a characteristic function such that and .obviously , and are the indicators for the channel domain and the solution domain , respectively . unlike the hypersurface function in our earlier differential geometrybased multiscale models , the interface is predetermined in the present model . in the solution domain , we seek a continuum description of solvent and ions . in the channel or pore domain , we consider a discrete atomistic description .a basic setting of our model can be found in fig .[ schematic ] .[ cols="^,^ " , ] finally , we consider a double - well nanofluidic channel which is named after the shape of the electrostatic potential curve .the electrostatic potential through the channel axis in a cylindrical channel may have several potential wells by modifying atomic charge distribution .in fact , one of the most well - known biological channels , gramicidin a channel , has a double - well transmembrane ion channel . in this section, we design a cylindrical channel whose electrostatic potential curve has a double - well structure by varying the sign of atomic charges . as illustrated in fig .[ dw_channel ] , the middle section of the nanochannel is positively charged , but the other parts of the channel are negatively charged .m ( square ) , m ( triangle ) , m ( diamond ) , and m ( circle ) . with a high bulk ion concentration ,the total current gets increased and the i - v characteristics becomes linear . ] at first , we alter the applied voltage , but fix bulk ion concentration at m and the atomic charge distribution as described in fig . [ dw_channel ] . here , is set to be 0v and is increased gradually from to .figure [ dw_pot ] presents the electrostatic potential and ionic concentration along the channel length . on the left hand side of the inner channel ,the electrostatic potential becomes higher , which results in moderating the left potential well as in fig .[ dw_pot](a ) .subsequently , the positive ion concentration shows a dramatic change on the left hand side .moreover , the small change in the potential at the right hand side of the channel corresponds to the small change in the concentration profile on the right . as in fig .[ dw_concen ] , the increase in the total current through the double - well channel is derived from the increase in the bulk ion concentration . herein , the external voltage difference is the same ( ) . like the negatively charged channel, the i - v relation becomes linear as the bulk ion concentration gets multiplied .these results are consistent with those observed from both numerical simulations and experimental measurements of the gramicidin a channel .therefore , atomic design of 3d nanofluidic channels proposed in the present work can be used to study biological channels , which is particularly valuable when the structure is not available .recently the dynamics and transport of nanofluidic channels have received great attention . 
as a result ,related experimental techniques and theoretical methods have been substantially promoted in the past two decades .nanofluidic channels are utilized for a vast variety of scientific and engineering applications , including separation , detection , analysis and synthesis of chemicals and biomolecules .additionally , inorganic nanochannels are manufactured to imitate biological channels which is of great significance in elucidating ion selectivity and ion current controllability in response to an applied field in membrane channels .molecular and atomic mechanisms are the key ingredients in the design and fabrication of nanofluidic channels .however , atomic details are scarcely considered in nanofluidic modeling and simulation . moreover ,previous simulation of transport in nanofluidic channels has been rarely carried out with three - dimensional ( 3d ) realistic physical geometry .present work introduces atomistic design and simulation of 3d realistic ionic diffusive nanofluidic channels .we first proposes a variational multiscale paradigm to facilitate the microscopic atomistic description of ionic diffusive nanochannels , including atomic charges , and the macroscopic continuum treatment of the solvent and mobile ions .the interactions between the solution and the nanochannel are modeled by non - electrostatic interactions , which are accounted by van der waals type of potentials .a total energy functional is utilized to put macroscopic and microscopic representations on an equal footing .the euler - lagrange variation leads to generalized poisson - nernst - planck ( pnp ) equations . unlike the hypersurface in our earlier differential geometry based multiscale models ,the solid - fluid interface is treated as a given profile .a domain characteristic function is introduced to replace the hypersurface function in our earlier formulation .efficient and accurate numerical methods have been developed to solve the proposed generalized pnp equations for nanofluidic modeling .both the dirichlet - neumann mapping and matched interface and boundary ( mib ) methods employed to solve the pnp system in 3d material interface and charge singularity .rigorous numerical validations are constructed to confirm the second - order convergence in solving the generalized pnp equations .the proposed mathematical model and numerical methods are employed for 3d realistic simulations of ionic diffusive nanofluidic systems .three distinct nanofluidic channels , namely , a negatively charged nanochannel , a bipolar nanochannel and a double - well nanochannel , are constructed to explore the capability and impact of atomic charges near the channel interface on the channel fluid flow .we design a cylindrical nanofluidic channel of 49 in length and 10 in diameter .several charged atoms of about 1.8 angstrom apart are equally located outside the channel to regulate nanofluidic patterns . 
for the negatively charged channel ,all of the atoms have the negative sign ; on the other hand , for the bipolar channel , half of them has the negative sign and the other half has the positive sign .a double - well channel has positively charged atoms at the middle and negatively charged atoms on the remaining part of the channel .each end of the channel is connected to a reservoir of kcl solution and both reservoirs have the same bulk ion concentration .asymmetry in the applied electrostatic potentials at the ends of two reservoirs gives rise to current through these nanochannels .we perform numerical experiments to explore electrostatic potential , ion concentration and current through the channels under the influence of applied voltage , atomic charge and bulk ion concentration .the negatively charged channel generates a unipolar current because the negative atomic charge attracts counterions , but repels coions . the current within the nanochannel increases whenexternal voltage , magnitude of atomic charge and/or bulk ion concentration are increased. however , the bulk ion concentration has a limitation in its growth because a larger bulk ion concentration shortens debye length and thus the charged channel may behave like an uncharged one showing the ohm s law .the bipolar channel can create accumulation or depletion of both ions in response to the current direction .when the right end has a higher voltage , both ions are stored at the junction of the channel length . on the contrary ,when the left end has a higher voltage , both ions are moved away from the junction .applied voltage , atomic charge and bulk ion concentration affect the amplitude and gradient of the current - voltage characteristic .at last , the special atomic charge distribution of the double - well channel produces the electrostatic potential profile with two potential wells . increasing applied voltage at the left hand side of the system results in an obvious change in the left potential well and the k concentration on the left .the present study concludes that the properties and quantity of the current though an ionic diffusive nanochannel can be effectively manipulated by carefully altering applied voltage , atomic charge and bulk ion concentration .our results compare well with those of experimental measurements and theoretical analysis in the literature .since the physical size of model is close to realistic transmembrane channels , the present model can be utilized not only for ionic diffusive nanofluidic design and simulations , but also for the prediction of membrane channel properties when the structure of the channel protein is not available or changed due to the mutation .non - electrostatic interactions , are considered in our theoretical modeling but are omitted in the present numerical simulations to focus on atomistic design and simulation of 3d realistic ion diffusive nanofluidic channels . however, non - electrostatic interactions can be a vital effect in nanofluidic systems .a systematical analysis of non - electrostatic interactions is under our consideration .this work was supported in part by nsf grants iis-1302285 and dms-1160352 , and nih grant r01gm-090208 .the authors thank an anonymous reviewer for useful suggestions .d. branton , d. w. deamer , a. marziali , h. bayley , s. a. benner , t. butler , m. di ventra , s. garaj , a. hibbs , x. huang , et al .the potential and challenges of nanopore sequencing . , 26(10):11461153 , 2008 .d. d. busath , c. d. thulin , r. w. hendershot , l. r. 
phillips , p. maughan , c. d. cole , n. c. bingham , s. morrison , l. c. baird , r. j. hendershot , m. cotten , and t. a. cross .noncontact dipole effects on channel permeation .i. experiments with ( 5f - indole)trp gramicidin a channels . , 75:28302844 , 1998 .b. y. kim , j. yang , m. j. gong , b. r. flachsbart , m. a. shannon , p. w. bohn , and j. v. sweedler .multidimensional separation of chiral amino acid mixtures in a multilayered three - dimensional hybrid microfluidic / nanofluidic device ., 81:27152722 , 2009 .m. g. kurnikova , r. d. coalson , p. graf , and a. nitzan .a lattice relaxation algorithm for three - dimensional poisson - nernst - planck theory with application to ion transport through the gramicidin a channel ., 76:642656 , 1999 .j. wang , m. lin , a. crenshaw , a. hutchinson , b. hicks , m. yeager , s. berndt , w. y. huang , r. b. hayes , s. j. chanock , r. c. jones , and r. ramakrishnan .high - throughput single nucleotide polymorphism genotyping using nanofluidic dynamic arrays . , 10(561 ) , 2009 .y. wang , k. pant , z. j. chen , g. r. wang , w. f. diffey , p. ashley , and s. sundaram .numerical analysis of electrokinetic transport in micro - nanofluidic interconnect preconcentrator in hydrodynamic flow ., 7:683696 , 2009 .
|
recent advances in nanotechnology have led to rapid progress in nanofluidics , which has been established as a reliable means for a wide variety of applications , including molecular separation , detection , crystallization and biosynthesis . although atomic and molecular level consideration is a key ingredient in the experimental design and fabrication of nanofluidic systems , atomic and molecular modeling of nanofluidics is rare and , to the best of our knowledge , most simulations at the nanoscale in the literature are restricted to one or two dimensions . the present work introduces atomic scale design and three - dimensional ( 3d ) simulation of ionic diffusive nanofluidic systems . we propose a variational multiscale framework to represent the nanochannel in discrete atomic and/or molecular detail while describing the ionic solution as a continuum . apart from the major electrostatic and entropic effects , the non - electrostatic interactions between the channel and the solution , and among solvent molecules , are accounted for in our modeling . we derive generalized poisson - nernst - planck ( pnp ) equations for nanofluidic systems . mathematical algorithms , such as the dirichlet to neumann mapping and the matched interface and boundary ( mib ) methods , are developed to rigorously solve the aforementioned equations to second - order accuracy in 3d realistic settings . three ionic diffusive nanofluidic systems , including a negatively charged nanochannel , a bipolar nanochannel and a double - well nanochannel , are designed to investigate the impact of atomic charges on channel current , density distribution and electrostatic potential . numerical findings , such as gating , ion depletion and inversion , are in good agreement with those from experimental measurements and numerical simulations in the literature . channel design , atomic design , charge gating , unipolar channel , bipolar channel , double - well channel .
|
rna interference is a complex biological process that occurs in many eukaryotes and fulfils a regulatory role by allowing control over gene expression , while also providing an effective immune response against viruses and tranposons through its ability to target and destroy specific mrna molecules .this multi - step process is mediated by double - stranded rnas ( dsrna ) of different lengths that are generated by an inverted - repeat transgene , or an invading virus during its replication process .a very simple description of the core pathway is as follows .the presence of transgenic or viral dsrna triggers an immune response within the host cell , whereby the foreign rna is targeted by specialized enzymes called dicers ( dlc ) .these enzymes cleave the target rna into short 21 - 26 nucleotide long molecules , named short interfering rnas ( sirna ) or microrna ( mirna ) , which can subsequently be used to assemble a protein complex , called the rna - induced silencing complex ( risc ) .this specialized complex can recognise and degrade rnas containing complementary sequences into garbage rna that can no longer be translated into a functioning protein , thus leading to the translational arrest of the viral or transgenic rna . while the core pathway might be sufficient to describe rna interference in mammals , for other organismsit is possible that the process is not strictly limited to the molar concentration of sirna at the initiating site , but can spread systemically . in the studies of rna interference in the nematode _ caenorhabditis elegans _, it was observed that a notable portion of the produced sirna was not derived directly from the initializing dsrna , suggesting a presence of a mechanism in which some additional dsrna could be generated . to account for this discovery , primed and unprimed amplification pathways were proposed ,in which an rna - dependent rna polymerase ( rdrp ) or rna replicase could synthesize the additional unaccounted dsrna . in the case of primed amplification, it is postulated that when assisted by rdrp , the sirna which binds on mrna can itself initialise dsrna synthesis , thus generating a new round of dsrnas ready to be used in the process . on the other hand ,unprimed amplification describes the situation where dsrna synthesis occurs without the assistance of the primer rdrp , but instead relies on the presence of garbage rna to facilitate synthesis . as in most complex biological processes, rnai carries risks and is prone to different errors , as it necessitates the host s ability to correctly discriminate between endogenous and exogenous mrna .thus , any invading viral sequences with cross - reactive similarities or accidental production of anti - sense transcripts corresponding to self genes can result in a self - reactive response that can be extremely damaging to the host . 
to limit the self - damage caused by the feed - forward amplification in rnai ,a protection mechanism has been proposed in .a number of mathematical models have considered different aspects of rnai in its roles of immune guard against viral infections , as well as an attractive tool for targeted gene silencing that is important for gene therapies .one of the earliest models was developed and analysed by bergstrom et al .these authors focused on the issue of avoiding self - directed gene silencing during rnai and hypothesised that this can be achieved via _ unidirectional amplification _, whereby silencing only persists in the presence of a continuing input of dsrna , thus acting as a safeguard against a sustained self - damaging reaction , or , in the case of viral infection , ending the process once the infection is cleared .this model was extended by groenenboom et al . , who analysed primed and unprimed amplification pathways to account for the dsrna dosage - dependence of rnai and to correctly describe the nature of transient and sustained silencing .groenenboom and hogeweg and rodrigo et al . have analysed how viral replication is affected by its interactions with rnai for plus - stranded rna viruses , with particular account for different viral strategies for evading host immune response . similarly to natural or artificial control systems , biological systems also possess intrinsic delays that arise from the lags in the sensory process of response - initiating variables , the transportation of components that regulate biological interactions , after - effect phenomena in inner dynamics and metabolic functions , including the times necessary for synthesis , maturation and reproduction of cells and whole organisms .these delays can often lead to changes in stability and play a significant role in modelling control systems that typically involve a feedback loop . on the other hand , mathematical models without time delays are based on the assumption that the transmission of signals and biological processes occur instantaneously . although the timescale associated with these delays can sometimes be ignored , for instance , when the characteristic timescales of the model are very large compared to the observed delays , there are clear cases where the present and future state of a system depend on its past history . in such situations , dynamics of the system can only be accurately described with delay differential equations rather than the traditional ordinary differential equations . due to the non - instantaneous nature of the complex processes involved in rna interference ,it is biologically feasible to explicitly include time delays associated with the times required for transport of rnai components , and assembly of different complexes .nikolov and petrov and nikolov et al . have considered the effects of such time delays within a single amplification pathway as modelled by bergstrom et al . . under a restrictive and somewhat unrealistic assumption that the natural degradation of risc - mrna complex takes place at exactly the same speed as formation of new dsrna ,the authors have shown how time delays can induce instability of the model steady state , thus disrupting gene silencing and causing oscillations . 
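to make the distinction with ordinary differential equations explicit, a generic delay differential equation has the form (a textbook statement, not a formula from the works cited above)
\[
\dot{x}(t) \;=\; f\big(x(t),\,x(t-\tau)\big),
\qquad
x(s) \;=\; x_{0}(s)\ \ \text{for } s\in[-\tau,0],
\]
so the evolution at time \(t\) depends on the state a time \(\tau\) earlier, and the initial data must be prescribed as a history function on the whole interval \([-\tau,0]\) rather than at a single instant; this is why the model introduced below is equipped with initial conditions defined on delay intervals.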
in the context of sirna - based treatment , bartlett and davis performed a detailed analysis of the process of sirna delivery and its interaction with the rnai machinery in mammalian cells , and compared it to experimental results in mural cell cultures .this model and associated experiments have provided significant insights into optimising the dosage and scheduling of the therapeutic sirna - mediated gene silencing .raab and stephanopoulos also considered sirna dynamics in mammalian cells with an emphasis on two - gene systems with different kinetics for the two genes .arciero et al . studied a model of sirna - based tumour treatment which targets the expression of tgf- , thus reducing tumour growth and enhancing immune response against tumour cells .since originally rna interference was discovered in plants , which present a very convenient framework for experimental studies of rnai , a number of mathematical models have considered specific aspects of the dynamics of viral growth and its interactions with rnai in plants .groenenboom and hogeweg have analysed a detailed model for the dynamics of intra- and inter - cellular rna silencing and viral growth in plants .this spatial model has demonstrated different kinds of infection patterns that can occur on plant leaves during viral infections .more recently , neofytou et al . have analysed the effects of time delays associated with the growth of new plant tissue and with the propagation of the gene silencing signal .they have shown that a faster propagating silencing signal can help the plant recover faster , but by itself is not sufficient for clearance of infection .on the other hand , a slower silencing signal can lead to sustained periodic oscillations around a chronic infection state . in a very important practical context of viral co - infection ,neofytou et al . have studied how the dynamics of two viruses simultaneously infecting a single host is mediated by the rnai . in this paperwe consider a model of rnai with primed amplification , and focus on the role of two time delays associated with the production of dsrna directly from mrna , or from aberrant rna .an important result obtained in this study is partial destruction of the hysteresis loop : while the original model without time delays is bi - stable , under the influence of time delays , the steady state with either the smallest or the highest concentration of mrna can lose its stability via a hopf bifurcation .this leads to the co - existence of a stable steady state and a stable periodic orbit , which has a profound effect on the dynamics of the system .when the default steady state is destabilized by the time delays , our numerical analysis shows that the system will always converge to the silenced steady state . 
on the other hand , in parameter regimes where time delays destabilize the silenced steady state, the system will either converge to the default steady state , or it will oscillate around the unstable steady state depending on the initial conditions .in fact , under the influence of time delays , one would requires an even higher initial dosage of dsrna to achieve sustained silencing .however , when there is stable periodic orbit around the silenced steady state , one would also have to consider the amplitude of these oscillations and how it may affect the phenotypic stability of the species in question .thus , the augmented model exhibits an enriched dynamical behavior compared to its predecessor which otherwise can only be replicated by different extensions to the core pathway , like the rnase model developed in , which assumes the presence of a specific sirna - degrading rnase with saturating kinetics .the outline of the paper is as follows . in the next sectionwe introduce the model and discuss its basic properties . in section 3 we identify all steady states of the model together with conditions for their biological feasibility .sections 4 and 5 are devoted to the stability analysis of these steady states depending on model parameters , including numerical bifurcation analysis and simulations of the model that illustrate different types of dynamical behaviour .the paper concludes in section 6 with the discussion of results and open problems .to analyse the dynamics of rnai with primed amplification , following groenenboom et al . we consider the populations of mrna , dsrna , sirna and garbage ( aberrant ) rna , to be denoted by , , and , respectively .it is assumed that mrna is constantly transcribed by each transgene at rate , with being the number of transgenic copies , and is degraded at the rate . for simplicity, it will be assumed that each transgene produces the same amount of mrna .some dsrna is synthesized directly from mrna through the activity of rdrp at a rate .the available dsrna is cleaved by a dicer enzyme into sirna molecules at a rate . in this modelit is assumed that sirna is involved into forming two distinct complexes that use the sirna as a guide to identify and associate with different categories of rna strands to initiate the dsrna synthesis .the first is the risc complex responsible for degrading mrna into garbage rna , which decays naturally at a rate . for simplicity ,the risc population is not explicitly included in the model , but it is rather assumed that sirna directly associates with mrna at a rate . the second complex guided by sirna binds mrna aberrant ( garbage ) rna , and subsequently is primed by rdrp to synthesize additional dsrna ( primed amplification ) . to avoid unnecessary complexity, the second complex will also be represented implicitly by assuming that sirna directly associates with mrna and garbage rna for the purpose of dsrna synthesis at the rates and respectively . at this point, we include two distinct time delays and to represent the delays inherent in the the production of dsrna from mrna and garbage rna , respectively . with these assumptions ,the system describing the dynamics of different rna populations takes the form with the initial conditions ,\quad g(s ) = g_0(s)\ge 0,\quad s\in[-\tau_2,0],\\\\ s(s)=s_0(s)\ge 0,\quad s\in[-\tau,0],\quad \tau = \max\{\tau_1,\tau_2\},\quad d(0)\ge0 . 
\end{array}\ ] ] before proceeding with the analysis of the model ( [ system : garbage ] ) , we have to establish that this system is well - posed , i.e. its solutions are non - negative and bounded .invariance of the positive orthant follows straightforwardly from the theorem 5.2.1 in .existence , uniqueness and regularity of solutions of the system ( [ system : garbage ] ) with the initial conditions ( [ iconds ] ) follow from the standard theory discussed in .[ theo : boundedness ] suppose there exists a time , such that the solution of the model ( [ system : garbage ] ) satisfies the condition for all with .then , the solutions of the model ( [ system : garbage ] ) are bounded for all . * proof .* suppose .using the non - negativity of solutions , one can rewrite the first equation of the system ( [ system : garbage ] ) in the form which shows that is also bounded for .the last equation of ( [ system : garbage ] ) can now be rewritten as follows -d_g g(t).}\ ] ] since , this inequality suggests that if , then initially it may increase , but it will never reach the value of .similarly , if initially , then would be monotonically decreasing , and once its value is below , it would never go above it .hence , is also bounded for . the third equation of the system ( [ system : garbage ] )can be recast in the form using the assumption of boundedness of and the comparison theorem , one then has which implies that is bounded for . hence , one concludes the existence of upper bounds , and , such that , and for all , which concludes the proof . in all our numerical simulations , including the ones presented in section 5 , the solutions of the system ( [ system : garbage ] ) always satisfied the condition that remains bounded , which , in light of * theorem [ theo : boundedness ] * , implies boundedness of all other state variables .steady states of the system ( [ system : garbage ] ) are given by non - negative roots of the following system of algebraic equations it is straightforward to see that the system ( [ sseq ] ) does not admit solutions with , as this would immediately violate the first equation due to the presence of the constant transcription of mrna .substituting into the third equation implies , and due to the second equation this then implies , which is impossible .hence , there can be no steady states with either or being zero .similarly , if , the last equation implies which again is not possible .thus , the system can only exhibit steady states where all components are non - zero .let us introduce the following auxiliary parameters assuming , one can solve the first equation of ( [ sseq ] ) to obtain adding the second and the third equations of the system ( [ sseq ] ) gives (n_2 - 1)}.\ ] ] one should note that for and , if and only if the following condition holds which implies that must satisfy from the last equation of the system ( [ sseq ] ) and using the expression for we obtain substituting these values back into the third equation of the system ( [ sseq ] ) one obtains the following cubic equation for where + d_gd_s(p + d_m),\\ \alpha_2=\hat{h}[b_1b_3(1+n_3-n_2n_3 ) + b_2b_3(1-n_2 ) ] + b_3d_s(p+ d_m ) + bd_gd_s . 
\end{array}\ ] ] it is obvious that the cubic has at least one positive real root for any in fact , by using descartes s rule of signs one can deduce that this cubic has exactly one positive and two negative roots , with the exception of and , when it admits three positive roots .we can summarise this in the following theorem .[ theorem : steady states ] let be the discriminant of equation ( [ poly : q(s ) ] ). then equation ( [ poly : q(s ) ] ) has three distinct real roots if and only if , and it has three real roots with one double root if . therefore , there will be a single feasible equilibrium if either ; or ; or , , and .on the other hand , if and , and , then there are exactly three distinct feasible equilibria . for the degenerate situation of ,when and , anything between one and three distinct feasible equilibria is possible .linearisation of the delayed system ( [ system : garbage ] ) around the steady state yields the following characteristic equation where the coefficients , , are given in the appendix , and for convenience of notation we have dropped stars next to the steady state values and introduced auxiliary parameters , . in the case of instantaneous primed amplification , i.e. for in ( [ sys : nodelays ] ) , any steady state defined in * theorem [ theorem : steady states ] * is linearly asymptotically stable , if the appropriate routh - hurwitz conditions are satisfied , i.e if , , and . as a first case, we consider a situation where one of the primed amplification delays is negligibly small compared to other timescales of the model , so that that part of the amplification pathway can be considered to take place instantaneously .formally , this can be represented by for some , with for . in this case ,analysis of the distribution of roots of the characteristic equation follows the methodology of .the first step is to rewrite the characteristic equation ( [ sys : nodelays ] ) in the form where , \gamma_2 = -an_2b_3[b_1n_3sm + ( d_g + \hat{h}m^{-1})g],\\\\ \gamma_3 = b b_1 n_3 m s^2(b_3 + bmg^{-1 } ) + an_2d[b_1n_3g^{-1}(a + \hat{h } ) + a\hat{h}g^{-1}m^{-1}]\\ \hspace{1cm}+\hat{h}(ab_1n_3sg^{-1 } + b_3d_g gm^{-1 } ) + a[b_3d_gg + m(b^2s + bpn_2)],\\\\ \delta_1 = ab_1b_2n_2n_3ms(bsm - \hat{h})g^{-1},\quad \delta_2 = an_2b_3[b_1n_3s(bms -\hat{h } ) - \hat{h}d_ggm^{-1}],\\\\ \delta_3 = abb_1n_3ms[pn_2mg^{-1 } - s(b_3 + bmg^{-1 } ) ] + a\hat{h}(ab_1n_2n_3g^{-1 } + b_3d_g gm^{-1 } ) .\end{array}\ ] ] if one of the delays is zero , we have where to investigate whether this equation can have purely imaginary roots , we substitute with some and separate real and imaginary parts , which yields the following system of equations squaring and adding these two equations gives the equation for the hopf frequency with let us assume that the equation ( [ equation : h(v ) ] ) has four distinct positive roots denoted by , , and .this implies that the equation ( [ eq : chp_single_delay ] ) in turn has four purely imaginary roots , , where with the help of auxiliary parameters one can rewrite the system ( [ sys : trigonometric ] ) in the form from this system we obtain which gives the values of the critical time for each , and any as . 
} \end{array}\ ] ] this allows us to define the following : in order to establish whether the steady state , , actually undergoes a hopf bifurcation at , one has to compute the sign of /d \tau_n ] , it is clear that for all , and \!\!,\ ] ] where consequently , with one can write /d \tau_n ] are computed as follows where and are the smallest possible integers for which the corresponding delays are non - negative ..baseline parameter values for the system ( [ system : garbage ] ) .the majority of the parameter values are taken from . [ cols="<,<,<,<",options="header " , ] in order to understand the effects of different parameters on feasibility and stability of different steady states and investigate the role of the time delays associated with primed amplification , we have used a pseudospectral method implemented in a tracedde suite for matlab to numerically compute the eigenvalues of the characteristic equation ( [ eq : chp_two_delays ] ) .the baseline parameter values are mostly taken from and are shown in the table [ tab : param ] .it is assumed that mrna is stable with a half - life of 5 hours , garbage rna decays 20 times faster than mrna , and the half - life of sirna is taken to be 21 mins as measured in human cells .the rest of the baseline parameters are chosen such as to illustrate all the different types of dynamical behavior that the model ( [ system : garbage ] ) can exhibit .since rna interference is a very complex multi - component process , many parameter values are case - specific and hard to obtain experimentally .hence , rather than focus on a specific set of parameters , we explore the dynamics through an extensive bifurcation analysis .figure [ fig:2](b ) shows that if the rate , at which the risc - mrna complex is formed , is sufficiently small , then only a single steady state is feasible , and it is stable for small or high numbers of transgenes , and unstable for intermediate values of . as the value of increases , sustained silencing occurs at higher numbers of transgenes and higher mrna levels .the system also acquires an additional unstable feasible steady state with an intermediate level of mrna , thus creating a region of bi - stability , as shown in figs .[ fig:2](c ) and ( d ) .the range of values of transgenes , for which the bi - stability is observed , itself increases with , which means that if the risc complexes are more efficient in cleaving mrna ( risc overexpression ) , it is possible to have the stable states with high and low values of mrna for higher and lower numbers of transgenes , respectively , and that the range of transgenes for which introduction of dsrna triggers sustained silencing becomes larger .a very interesting and counter - intuitive observation from figs . [ fig:2](c ) and ( d ) is that the actual values of the steady state mrna concentration are also growing with .one possible explanation for this is that the reduced availability of mrna means that a smaller amount of it can be directly used to synthesize dsrna , as described by the term in the second equation of ( [ system : garbage ] ) , and more mrna is directly degraded into the garbage rna , thus generating a smaller feedback loop in the model for sufficient silencing to occur . 
when one considers the effect of varying the rate of forming rdrp - mrna complexes , the behaviour is qualitatively different in that increasing leads to the reduction in the size of the bi - stability region , and for sufficiently high values of , the intermediate steady state completely disappears , and the system possesses a single feasible steady state , which is stable for low and high numbers of transgenes , and unstable for intermediate values of , as shown in fig .[ fig:3 ] . increasing the rate leads to a decrease in the maximum values that can be attained by the mrna concentration .similar behaviour is observed in fig .[ fig:4 ] , where the rate of forming rdrp - garbage complexes is varied . increasing this rate results in a reduced region of bi - stability andsmaller values of the maximum mrna concentration , but at the same time , it does not result in the complete disappearance of the bi - stability region , as was the case when the rate was varied .comparing the influence of the rate , at which rdrp synthesises dsrna directly from the mrna , to the number of sirna produced by dicer per cleaved dsrna , one can notice that for sufficiently small and , only the steady state is feasible and stable , and , therefore , the strength of rna silencing is severely limited , with a relatively high concentration of mrna surviving , as illustrated in figs .[ fig:59](a)-(b ) .this agrees very well with experimental observations in which plants carrying a mutation in rdrp can not synthesize trigger - dsrna directly from mrna , and , thus , fail to induce transgene - induced silencing , but similarly to mammals who do not carry rdrp , might experience transient silencing .increasing , reduces the range of values , for which bi - stability occurs , and eventually it leads to the complete disappearance of the intermediate steady state . for higher value of ,the state can exhibit instability in a small range of values , and for even higher rates of dsrna production , this steady state is always stable , thus signifying that gene silencing has been achieved . from a biological perspective , this should be expected , as by increasing , more mrna can be used for dsrna synthesis , which is then used for the production of sirna , which in turn amplifies the process even further .this is consistent with experimental observations which show that strains of the fungus _ neurospora crassa _ , which overexpress rdrp , are able to progressively carry fewer transgenes without reverting back to their wild type . as such, even a single transgene is sufficient to induce gene silencing and thus preserve the phenotypic stability of the species .when one considers the relative effects of the degradation rates of mrna and garbage rna , it becomes clear that if the mrna decays quite slowly , while garbage rna decays fast , in a certain range of values the system does not converge to any steady states but rather exhibits periodic solutions , as shown figs .[ fig:59](a)-(b ) . 
as the rate of mrna degradationis increased , this reduces the range of possible values where periodic behaviour is observed , until it eventually disappears completely .it is important to note that higher values of correspond to and lower values correspond to , which suggests that decreasing the rate of garbage rna degradation results in more of it being available for additional dsrna synthesis , which subsequently results in a more efficient gene silencing .figure [ fig:10 ] shows how the region where the system ( [ system : garbage ] ) is bi - stable depends on the number of transgenes and the time delay , associated with a delayed production of dsrna from aberrant rna when the delay associated with production of dsrna from mrna is fixed at .this figure shows that when , the system is bi - stable in the approximate range , and for sufficiently small up until , the behaviour of the system remains largely unchanged , whereas for and sufficiently small number of transgenes , the silenced steady state loses stability . this stability can be regained for some higher values of , but then it will be lost again .steady states with higher values of are not affected by the variations in and remain stable throughout the bi - stability region . in a similar way , the steady state can also lose its stability , but unlike , this happens for high values of transgenes , and the range of values where instability happens is smaller than for .these results suggest that the time delays associated with primed amplification can result in a destabilisation of the steady states and , thus disrupting gene silencing . when both time delays are varied , as shown in figs .[ fig:11 ] and [ fig:14 ] , the steady state without sufficient silencing is always stable , whereas increasing and/or causes the silenced steady state to switch between being stable or unstable .we note that the boundaries of the stability crossing curves shown in fig .[ fig:11 ] are analytically described by ( [ stab_curves ] ) .figure [ fig:14 ] illustrates that whilst the time delays do not affect the shape of the hysteresis curve , they can cause some extra parts of it to be unstable , which happens for smaller values of the time delay to only , and for higher values of the time delays to as well .a possible interpretation of this result is that the feedback loop in the model is highly sensitive to the speed dsrna production from its constituent parts . when the dsrna synthesis is hindered by the time delays, the production can not maintain the required consistent pace , and , as a result , one of the steady states loses stability , which gives birth to stable periodic solutions .figures [ fig:12 ] and [ fig:13 ] illustrate how the initial dosage of the dsrna , garbage rna and mrna affect the behaviour of the model . starting with the smaller number of transgenes , for which the system ( [ system : garbage ] ) is bi - stablewe see that in figs .[ fig:12](a),(b ) and figs .[ fig:13](a),(b ) , when the delays and are both set to zero , the system mostly converges to the steady state with a relatively high concentration of mrna for smaller numbers of transgenes , and to the steady state with a lower concentration of mrna for higher numbers of transgenes . 
as the time delays associated with the primed amplification increase , this increases the basin of attraction of for smaller , and the basin of attraction of for higher , as shown in figs .[ fig:12],[fig:13](c ) and figs .[ fig:12],[fig:13](d ) , respectively .these figures suggest that for sufficiently high dosage of dsrna and initial garbage rna or mrna being present in the cell , the system achieves a stable steady state where gene silencing is sustained . for higher values of the time delays, there is a qualitative difference in behaviour between lower and higher numbers of transgenes . for lower numbers of transgenes , the system exhibits a bi - stability between a stable steady state with a high concentration of mrna and a periodic orbit around the now unstable steady state . on the other hand , for higher values of ,there is still a bi - stability between and .whilst in this case , the system may appear not to be as sensitive to the effects of time delays in the primed amplification pathway , it is still evident that in the presence of time delays one generally requires a higher initial dosage of dsrna to achieve sustained silencing .furthermore , in the narrow range of values , where the steady state is destabilised by the time delays , numerical simulations show that the system always moves towards a stable steady state rather than oscillate around , thus suggesting that the hopf bifurcation of the steady state is subcritical .to illustrate the dynamics of the system ( [ system : garbage ] ) in different dynamical regimes , we have solved this system numerically , and the results are presented in fig .[ num_sim ] .figures ( a ) and ( b ) demonstrate the regime of bi - stability shown in figs . [fig:12],[fig:13](e ) , where under the presence of both time delays and depending on the initial conditions , the system either approaches the default stable steady state under a low initial dsrna dosage , or tends to a periodic orbit around the silenced steady state despite a high initial dsrna dosage .figure ( c ) corresponds to a situation where the number of transgenes is sufficiently high , and the steady state is destabilised by the time delays , in which case the system approaches a silenced steady state .it is interesting to note that prior to settling on the silenced state , the system exhibits a prolonged period of oscillations around this state - a phenomenon very similar to the one observed in models of autoimmune dynamics , where the system can also show oscillations and then settle on some chronic steady state .this behaviour highlights an important issue that during experiments one has to be able to robustly distinguish between genuine sustained oscillations and long - term transient oscillations that eventually settle on a steady state .in this paper we have considered a model of rna interference with two primed amplification pathways associated with the production of dsrna from sirna and two separate rdrp - carrying complexes formed by targeting mrna and garbage rna . for better biological realism ,we have explicitly included distinct time delays for each of these pathways to account for delays inherent in dsrna synthesis .the system is shown to exhibit up to three biologically feasible steady states , with a relatively low , medium , or high ( ) concentration of mrna .stability analysis of the model has shed light onto relative importance of different system parameters . 
for sufficiently small levels of host mrna ,the system has a single stable steady state , whose mrna concentration is growing with the number of transgenes .experimental observations suggest that the amount of transcribed mrna is an important factor in the ability of transcripts to trigger silencing .production of mrna can generally be enhanced in two ways : either the target transgene is under control of a 35s promoter with a double enhancer so that the gene is transcribed at a higher rate , or there are enough transgenic copies to maintain an adequate production of mrna to trigger silencing . in our model , the number of trangenes and the transcription rate of mrna are qualitatively interchangeable .hence , as the number of transgenes increases , there is a range of transgenic copies for which the system is bi - stable , exhibiting steady states with a high ( ) and low mrna concentrations , where describes a silenced state . for higher values of ,only the steady state is feasible and stable , suggesting that a sustained state of gene silencing is achieved . from a biological perspective, it is very interesting and important to note that in the bi - stable region , it is not only the parameters , but also the initial conditions that determine whether rna silencing occurs .this implies that the dosage of dsrna , which initialises the rna interference mechanism , as well as the current levels of mrna and garbage rna within the cell , determine the evolution of the system . in the absence of time delays , a high dosage of dsrna and an initial concentration of mrna or garbage rna results in a silenced steady state . in the casewhen the delays associated with the primed amplification are non - zero , our analysis shows that for specific range of and , both steady states or can lose stability in the bistable region . 
once again , not only the parameters , but also the initial conditions control whether the system will converge to the remaining stable state or will oscillate around the unstable steady state .additionally , in the presence of time delays , one generally requires an even higher initial dosage of dsrna to achieve sustained silencing compared to the non - delayed model .interestingly , oscillations can only happen around the silenced steady state , and when the steady state loses its stability , the system just moves towards a stable steady state .oscillations around biologically correspond to switching between higher and lower concentrations of mrna , implying that at certain moments during time evolution , the exogenous mrna is silenced , and at other times it is not affected by the rnai .it follows that this switching behaviour might have case - specific implications for the phenotypic stability of a species , which most likely depends on the amplitude of oscillations around the silenced steady state .the biological significance of this result lies in the fact that there are cases where even a high initial dosage of dsrna will not always result in a silenced steady state .thus , the augmented model exhibits an enriched dynamical behavior compared to its predecessor which otherwise can only be replicated by different extensions to the core pathway , like the rnase model developed in , which assumes the presence of a specific sirna - degrading rnase with saturating kinetics .an interesting open question is whether the switching behavior could also act as a form of protection against the self - inflicted response to an erroneous distinction of target mrna , and whether periodic silencing can , to some extent , minimise the damage to the host cell .another issue is that the time delays considered in the model are assumed to be discrete , and hence it would be very insightful and relevant from a biological perspective to investigate how stability results for this model would change in the case where the time delays obey some distribution .recent results suggest that distributed delays can in some instances increase , and in others reduce parameter regions where oscillations are suppressed .our future research will look into the effects of distributed time delays on primed amplification in rnai .hannon , g.j .rna interference .nature 418 , 244251 .ketting , r.f . ,fischer , s.e.j . ,bernstein e. , sijen , t. , hannon , g.j . ,plasterk , r.h.a . , 2001 .dicer functions in rna interference and in synthesis of small rna involved in developmental timing in _ c. elegans_. genes & development 15 , 26542659 .mandadi , k.k . ,scholthof , k .- b.g . , 2013 . plant immune responses against viruses : how does a virus cause disease ?plant cell 25 ( 5 ) , 14891505 .agius , c. , eamens , a. l. , millar , a. a. , watson , j. m. , wang , m .- b . , 2001 .rna silencing and antiviral defense in plants .methods in molecular biology ( clifton , n.j . ) , 894 , 1738 .sharma , n. , sahu , p.p ., puranik , s. , prasad , m. , 2013 .recent advances in plant - virus interaction with emphasis on small interfering rnas ( sirnas ) . molecular biotechnology 55 ( 1 ) , 6377 .escobar , m.a ., dandekar , a.m. , 2003 .post - transcriptional gene silencing in plants , in : barciszewski , j. , erdmann , v.a .( eds . ) , non - coding rnas : molecular biology and molecular medicine , kluwer , new york , 129140 .elbashir , s.m . ,lendeckel , w. , tuschl , t. 
, 2001 .rna interference is mediated by 21- and 22-nucleotide rnas .genes & development 15 , 188200 .palauqui , j .- c ., elmayan , t. , pollien , j .- m . , vaucheret , h. , 1997 .systemic acquired silencing : transgene - specific post - transcriptional silencing is transmitted by grafting from silenced stocks to non - silenced scions . the embo journal 16 ( 15 ) , 47384745 .melnyk , c.w . ,molnar , a. , baulcombe , d.c . ,intercellular and systemic movement of rna silencing signals .the embo journal 30 ( 17 ) , 35533563 .zhang , c. , ruvkun , g. , 2012 .new insights into sirna amplification and rnai .rna biology 9 ( 8) , 10451049 .sijen , t. , fleenor , j. , simmer , f. , thijssen , k.l . , parrish , s. , timmons , l. , plasterk , r.h.a . , fire , a. , 2001 . on the role of rna amplification in dsrna - triggered gene silencing .cell 107 ( 4 ) , 465476 .lipardi , c. , wei , q. , paterson , b.m . , ( 2001 ) .rnai as random degradative pcr : sirna primers convert mrna into dsrnas that are degraded to generate new sirnas . cell 107 ( 3 ) , 297307 .makeyev , e.v . ,bamford , d.h ., 2002 . cellular rna - dependent rna polymerase involved in posttranscriptional gene silencinghas two distinct activity modes .molecular cell 10 ( 6 ) , 14171427 .giordano , e. , rendina , r. , peluso , i. , furia , m. , 2002 .rnai triggered by symmetrically transcribed transgenes in _drosophila melanogaster_. genetics 160 ( 2 ) , 637648 .pak , j. , maniar , j.m . , mello , c.c ., fire , a. , 2012 . protection from feed - forward amplification in an amplified rnai mechanism .cell 151 ( 4 ) , 885899 .bergstrom , c.t . ,mckittrick , e. , antia , r. , 2003 .mathematical models of rna silencing : unidirectional amplification limits accidental self - directed reactions .proceedings of the national academy of sciences of the usa 100 ( 20 ) , 1151111516 .groenenboom , m.a.c . ,mare , a.f.m . ,hogeweg , p. , 2005 .the rna silencing pathway : the bits and pieces that matter .plos computational biology 1 ( 2 ) , e21 .groenenboom , m.a.c . ,hogeweg , p. , 2008 . the dynamics and efficacy of antiviral rna silencing : a model study .bmc systems biology 2 , 28 .napoli , c. , lemieux , c. , jorgensen , r. , 1990 .introduction of a chimeric chalcone synthase gene into petunia results in reversible co - suppression of homologous genes in trans _ in trans_. plant cell 2 ( 4 ) , 279289 .smith , h.l . , 1995 .monotone dynamical systems : an introduction to the theory of competitive and cooperative systems .american mathematical society , providence .ruan , s. , wei , j. , 2001 . on the zeros of a third degree exponential polynomial with applications to a delayed model for the control of testosterone secretion . mathematical medicine and biology 18 ( 1 ) , 4152 .gu , k. , niculescu , s .-i . , chen , j. , 2005 .on stability crossing curves for general systems with two delays .journal of mathematical analysis and applications 311 ( 1 ) , 231253 .liang , d. , white , r.g . ,waterhouse , p.m. , 2012 .gene silencing in arabidopsis spreads from the root to the shoot , through a gating barrier , by template - dependent , nonvascular , cell - to - cell movement .plant physiology 159 ( 3 ) , 9841000 .dalmay , t. , hamilton , a. , rudd , s. , angell , s. , baulcombe , d.c .an rna - dependent rna polymerase gene in arabidopsis is required for posttranscriptional gene silencing mediated by a transgene but not by a virus .cell 101 ( 5 ) , 543553 .caplen , n.j . , parrish , s. , imani , f. , fire , a. , morgan , r.a ., 2001 . 
specific inhibition of gene expression by small double - stranded rnas in invertebrate and vertebrate systems .proceedings of the national academy of sciences of the usa 98 ( 17 ) , 97429747 .forrest , e.c . ,cogoni , c. , macino , g. , 2004 .the rna - dependent rna polymerase , qde-1 , is a rate - limiting factor in post - transcriptional gene silencing in _neurospora crassa_. nucleic acids research 32 ( 7 ) , 2123 - 2128 . and the number of transgenes , with , and the rest of the parameter values taken from table [ tab : param ] .the bottom row shows ] for the steady state with a low concentration of mrna depending on the two time delays and associated with primed amplification , with the rest of the parameter values taken from table [ tab : param ] . in the regions where is stable , the system is actually bi - stable , as the steady state with a high mrna concentration is also stable . ] , and with parameter values from table [ tab : param ] .the red and cyan lines denote the regions where the steady states with a low ( ) and high ( ) levels of mrna are stable , respectively .the black line signifies the steady state with a medium concentration of mrna which is always unstable .the violet and light - brown lines denote the regions where the steady states and are unstable , respectively . ] ) .( a ) stable steady state for , .( b ) periodic oscillations around the steady state for , .( c ) transient oscillations settling on a stable steady state for , and .other parameter values are taken from table [ tab : param ] . ]
|
rna interference ( rnai ) is a fundamental cellular process that inhibits gene expression through cleavage and destruction of target mrna . it is responsible for a number of important intracellular functions , from being the first line of immune defence against pathogens to regulating development and morphogenesis . in this paper we consider a mathematical model of rnai with particular emphasis on time delays associated with two aspects of primed amplification : binding of sirna to aberrant rna , and binding of sirna to mrna , both of which result in the expanded production of dsrna responsible for rna silencing . analytical and numerical stability analyses are performed to identify regions of stability of different steady states and to determine conditions on parameters that lead to instability . our results suggest that while the original model without time delays exhibits a bi - stability due to the presence of a hysteresis loop , under the influence of time delays , one of the two steady states with the high ( default ) or small ( silenced ) concentration of mrna can actually lose its stability via a hopf bifurcation . this leads to the co - existence of a stable steady state and a stable periodic orbit , which has a profound effect on the dynamics of the system .
|
content based image retrieval ( cbir ) has been an active research topic in computer vision and multimedia in the last decades , and it is still very relevant due to the emergence of social networks and the creation of web - scale image databases .most of the works have addressed the development of effective visual features , from engineered features like sift and gist to , more recently , learned features such as cnns . to obtain scalable cbir systems features are typically compressed or hashed , to reduce their dimensionality and size .however , research on data structures that can efficiently index these descriptors has attracted less attention , and typically simple inverted files ( e.g. implemented as hash tables ) are used . in this paperwe address the problem of approximate nearest neighbor ( ann ) image retrieval proposing a simple and effective data structure that can greatly reduce the need to perform any comparison between the descriptor of the query and those of the database , when the probability of a match is very low . considering the proverbial problem of finding a needle in a haystack , the proposed system is able to tell when the haystack probably contains no needle and thus the search can be avoided completely . to achieve thiswe propose a novel variation of an effective hashing method for cnn descriptors , and use this code to perform ann retrieval in a database .to perform an immediate rejection of a search that should not return any result we store the hash code in a bloom filter , i.e. a space efficient probabilistic data structure that is used to test the presence of an element in a set . to the best of our knowledgethis is the first time that this data structure has been proposed for image retrieval since , natively , it has no facility to handle approximate queries .we perform extensive experimental validation on three standard datasets , showing how the proposed hashing method improves over state - of - the - art methods , and how the data structure greatly improves computational cost and makes the system suitable for application to mobile devices and distributed image databases ._ * visual features . * _ sift descriptors have been successfully used for many years to perform cbir .features have been aggregated using bag - of - visual - words and , with improved performance , using vlad and fisher vectors .the recent success of cnns for image classification tasks has suggested their use also for image retrieval tasks .et al . _ have proposed the use of different layers of cnns as features , compressing them with pca to reduce their dimensionality , and obtaining results comparable with state - of - the - art approaches based on sift and fisher vectors .aggregation of local cnn features using vlad has been proposed in , while fisher vectors computed on cnn features of objectness window proposals have been used in . _* hashing . * _ one of the most successful visual feature hashing methods presented in the literature is product quantization ( pq ) , proposed by jgou __ . 
in this methodthe feature space is decomposed into a cartesian product of subspaces with lower dimensionality , that are quantized separately .the method has obtained state - of - the - art results on a large scale sift and gist features dataset .the good performance of the product quantization method has led to development of several related methods that introduce variations and improvements .norouzi and fleet have built two variations of k - means ( orthogonal k - means and cartesian k - means ) upon the idea of compositionality of the pq approach .et al . _ have improved pq minimizing quantization distortions w.r.t .space decomposition and quantization codebooks , in their opq method ; he _ et al . _ have approximated the euclidean distance between codewords in k - means method , proposing an affinity - preserving technique .more recently , kalantidis and avrithis have proposed to use a local optimization over a rotation and a space decomposition , applying a parametric solution that assumes a normal distribution , in their vector quantization method ( lopq ) .most of recent approaches for cnn features hashing are based on simultaneous learning of image features and hash functions as in the method of gao _ et al . _ , that uses visual and label information to learn a relative similarity graph , to reflect more precisely the relationship among training data .unsupervised two steps hashing of cnn features has been proposed by lin _et al . _ . in the first stepstacked restricted boltzmann machines learn binary embedding functions , then fine tuning is performed to retain the metric properties of the original feature space . _* indexing . *_ typically hashed features are stored in inverted files .a few works have studied other data structures to speed up approximate nearest neighbors .babenko and lempitsky have proposed an efficient similarity search method that generalizes the inverted index ; the method , called inverted multi - index ( multi - d - adc ) , replaces vector quantization inside inverted indices with product quantization , and builds the multi - index as a multi - dimensional table .et al . _ have proposed an hashing method that improves over pq by performing multiple assignments to k - means centroids , and have stored the hash codes in marisa tries to greatly compress their storage ._ * bloom filter . * _ bloom filter and its many variants have received an extremely limited attention from the vision and multimedia community , so far .inoue and kise have used bloomier filters ( i.e. 
an associative array of bloom filters ) to store pca - sift features of an objects dataset more efficiently than using an hash table ; they perform object recognition by counting how many features stored in the filters are associated with an object .bloom filter has been used by danielsson as feature descriptor for matching keypoints .similarity of descriptors is evaluated using the union " operator .srijan and jawahar have proposed to use bloom filters to store compactly the descriptors of an image , and use the filter as postings of an inverted file index in .in the proposed approach , differently from , we learn a vector quantizer separately from the cnn features , so to easily replace different and pre - trained cnn networks for feature extraction , without need of retraining .moreover , we propose to include bloom filters into feature indexing structures to improve the speed of queries .bloom filters act as gatekeepers that rule out immediately , with a very limited memory cost , if a query should be completely performed or if it can be avoided .the proposed data structure is very suitable for mobile and distributed applications .the proposed approach is a variation of , which is an efficient method for mobile visual search based on a multiple assignment k - means hashing schema ( _ multi - k - means _ ) that obtained very good results , compared to pq , on the bigann dataset .the first step of the method consists in learning a standard k - means dictionary with a small number of centroids ( to maintain a low computational cost ) .each centroid is associated to a bit of the hash code , that has thus length equal to the number of centroids .the bit is set to 1 if the feature is assigned to the centroid , 0 otherwise .a feature can be assigned to more than one centroid , and it is assigned to it if the distance from the centroid is less than the mean distance from all the centroids ( figure [ fig : bin - method ] , top ) . instead , in this work we select a fixed number of distances and we set to 1 all the bits associated to the smaller distances ( figure [ fig : bin - method ] , bottom ) . in the following we refer to this method as minx .this change has proven to be more efficient when coding cnn feature descriptors , that were used in the experiments .( minx method ) . ]approximate nearest neighbor retrieval of image descriptors is performed in two steps : in the first step is performed an exhaustive search over the binary codes using hamming distances , to reduce negative effects of quantization errors .all the binary codes with hamming distance below a threshold are selected . in the second stepthe candidate neighbors are ranked according to the distance computed using the full feature vector using _ cosine distance _ , that proved to be more effective than during the experiments . to improve search of feature vectorswe also introduce the use of _ bloom filters _ . typically this type of structures are used to speed up the answers in a key - based storage system ( figure [ fig : bloom_filters ] ) .a bloom filter is an efficient probabilistic data structure used to test if an element belongs to a set or not .this structure works with binary signatures , and can provide false positive response but not false negative and more elements are inserted into the structure and more high is the probability to obtain a false positive . to insert an element inside a bloom filter we need to define hash functions which locate positions inside the array , setting them to 1 . 
to check the presence of an element inside a bloom filter we need to compute the hash functions over the element and check the related positions inside the array . if just one bit at these positions is equal to 0 it means that the element is not present inside the array ; if all the checked bits are equal to 1 it means that either the element is inside the array or we have a false positive . we used the method of to create the functions from just two hash functions . a useful property of the bloom filter is that the false positive probability can be estimated as $p \approx ( 1 - e^{-kn / m } )^k$ , where $m$ is the bit number of the array , $n$ is the number of inserted items , $e^{-kn / m}$ approximates the probability that one position of the array is still equal to 0 , and $k$ is the number of hash functions . for given $m$ and $n$ we can obtain the optimal value $k = ( m / n ) \ln 2$ which minimizes the false positive probability , for which $p = ( 1/2 )^k$ ; writing this out , the required array size for a target $p$ is $m = -n \ln p / ( \ln 2 )^2$ , so $m$ is strictly related to $n$ , and in general about ten bits per inserted element ( giving $p \approx 1\%$ with the optimal $k$ ) is considered a good compromise . storing in the bloom filter hash codes that are designed for ann , as those of sect . [ sec : quantization - method ] , results in a data structure that is similar , from a practical point of view , to distance - sensitive bloom filters proposed in , where lsh functions are used as hash functions . our proposed retrieval system merges the methods introduced in [ sec : quantization - method ] and [ sec : bloom - method ] . regarding visual feature hashing , we have applied the proposed method to cnn features . our system ( figure [ fig : system ] ) includes an initial phase where descriptors are extracted from the base images , binarized following one of the methods introduced in [ sec : quantization - method ] , and saved inside a data structure composed of a set of inverted files of hashes implementing a horizontal partition of the data ( allowing the database to be distributed as `` shards '' ) , each one guarded by a bloom filter . the hash code is also added to the bloom filter of the corresponding inverted file . during the search phase we extract the cnn descriptor of each query image , compute the hash code , and check the presence of the hash in the bloom filters , each of which guards a subset of the base . if one of these bloom filters gives a positive response ( meaning that we have either a true or a false positive match ) , all the hash codes within a hamming distance threshold are used to select the corresponding full feature vectors . this provides a great speedup in the approximate nearest neighbor retrieval , since we consider only descriptors of the base that are coded by a bloom filter and that lie below the hamming threshold value . for each resulting original cnn descriptor we compute the ( cosine ) distance and rearrange the results to obtain a ranked list of vectors . we tested our system using three standard datasets : inria holidays , oxford 5k and paris 6k . we used the query images and ground truth provided for each dataset , adding 100,000 distractor images from _ flickr 100 _ . when testing on a dataset , training is performed using the other two datasets . features have been hashed to 64 - bit binary codes , a length that has proved to be the best compromise between compactness and representativeness . other parameters used for hashing were the number of nearest distances used in the hash code computation and the hamming distance threshold . for the sake of brevity , in the following we report only the best combinations .
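before turning to the results , the following sketch illustrates the two ingredients described above : the minx binarisation of a descriptor ( assuming that the k - means centroids have already been learned offline , with 64 centroids giving 64 - bit codes ) and a plain bloom filter sized with the standard formulas and probed with the double - hashing construction of the positions . this is a simplified illustration under those assumptions , not the implementation evaluated in the experiments ; the function and class names are ours .

import math, hashlib
import numpy as np

def minx_code(feature, centroids, x=6):
    # set to 1 the bits of the x centroids closest to the feature;
    # with 64 centroids this yields a 64-bit binary signature (min6 above)
    d = np.linalg.norm(centroids - feature, axis=1)
    code = np.zeros(len(centroids), dtype=np.uint8)
    code[np.argsort(d)[:x]] = 1
    return code.tobytes()   # used below as the key stored in the Bloom filter

class BloomFilter:
    def __init__(self, n_items, p_false=0.01):
        # standard sizing: m = -n ln p / (ln 2)^2 bits, k = (m / n) ln 2 hashes
        self.m = max(1, int(round(-n_items * math.log(p_false) / math.log(2) ** 2)))
        self.k = max(1, int(round(self.m / n_items * math.log(2))))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, key):
        # k probe positions from two base hashes: g_i = (h1 + i * h2) mod m
        digest = hashlib.sha256(key).digest()
        h1 = int.from_bytes(digest[:8], "little")
        h2 = int.from_bytes(digest[8:16], "little") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means "certainly absent"; True means "present or false positive"
        return all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(key))

at query time the code of the query descriptor is probed against the filter of each shard , and the hamming search followed by the cosine re - ranking is run only for the shards whose filter answers positively .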
for the evaluation we used the mean average precision ( map ) metric .the cnn features used in the following experiments have been extracted using the 1024d average pooling layer of googlenet , that in initial experiments has proven to be more effective than the fc7 layer of vgg used in . in the first experimentwe evaluate the effects of the method parameters , comparing the proposed hashing approach ( minx ) with the original method of ( mean ) , a baseline that uses no hashing , and several state - of - the - art methods , among which the recent uth method .the best combinations of minx are reported , compared on the three datasets in terms of map . as expectedthe uncompressed features perform better , but the min6 setup , with an hamming distance has comparable results , and greatly outperforms any state - of - the - art hashing method .time results in seconds , for inria holidays dataset , are reported in fig .[ fig : inria - time ] . a speedup can be obtained with hamming distances between 6 and 10 .similar results , not reported here for the sake of brevity , have been obtained on oxford 5k and paris 6k datasets . in the second experimentwe evaluate a use case in which a database of images is queried with a large number of images that do not belong to it .hash codes have been computed with different variants of the proposed hashing method .the database contains the paris 6k images , and it is queried with all the query images of paris 6k and all the 100,000 distractor images . a different number of bloom filters , with different sizes is tested and compared against a baseline that does not use any bloom filter . map values and query time in seconds are reported in tab .[ tab : map - bf - paris6k ] ( map in the first row and time in the second ) .the speedup obtained is about since a large number of distractor queries are immediately stopped by the system ; the slight increase in map is due to the beneficial effect of elimination of some false positives of the paris 6k images , that do not result in retrieving wrong dataset images . in the third experimentwe evaluate a more challenging and large scale experiment : three datasets composed by distractor images and holidays , paris 6k and oxford 5k images are built and stored in the proposed data structure .the standard dataset query images are then used to query the system . in this casewe have used 10 filters to shard " the database that , thus , can be distributed . tab . [tab : map - bf - needle ] reports the results in terms of map and time ( secs . ) . for the sake of space we report only results for min6 and hamming threshold 10 . using the proposed method results in speed improvement of while improving map , except the holidays dataset that only improves speed .the size of each bloom filter is kb , allowing the use of the method in a mobile environment , by distributing the bloom filters to the mobile devices and maintaining the shards of the database on the backend .in this paper we have presented a simple and effective method for cnn feature hashing that outperforms current state - of - the - art methods on standard datasets .a novel indexing structure , where bloom filters are used as gatekeepers to inverted files storing the hash codes , results in a speedup for ann , without loss in map .
|
this paper presents a novel method for efficient image retrieval , based on a simple and effective hashing of cnn features and on an indexing structure built around bloom filters . these filters act as gatekeepers for the database of image features , making it possible to skip a query altogether when the query features are not stored in the database , and thus speeding up the query process without affecting retrieval performance . thanks to its limited memory requirements the system is suitable for mobile applications and distributed databases , with each filter associated with a distributed portion of the database . experimental validation has been performed on three standard image retrieval datasets , outperforming state - of - the - art hashing methods in terms of precision , while the proposed indexing method obtains a speedup .
|
networks , core periphery structure , shortest - path algorithms , low - rank matrix approximations , graph laplacians , spectral algorithms .network science has grown explosively during the past two decades , and myriad new journal articles on network science appear every year .one focal area in the networks literature is the development and analysis of algorithms for detecting local , mesoscale , and global structures in various types of networks .mesoscale features are particularly interesting , as they arise neither at the local scale of vertices ( i.e. , nodes ) and edges nor at the global scale of summary statistics . in the present paper, we contribute to research on mesoscale network structures by developing and analyzing new ( and computationally - effecient ) algorithms for detecting a feature known as _ core periphery structure _ , which consists of densely - connected _ core _ vertices and sparsely - connected _ peripheral _ vertices .the importance of the investigation of mesoscale network structures is widely acknowledged , but almost all of the research on this topic concerns a specific type of feature known as _ community structure_. in studying community structure , one typically employs some algorithm to detect sets of vertices called _ communities _ that consist of vertices that are densely connected to each other , such that the connection density between vertices from different communities is comparatively sparse . a diverse array of methods exist , and they have been applied to areas , such as committee networks in political science , friendship networks , protein protein interaction networks , functional brain networks , and mobile phone networks .popular methods include the optmization of a quality function called `` modularity '' , spectral partitioning , dynamical approaches based on random walkers or other dynamical systems , local methods such as -clique percolation , and more .most community - detection methods require a vertex to belong to a distinct community , but several methods also allow the detection of overlapping communities ( see , e.g. , ) .core periphery structure is a mesoscale feature that is rather different from community structure .the main difference is that core vertices are well - connected to peripheral vertices , whereas the standard perspective on community structure views communities as nearly decomposable modules ( which leads to trying to find the best block - diagonal fit to a network s adjacency matrix ) .core periphery structure and community structure are thus represented by different types of block models .the quantitative investigation of core periphery structure has a reasonably long history , and qualitative notions of core periphery structure have long been considered in fields such as international relations , sociology , and economics ( and have been examined more recently in applications such as neuroscience , transportation , and faculty movements in academia ) , but the study of core periphery structure remains poorly developed especially in comparison to the study of community structure .most investigations of core periphery structure tend to use the perspective that a network s adjacency matrix has an intrinsic block structure ( which is different from the block structure from community structure ) .very recently , for example , ref . identified core periphery structure by fitting a stochastic block model ( sbm ) to empirical network data using a maximum likelihood method , and the sbm approach in ref . 
can also be used to study core periphery structure .importantly , it is possible to think of core periphery structure using a wealth of different perspectives , such as overlapping communities , -cores , network capacity , and random walks .the notion of `` nestedness '' from ecology is also related to core periphery structure , although establishing an explicit and direct connection between these ideas appears to be an open problem .the main contribution of the present paper is the development of novel algorithms for detecting core periphery structure .our aim is to develop algorithms that are both computationally efficient and robust to high levels of noise in data , as such data can lead to a blurry separation between core vertices and peripheral vertices .the rest of this paper is organized as follows . in section [ sec : coreperintro ] , we give an introduction to the notion of core periphery structure and briefly survey a few of the existing methods to detect such structure . in section [ sec : pathcore ] , we introduce the path - core method , which is based on computing shortest paths between vertices of a network , for detecting core periphery structure .in section [ sec : objfuncsync ] , we introduce an objective function for detecting core periphery structure that leverages our proposed algorithms and helps in the classification of vertices into a core set and periphery set . in section [ sec : rank2 ] , we propose the spectral method lowrank - core , which detects core periphery structure by considering the adjacency matrix of a network as a low - rank perturbation matrix . in section[ sec : laplacian ] , we investigate two laplacian - based methods ( lap - core and lapsgn - core ) for computing core periphery structure in a network , and we discuss related work in community detection that uses a similar approach . in section [ sec : numsims ] , we compare the results of applying the above algorithms using several synthetically - generated networks and real - world networks . finally , we summarize and discuss our results in section [ sec : future ] , and we also discuss several open problems and potential applications . in appendix 1 , we detail the steps of our proposed path - core algorithm for computing the path - core scores , and we include an analysis of its computational complexity . in appendix 2 , we discuss the spectrum of the random - walk laplacian of a graph and ( of the random - walk laplacian of its complement ) . in appendix 3, we detail an experiment with artificially planted high - degree peripheral vertices that illustrates the sensitivity of a degree - based method ( which we call degree - core and which uses vertex degree as a proxy to measure coreness ) to such outlier vertices . finally ,in appendix 4 , we calculate spearman and pearson correlation coefficients between the coreness scores that we obtain from the different methods across several real - world networks .the best - known quantitative treatment of core periphery structure was introduced by borgatti and everett , who developed algorithms for detecting discrete and continuous versions of core periphery structure in weighted , undirected networks .( for the rest of the present paper , note that we will use the terms `` network '' and `` graph '' interchangeably . 
)their discrete methods start by comparing a network to an ideal block matrix in which the core is fully connected , the periphery has no internal edges , and the periphery is well - connected to the core .borgatti and everett s main algorithm for finding a discrete core periphery structure assigns each vertex either to a single `` core '' set of vertices or to a single `` periphery '' set of vertices .one seeks a vector of length whose entries are either or , depending on whether or not the associated vertex has been assigned to the core ( ) or periphery ( ) .we let if ( i.e. , vertex is assigned to the core ) or ( i.e. , vertex is assigned to the core ) , and we otherwise let ( because neither nor are assigned to the core ) . we define , where ( with elements ) is the adjacency matrix of the ( possibly weighted ) network . borgatti and everett s algorithm searches for a value of that is high compared to the expected value of if is shuffled such that the number of and entries is preserved but their order is randomized .the final output of the method is the vector that gives the highest -score for . in a variant algorithm for detecting discrete core periphery structure , borgatti and everettstill let if both and are equal to and let if neither nor are assigned to the core , but they now let $ ] if either or ( but not both ) . to detect a continuous core periphery structure , borgatti and everett assigned a vertex a core value of andlet .a recent method that builds on the continuous notion of core periphery structure from was proposed in .it calculates a core - score for weighted , undirected networks ; and it has been applied ( and compared to community structure ) in the investigation of functional brain networks .the method of core periphery detection in the popular network - analysis software ucinet uses the so - called _ minimum residual _ ( minres ) method , which is a technique for factor analysis .one uses factor analysis to describe observed correlations between variables in terms of a smaller number of unobserved variables called the `` factors '' .minres aims to find a vector that minimizes where for all vertices .one ignores the diagonal elements of the network s adjacency matrix .additionally , because is symmetric , this method works best for undirected networks . for directed networks, one can complement the results of minres with a method based on a singular value decomposition ( svd ) . in practice ,ucinet reports . in ref . , yang and leskovec argued that core periphery structure can arise as a consequence of community structure with overlapping communities .they presented a so - called _ community - affiliation graph model _ to capture dense overlaps between communities . in their approach ,the likelihood that two vertices are adjacent to each other is proportional to the number of communities in which they have shared membership .della rossa et al .recently proposed a method for detecting a continuous core periphery profile of a ( weighted ) network by studying the behavior of a random walker on a network .approaches based on random walks and other markov processes have often been employed in the investigation of community structure , and it seems reasonable to examine them for other mesocale structures as well .very recently , ref . 
identified core periphery structure by fitting a stochastic block model to empirical network data using a maximum likelihood method .the review article discusses several other methods to detect core periphery structure in networks .in transportation systems , some locations and routes are much more important than others .this motivates the idea of developing notions of core periphery structure that are based on transportation . in this section ,we restrict our attention to undirected and unweighted networks , although we have also examined transport - based core periphery structure in empirical weighted and directed networks . the first transport - based algorithm that we propose for detecting core periphery structureis reminiscent of _ betweenness centrality _( bc ) in networks .one seeks to measure the extent to which a vertex controls information that flows through a network by counting the number of shortest paths ( i.e. , `` geodesic '' paths ) on which the vertex lies between pairs of other vertices in the network .geodesic vertex betweenness centrality is defined as where is the number of different shortest paths ( i.e. , the `` path count '' ) from vertex to vertex , and is the number of such paths that include vertex .our approach also develops a scoring methodology for vertices that is based on computing shortest paths in a network .such a score reflects the likelihood that a given vertex is part of a network s core .instead of considering shortest paths between all pairs of vertices in a network , we consider shortest paths between pairs of vertices that share an edge _ when that edge is excluded from the network_. more precisely , we calculate where and are defined , respectively , as the path counts and in the graph , and denotes the edge set induced by the vertex set . the network denotes the subgraph of that one obtains by removing the edge .alternatively , one can define the path - core score of a vertex as the betweenness centrality of this vertex when considering paths only between pairs of adjacent vertices and , but for which the edge incident to the two vertices is discarded .in passing , we mention the approach by valente and fujimoto for deriving measures for `` bridging '' in networks based on the observation that edges that reduce distances in a network are important structural bridges . in their measure , for which they used a modification of closeness centrality , one systematically deletes edges and measures changes in the resulting mean path lengths . see also the recent paper about bridging centrality . in section [ sec3.1 ], we explain the intuition behind the proposed path - core algorithm , and we examine its performance on several synthetic networks . in section [ sec3.2 ] ,we comment on a randomized version of the path - core algorithm that samples a subset of edges in a graph and computes shortest paths only between the endpoints of the associated vertices .let be a graph with a vertex set of size ( i.e. , there are vertices ) and an edge set of size .the set of core vertices is ( and its size is ) for the size of the core set . ] , and the set of peripheral vertices is ( and its size is ) .suppose that a network ( i.e. , a graph ) contains exactly one core set and exactly one peripheral set , and that these sets are disjoint : and .the goal of the path - core algorithm is to compute a score for each vertex in the graph that reflects the likelihood that that vertex belongs to the core . 
in other words ,high - scoring vertices have a high probability of being in the core , and low - scoring vertices have a high probability of being in the periphery . throughout the paper ,we use the term `` path - core scores '' to indicate the scores that we associate with a network s vertices by using the path - core algorithm .we illustrate our methodology in the context of a generalized block model , such as the one in table [ tab : generalblockmodel ] , where the submatrices , , and represent the interactions between a pair of core vertices , a core vertex and a peripheral vertex , and a pair of peripheral vertices , respectively .suppose that and are adjacency matrices that we construct using the erds - rnyi random graph models on vertices , recall that an edge is present between each pair of vertices independently with probability .] and , respectively , and that the adjacency matrix of a random bipartite graph in which each edge that is incident to both a core and peripheral vertex is present with independent probability . in the context of the above block model ,core periphery structure arises naturally in instances of the above ensemble for which or .the above family of random networks , which we denote by , was also considered in ref .it contains exactly one set of core vertices , and the remaining vertices are peripheral vertices .more complicated core periphery structures can also occur , such as a mix of ( possibly hierarchical ) community structures and core periphery structures .
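as a concrete illustration of the ideas above , the sketch below first samples a network from the three - block ensemble just described ( erdos - renyi blocks for core - core , core - periphery and periphery - periphery edges ; the probabilities used here are illustrative choices only ) and then computes path - core scores by removing each edge in turn , enumerating the shortest paths between its endpoints in the remaining graph , and crediting each interior vertex with its fraction of those paths . this is a direct , unoptimised rendering of the definition rather than the algorithm analysed in appendix 1 .

import itertools, random
from collections import Counter
import networkx as nx

def core_periphery_graph(n_core, n_per, p11, p01, p00, seed=0):
    # three Erdos-Renyi blocks: core-core (p11), core-periphery (p01),
    # periphery-periphery (p00); core-periphery structure needs p00 small
    rng = random.Random(seed)
    core = range(n_core)
    periphery = range(n_core, n_core + n_per)
    G = nx.Graph()
    G.add_nodes_from(core)
    G.add_nodes_from(periphery)
    for u, v in itertools.combinations(core, 2):
        if rng.random() < p11:
            G.add_edge(u, v)
    for u in core:
        for v in periphery:
            if rng.random() < p01:
                G.add_edge(u, v)
    for u, v in itertools.combinations(periphery, 2):
        if rng.random() < p00:
            G.add_edge(u, v)
    return G, set(core)

def path_core_scores(G):
    # for every edge (i, j): drop it, enumerate all shortest i-j paths in the
    # remaining graph, and credit each interior vertex of those paths with
    # its fraction of the path count
    scores = dict.fromkeys(G.nodes(), 0.0)
    for i, j in list(G.edges()):
        G.remove_edge(i, j)
        try:
            paths = list(nx.all_shortest_paths(G, i, j))
        except nx.NetworkXNoPath:
            paths = []
        G.add_edge(i, j)
        if not paths:
            continue
        through = Counter(v for p in paths for v in p[1:-1])
        for v, c in through.items():
            scores[v] += c / len(paths)
    return scores

G, true_core = core_periphery_graph(20, 80, p11=0.5, p01=0.5, p00=0.05)
scores = path_core_scores(G)
ranked = sorted(scores, key=scores.get, reverse=True)  # core vertices should rank first

on such synthetic instances the vertices of the planted core should dominate the top of the ranking , and the objective function of section [ sec : objfuncsync ] can then be used to cut the ranked list into a core set and a periphery set .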
|
we introduce several novel and computationally efficient methods for detecting `` core periphery structure '' in networks . core periphery structure is a type of mesoscale structure that includes densely - connected core vertices and sparsely - connected peripheral vertices . core vertices tend to be well - connected both among themselves and to peripheral vertices , which tend not to be well - connected to other vertices . our first method , which is based on transportation in networks , aggregates information from many geodesic paths in a network and yields a score for each vertex that reflects the likelihood that vertex is a core vertex . our second method is based on a low - rank approximation of a network s adjacency matrix , which can often be expressed as a tensor - product matrix . our third approach uses the bottom eigenvector of the random - walk laplacian to infer a coreness score and a classification into core and peripheral vertices . additionally , we design an objective function to ( 1 ) help classify vertices into core or peripheral vertices and ( 2 ) provide a goodness - of - fit criterion for classifications into core versus peripheral vertices . to examine the performance of our methods , we apply our algorithms to both synthetically - generated networks and a variety of real - world data sets .
|
most data in contemporary science are the product of increasingly complex computations and procedures applied on the fast increasing flows of raw information coming from more and more sophisticated measurement devices ( the `` measurements '' ) , or from growingly detailed numeric simulations - e.g. pattern recognition , calibration , selection , data mining , noise reduction , filtering , estimation of parameters etc .high energy physics and many other sciences are increasingly cpu and data intensive .in fact , many new problems can only be addressed at the high data volume frontier . in this context , not only data analysis transformations , but also the detailed log of how those transformations were applied , become a vital intellectual resource of the scientific community .the collaborative processes of these ever - larger groups require new approaches and tools enabling the efficient sharing of knowledge and data across a geographically distributed and diverse environment . hereis where the concept of virtual data is bound to play a central role in the scientific analysis process .we will explore this concept using as a case study the coming generation of high energy physics ( hep ) experiments at the large hadron collider ( lhc ) , under construction at the european laboratory for particle physics cern close to geneva , switzerland .this choice is motivated by the unprecedented amount of data ( from petabytes to exabytes ) and the scale of the collaborations that will analyze it ( four worldwide collaborations , the biggest two with more than two thousand scientists each ) . at the same time , the problems to be solved are general and will promote scientific discoveries in different disciplines , enhance business processes and improve security .the challenge facing hep is a major driving force for new developments in computing , e.g. the grid .the computing landscape today is marked by the rise of grid and web services and service oriented architectures ( soa ) .an event - driven soa is well suited for data analysis and very adaptable to evolution and change over time .in our project we will explore and adopt service oriented solutions as they mature and provide the performance needed to meet mission - critical requirements .this paper is organized as follows : in the next section we introduce the concept of virtual data , than we discuss the issues arising when dealing with data equivalence , describe how data analysis is done in hep , digress with a metaphor , elucidate the ideas driving the caves project , continue with a detailed treatment of the caves architecture , sketch the first implementation , discuss the relationship with other grid projects , and conclude with an outlook .the scientific analysis process demands the precise tracking of how data products are to be derived , in order to be able to create and/or recreate them on demand . in thiscontext virtual data are data products with a well defined method of production or reproduction .the concept of `` virtuality '' with respect to existence means that we can define data products that may be produced in the future , as well as record the `` history '' of products that exist now or have existed at some point in the past .the virtual data paradigm logs data provenance by tracking how new data is derived from transformations on other data ._ data provenance _ is the exact history of any existing ( or virtual ) data product. 
often the data products are large datasets , and the management of dataset transformations is critical to the scientific analysis process .we need a `` virtual data management '' tool that can `` re - materialize '' data products that were deleted , generate data products that were defined but never created , regenerate data when data dependencies or algorithms change , and/or create replicas at remote locations when recreation is more efficient than data transfer . from the scientist s point of view , data trackability and result auditability are crucial , as the reproducibility of results is fundamental to the nature of science . to support this need we require and envision something like a `` virtual logbook '' that provides the following capabilities : * easy sharing of tools and data to facilitate collaboration - all data comes complete with a `` recipe '' on how to produce or reproduce it; * individuals can discover in a fast and well defined way other scientists work and build from it ; * different teams can work in a modular , semi - autonomous fashion ; they can reuse previous data / code / results or entire analysis chains ; * on a higher level , systems can be designed for workflow management and performance optimization , including the tedious processes of staging in data from a remote site or recreating it locally on demand ( transparency with respect to location and existence of the data ) .if we delete or accidentally loose a piece of data , having a log of how it came into existence will come in handy . immediatelythe question arises : is the `` new '' chunk of data after reproduction identical to the `` old '' one ?there are two extreme answers to this question : * the two pieces of data are identical bitwise - we are done ; * not only are the two pieces of data not identical bitwise , but they contain different information from the viewpoint of the application using them .the second point needs discussion : clearly two chunks of data can be `` identical enough '' for some types of applications and different for other types .each application has to define some `` distance '' measure between chunks of data and specify some `` minimal '' distance between chunks below which the pieces are considered identical . in this language bitwise samenesswould correspond to zero distance .let us illustrate this with two examples .in an ideal world , if we generate events with the monte carlo method , starting from the same seeds and using portable random number generators , we should get the same sequence of events everywhere . or if we do monte carlo integration , we should get exactly the same result . in practice , due to floating pointrounding errors , even on systems with processors with the same word length simulations tend to go down different branches , diverging pretty soon .so the results are not guaranteed to be identical bitwise .usually this is not a problem : two monte carlo integrations within the statistical uncertainty are certainly acceptable . andeven two different sequences of events , when their attributes are statistically equivalent ( e.g. histograms of all variables , correlations etc . ) , are good enough for many practical purposes .there are exceptions though : if our code crashes at event 10583 , we would like to be able to reproduce it bitwise .one way to proceed in such a situation is to store the initial random seeds for _ each _ event along with the how - to ( i.e. 
the algorithm and code for producing events ) .then any single divergence will affect at most one event .the second example is analysis of real data . if we are interested in statistical distributions ( histograms , scatter plots , pie charts etc . ) , a `` weak '' equivalence in the statistical sense can be enough .if we are selecting e.g. rare events in a search for new particles , we would like to isolate the same sample each time we run a particular selection on the same input ( `` strong '' equivalence ) .one way to proceed here is to keep the list of selected events along with the how - to of the selection for future verifications .as long as the input sample is available , we will have reproducibility of the selection even if portability is not guaranteed . to sum it up -each application has to define criteria establishing the equivalence of data for its domain .good choice of metadata about a chunk of data ( e.g. a dataset ) can be very useful later when trying to decide if your reproduction is good enough .for instance , if we kept the moments like mean value and standard deviation with their statistical uncertainties from a distribution with millions of events , it will help in determining if our replica is statistically equivalent later .last but not least , an important aspect in recording the data provenance is the level of detail .the result of the execution of the same algorithm with the same input in today s complex software world may depend on environment variables , linked libraries containing different versions of supporting applications , different compilers or levels of optimization etc .when these factors are important , they have to be included in the data provenance log for future use .the high energy physics field is sociologically very interesting .the experimental collaborations have grown from being counted on the fingers of one or two hands in the sixties to around five hundred in the nineties and two thousand today . even in theoretical physics collaborationsare growing with time .that explains why the field was always in the forefront of developing and/or adopting early new collaborative tools , the best known example being of course the invention of the world wide web at cern . at present , the lhc experiments are heavily involved in grid efforts , continuing the tradition , see e.g. .after a high energy physics detector is triggered , the information from the different systems is read and ultimately recorded ( possibly after cleaning , filtering and initial reconstruction ) to mass storage .the high intensity of the lhc beams usually results in more than one interaction taking place simultaneously , so a trigger records the combined response to all particles traversing the detector in the time window when the system is open .the first stages in the data processing are well defined and usually tightly controlled by the teams responsible for reconstruction , calibration , alignment , `` official '' simulation etc .the application of virtual data concepts in this area is discussed e.g. in .here we are interested in the later stages of data processing and analysis , when various teams and individual scientists look at the data from many different angles - refining algorithms , updating calibrations or trying out new approaches , selecting and analyzing a particular data set , estimating parameters etc . , and ultimately producing and publishing physics results . 
even in today s large collaborations this is a decentralized , `` chaotic '' activity , and is expected to grow substantially in complexity and scale for the lhc experiments .decentralization does not mean lack of organization - on the contrary , this will be one of the keys for building successful structures , both from the social and technical points of view .clearly flexible enough systems , able to accommodate a large user base , and use cases not all of which can be foreseen in advance , are needed .many users should be able to work and share their results in parallel , without stepping on each other s toes . herewe explore the benefits that a virtual data system can bring in this vast and dynamic field . moving from production to analysis, the complexity grows fast with the number of users while the average wall and cpu time to complete a typical task goes down , as illustrated in figure [ useranal ] .an intermediate phase are large analysis tasks which require batch mode .the ultimate challenge comes from interactive analyses , where users change their minds often and need fast response times to stay productive .the latency of the underlying systems at this stage is critical .there should be no single point of failure and the system should re - route requests automatically to the next available service .the redundancy should be accompanied by efficient synchronization , so that new results are published fast and made accessible for all interested parties regardless of their geographical location .an important feature of analysis systems is the ability to build scripts and/or executables `` on the fly '' , including user supplied code and parameters . on the contrary, production systems often rely on pre - build applications , distributed in a centralized way from `` officially controlled '' repositories .the user should be in position to modify the inputs on her / his desk(lap)top and request a derived data product , possibly linking with preinstalled libraries on the execution sites .a grid - type system can store large volumes of data at geographically remote locations and provide the necessary computing power for larger tasks .the results are returned to the user or stored and published from the remote site(s ) .an example of this vision is presented in . at each stage in the analysis interesting events can be visualized and plots for all quantities of interest can be produced .what will a virtual data system bring to this picture ?i have in my office a large collection of paper folders for different analyses performed working on various tasks , stored in varied ways .they match with codes stored and archived on different systems .so when a colleague comes in and asks me how i obtained that plot three months ago , i have to sift for some time ( depending on my organization ) through my folders , make a photocopy , then find the corresponding code , make sure it is the `` historically '' right version , and that i wrote down which version of the pattern recognition was used at the time etc . or if i go to my colleague , she will go through similar steps , but her organization will be different and we will possibly exchange information in a different format . or if one of us is on leave things will slow down .so we are recording data provenance , but manually , very often incomplete , and not easily accessible . andclearly this scales poorly for larger and geographically distributed collaborating groups . 
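to contrast with the manual bookkeeping just described , the following sketch shows the kind of machine - readable record that could accompany a single plot so that its provenance can be retrieved automatically . it is written in python for illustration only ; the field names , the json format and the macro name are assumptions of the example and do not represent the actual caves log format .
....
import json
import os
import time

def provenance_record(tag, command, inputs, code_files):
    """a minimal 'how-to' entry for one data product (e.g. a plot):
    what was run, on which inputs, with which code and environment."""
    return {
        "tag": tag,                          # e.g. projectx-stepy-user1
        "date": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user": os.environ.get("USER", "unknown"),
        "command": command,                  # the exact invocation
        "inputs": inputs,                    # datasets that were read
        "code": code_files,                  # macros / programs executed
        # environment details that can change the result, as discussed earlier
        "environment": {k: os.environ.get(k, "") for k in ("PATH", "LD_LIBRARY_PATH")},
    }

record = provenance_record(
    tag="projectx-stepy-user1",
    command="root -b -q 'myplot.C(500)'",    # hypothetical macro call
    inputs=["higgs.root"],                   # placeholder dataset name
    code_files=["myplot.C"],
)
with open(record["tag"] + ".json", "w") as f:
    json.dump(record, f, indent=2)
....
a record of this kind , stored and tagged automatically for every result , is what makes the sharing and auditing described next possible .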
in a `` virtual logbook ''all steps of an analysis , even the blind alleys , can be recorded and retrieved automatically .let us assume that a new member joins the group . even without bugging one s colleagues too often, it will be quite easy to discover exactly what has been done so far for a particular analysis branch , to validate how it was done , and to refine the analysis .a scientist wanting to dig deeper can add a new derived data branch by , for example , applying a more sophisticated selection , and continuing to investigate down the new road .of course , the results of the group can be shared easily with other teams and individuals in the collaboration , working on similar topics , providing or re - using better algorithms etc .the starting of new subjects will profit from the availability of the accumulated experience . at publication timeit will be much easier to perform an accurate audit of the results , and to work with internal referees who may require details of the analysis or additional checks .at the beginning of a new project , a suitable metaphor can be helpful .as we would like to make our `` virtual data logbooks '' persistent , distributed and secure , the following analogy came quite naturally : * a cave is a secure place to store stuff . *usually you need a key to enter .* stuff can be retrieved when needed ( and if the temperature is kept constant , usually in good shape ) . *small caves can be private , larger ones are usually owned by cooperatives . * when a cave is full , a new one is build . * to get something , one starts at the local caves and , if needed , widens the search ... we can go on , but , as we will see in a moment , these are striking similarities with the goals of our project , so caves seemed a peculiarly apt name .the use of metaphors is inspired from the adoption of extreme programming techniques in our project .for their relationship to the programming style in hep see .the collaborative analysis versioning environment system ( caves ) project concentrates on the interactions between users performing data and/or computing intensive analyses on large data sets , as encountered in many contemporary scientific disciplines . in modern science increasingly larger groups of researchers collaborate on a given topic over extended periods of time .the logging and sharing of knowledge about how analyses are performed or how results are obtained is important throughout the lifetime of a project .here is where virtual data concepts play a major role .the ability to seamlessly log , exchange and reproduce results and the methods , algorithms and computer programs used in obtaining them enhances in a qualitative way the level of collaboration in a group or between groups in larger organizations .it makes it easier for newcomers to start being productive almost from day one of their involvement or for referees to audit a result and gain easy access to all the relevant details .also when scientists move on to new endeavors they can leave their expertise in a form easily utilizable by their colleagues .the same is true for archiving the knowledge accumulated in a project for reuse in future undertakings .the caves project takes a pragmatic approach in assessing the needs of a community of scientists by building series of prototypes with increasing sophistication .our goal is to stay close to the end users and listen carefully to their needs at all stages of an analysis task . 
in this way we can develop an architecture able to satisfy the varied requirements of a diverse group of researchers . our main line of development draws on the needs of , but is not limited to , high energy physics experiments , especially the cms collaboration , planning to begin data taking in 2007 at the large hadron collider . the cms experiment will produce large amounts of simulated and real data , reaching tens and hundreds of petabytes . the analysis of datasets of this size , with its distributed nature , by a large community of users is a very challenging task and one of the strongest driving forces for grid computing . the caves project explores and develops these emerging technologies to facilitate the analysis of real and simulated data . we start by analyzing the simulated data from the data challenges of the cms experiment , which will grow in scale and complexity approaching the situation when real data will start to flow . in extending the functionality of existing data analysis packages with virtual data capabilities , we build functioning analysis suites , providing an easy and habitual entry point for researchers to explore virtual data concepts in real life applications , and hence give valuable feedback about their needs , helping to guide the most useful directions for refining the system design . by just adding capabilities in a plug - in style we facilitate the acceptance and ease of use , and thus hope to attract a critical mass of users from different fields in a short time . there is no need to learn yet another programming language , and our goal is simplicity of design , keeping the number of commands and their parameters to the bare minimum needed for rich and useful functionality . the architecture is modular , based on web , grid and other services which can be plugged in as desired . in addition to working in ways considered standard today , the scientists are able to log or checkpoint their work throughout the lifetime of an analysis task . we envisage the ability to create private `` checkpoints '' which can be stored on a local machine and/or on a secure remote server . when a user wants to share some work , he can store the relevant know - how on the group servers accessible to the members of a group working on a given task . this could be a geographically distributed virtual organization . in the case of collaboration between groups or with internal or external referees , portions of this know - how can be made accessible to authorized users , or a shared system of servers can be created as needed . the provenance of results can be recorded at different levels of detail as decided by the users and augmented by annotations . along with the knowledge of how analyses are performed , selected results and their annotations can be stored in the same system . they can be browsed by the members of a group , thus enhancing the analysis experience both for experts and newcomers . when desirable , information from different phases of an analysis can easily be shared with other groups or peers . we stressed already the value of complete logs . in the heat of an active analysis session , when there is no time or need to be pedantic , users may see merit in sometimes storing only partial logs , a classical example being a program with hidden dependencies , e.g. the calling of a program or reading of a file within a program , not exposed externally . in this case , the data product is not reproducible , but at least the log will point out what is missing .
or the users may even store a non - functional sequence of actions in the debugging phase for additional work later , even without producing a virtual data product. our system should be able to support partial logging , provided that the users are aware of the limitations and risks of this approach .an important point is how groups will structure their analyses .each virtual data product needs an unique identifier , which may be provided by the users or appended automatically by the system with e.g. project i d , user i d and date to render it unique . for smaller tasks , all identifiers and logscan be located in a single place , like a big barrel in a cave .then the group will benefit from adopting a policy for meaningful selection of identifiers , making subsequent browsing and finding of information easy . for larger projectsthe virtual data space can be structured in chunks corresponding to subtasks , like many barrels in a large cave . then at the beginning of a sessionthe user will select the barrel to be opened for that session .when needed , information from related ( linked ) barrels can be retrieved .in principal there are no restrictions on how deep the hierarchy can be , only the practical needs will determine it .we base our first functional system on popular and well established data analysis frameworks and programming tools , making them virtual data enabled .in the course of our project we will leverage best - of - breed existing technologies ( e.g. databases , code management systems , web services ) , as well as the developments in forward - looking grid enabled projects , e.g. the virtual data system chimera , the clarens server for secure remote dataset access , the condor , pegasus and sphinx schedulers for executing tasks in a grid environment .further down the road we envisage building distributed systems capable of analyzing the datasets used in the cms collaboration at all stages of data analysis , starting from monte carlo generation and simulation of events through reconstruction and selection all the way to producing results for publication .we plan to use the grid test bed of the grid physics network ( griphyn ) project for grid enabling the collaborative services .the griphyn project is developing grid technologies for scientific and engineering projects that will collect and analyze distributed , petabyte - scale datasets .griphyn research will enable the development of petascale virtual data grids ( pvdgs ) through its virtual data toolkit ( vdt ) .the caves system can be used as a building block for a collaborative analysis environment , providing `` virtual data logbook '' capabilities and the ability to explore the metadata associated with different data products .our first functioning system extends the very popular object - oriented data analysis framework root , widely used in high energy physics and other fields , making it virtual data enabled .the root framework provides a rich set of data analysis tools and excellent graphical capabilities , able to produce publication - ready pictures .it is easy to execute user code , written in c++ , and to extend the framework in a plug - in style .new systems can be developed by subclassing the existing root classes . 
and the cint interpreter runs the user code `` on - the - fly '' , facilitating fast development and prototyping . all this is very helpful in the early phases of a new project . in addition , root has a large and lively user base , so we plan to release early and often and to have a development driven largely by user feedback . last but not least , root is easy to install and very portable . versions for many flavors of linux and unix and for windows are available . we leverage a well established source code management system - the concurrent versions system cvs . it is well suited to provide version control for rapid development by a large team and to store , by the mechanism of tagging releases , many versions so that they can be extracted in exactly the same form even if modified , added or deleted since that time . the cvs tags assume the role of unique identifiers for virtual data products . cvs can keep track of the contributions of different users . the default unreserved checkout model , in which concurrent edits are merged rather than blocked by locks , makes it possible for two or more people to modify a file at the same time , important for a team of people working on large projects . the system has useful self - documenting capabilities . besides the traditional command line interface , several products provide web frontends which can be used when implementing web services . all these features of cvs make it a good match for our system . nowadays cvs is already installed by default on most unix and linux systems and a windows port is available . in this way , our system can be used both from unix and windows clients , making it easy for users at all levels to reproduce results . a key aspect of the project is the distributed nature of the input data , the analysis process and the user base . this has to be addressed from the earliest stages . our system should be fully functional both in local and remote modes , provided that the necessary repositories are operational and the datasets available . this allows the users to work on their laptops ( maybe handhelds tomorrow ) even without a network connection , or just to store intermediate steps in the course of an active analysis session for their private consumption , only publishing a sufficiently polished result . this design has the additional benefit of utilizing efficiently the local cpu and storage resources of the users , reducing the load on the distributed services ( e.g. grid ) system . the users will have the ability to replicate , move , archive and delete data provenance logs . gaining experience in running the system will help to strike the right balance between local and remote usage . more details about the distribution of services are given in the next section . the architecture of caves builds upon the concept of sandbox programming . by sandbox programming we mean that users work on a per - session basis , creating a new sandbox for a given session . all the changes , modifications and work the user does in a session between checkpoints are logged into a temporary logfile , which can be checked into the cvs repository with a unique tag . the system checks if the user executed external programs ( in the root case these are typically c++ programs ) and logs them automatically with the same tag . here an interesting point arises : a possible scenario is that a user runs the same program many times , just varying the inputs .
in this case cvswill do the right thing : store the program only _ once _ , avoiding duplication of code , and tagging it many times with different tags , reflecting the fact that the program was executed several times to produce distinct data products . or the user can choose during the same session to browse through the tags of other users to see what work was done , and select the log / session of interest by extracting the peers log with the tag used to log the corresponding session activities . heretwo modes of operation are possible : the user may want to reproduce a result by extracting and executing the commands and programs associated with a selected tag , or just extract the history of a given data product in order to inspect it , possibly modify the code or the inputs and produce new results .we also have the concept that users can log annotations or results in the repository along with the data provenance , storing useful metadata about a data product for future use ( see the discussion in the data equivalence section ) .it is possible to record the metadata in relational databases too , e.g. in popular open source products like mysql , so that another user first queries the database to retrieve the annotations or condensed logs of what other users have done already .this approach will ensure scalability for large groups of researchers accumulating large repositories , and will reduce the load on the cvs servers , improving the latency of the overall system . in this casethe searching of a database is expected to be faster than the direct search for a given tag among a large number of stored tags .this will be investigated by building functional systems and monitoring their performance .the additional burden of synchronizing the information between the databases and the repositories is worthwhile only if we can improve the overall performance and scalability of the system .a further enhancement can come from retrieving first the metadata , and only if the user is interested , the complete log about a particular data product .the architecture is shown in graphical form in figure [ cavesarch ] .let us discuss now some possible scenarios , which can take place in our architecture .* case1 : simple * + user 1 : does some analysis and produces a result with tag _ * projectx - stepy - user1*_. + user 2 : browses all current tags in the repository and fetches the session stored with tag _ * projectx - stepy - user1*_. + * case2 : complex * + user 1 : does some analysis and produces a result with tag _ * projectx - stepy - user1*_. + user 2 : browses all current tags in the repository and fetches the session stored with tag _ * projectx - stepy - user1*_. + user 2 : does some modifications in the code files , which were obtained from the session of user1 , runs again and stores the changes along with the logfile with a new tag . + user 1 : browses the repository and discovers that the previous session was used and contains modified or new code files , so decides to extract that session using the new tag and possibly reuse it to produce the next step and so on .+ this scenario can be extended to include an arbitrary number of steps and users in a working group or groups in a collaboration .based on our work so far , the following set of commands emerges as useful : 1 . _session commands : _ * * open * : authentication and authorization , connection , selection of cvs services , local or remote mode , the barrel to be opened etc . ** close * : save opened sessions , clean - up etc .2 . 
_ during analysis : _ * * help * : get help for a command or list of commands * * browse * : browse all tags in ( a part of ) a repository , subsets of tags beginning or containing a string etc ; possibly browse the metadata about a specific virtual data product e.g. by clicking on a selected tag from a list displayed in a graphical user interface * * startlog * : define the starting checkpoint for a log ( part of a session between user - defined points ) , which will be closed by a * log * command * * log * : log ( part of ) a session between user - defined checkpoints together with all programs executed in the session ; this may be a complete or optionally a partial log with user - defined level of detail * * annotate * : store user - supplied notes ( metadata ) about the work being done , preferably in a concise and meaningful manner ; optionally , selected results can be stored along with the annotations e.g. a summary plot , subject of course to space considerations ; this can be a separate command or part of the * log * command e.g. the user may select to be prompted to provide annotations when logging a tag * * inspect * : get a condense ( annotations plus user commands , in a sense something like header files ) , or the complete log for a tag including the programs executed , but do not reproduce the data product ; useful for reusing analysis work * * extract * : in addition to * inspect * , reproduce the virtual data product 3 . _administrative tasks : _ * * copy * : clone a log to a new repository * * move * : as you expect * * delete * : remove a log ; cvs has the nice feature of storing such files in the attic , so it is similar to moving a file in the trash can without emptying it * * archive * : store in an archive ( e.g. a mass storage system ) * * retrieve * : retrieve from an archive ( for whole repositories normal cvs techniques can be used ) .our first release is based on the minimal scope of commands providing interesting functionality .future developments will be guided by the value users put on different options . as we want a fast release cycle this limits the number of features introduced and tested in any new version .the command set is extensible and new commands may be introduced as needed .in this section we sketch the process of building the first caves release as prototype of a real analysis system , and examine some of the issues that arise .the first prototype has been demonstrated at the supercomputing conference in phoenix , arizona , in november 2003 , and the first release was made public on december 12 , 2003 . more details can be found in , and an in - depth technical description about the virtual data enabled root client and the remote services is in preparation .we limit the scope to the most basic commands described in the previous section : * * open * : sets the cvs service ( default or user choice ) * * help * : short help about the commands below * * browse * : browse tags in the repository * * log * : log part of a session between user - defined checkpoints * * extract * : reproduce a virtual data product .these commands are implemented by subclassing basic root classes .commands not recognized by our system are delegated to root for execution or catching exceptions .the root framework provides the ability to access both local and remote files and datasets .one way to realize the second option is to store datasets on apache servers which are root - enabled with a plug - in provided by the root team . 
in this waywe implement a remote data service from web servers .the cvs system also is able to access both local and remote repositories .one way to realize the second option is to use the cvs pserver .contrary to some opinions it can be configured in quite a secure and efficient way as follows : the remote users need just cvs accounts with password authentication , they never get unix accounts on the server .a dedicated cvs user ( or several for different groups ) acts on their behalf on the server .the mapping is done by a special cvs administrative file .similar design was adopted by the globus toolkit with the grid mapfiles to control user access to remote sites , the only difference being the use of certificates in place of passwords , thus enhancing the security and providing a temporarily limited single sign - on to a grid .the virtual organization tools developed by globus can be used also for cvs services .in addition , we implement access control lists per cvs user for reading of the repository or writing to specific directories only for authorized users .this makes the server secure : even compromising the password a normal user can not modify administrative files and thus can not run shell commands on the server .only the administrator needs an unix server account .adding and managing cvs users is undoubtedly much simpler , safer and more scalable compared to dealing with unix accounts , not to talk about the dreaded group variety .a single server can handle multiple repositories , making it easy to `` fine structure '' projects . to test the functionality after each modification we have a test suite: we use events generated with pythia ( or the cms pythia implementation in cmkin ) , and analyze , histogram and visualize them with code ( for results obtained with this code see e.g. ) built on top of the object - oriented data analysis framework root . to conclude we give a couple of snapshots of the first caves system in action : .... start a virtual data enabled root client : rltest rltest * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * welcome to caves * * collaborative analysis versioning environment system * * * * dimitri bourilkov & mandar kulkarni * * university of florida * * gainesville , usa * * * * you are welcome to visit our website * * cern.ch/bourilkov/caves.html * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * please set the cvs pserver or hit enter for default caves : pserver for this session : : pserver:test.phys.ufl.edu:/home/caves * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * to get started : * * just type help at the command prompt * * * * commands beginning with ' . 
'are delegated to root * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * caves : help list of commands and how to use them : = = = help : to get this help help = = = browse : to list all existing tags or a subset beginning with string = = = > de facto the content of the virtual data catalog is displayed browse browse < prefix - string - of - tag > = = = log : to store all command line activities of the user labeling them with a tag = = = the actions after the last log command or = = = from the beginning of the session are stored = = = = = = < tag > must be cvs compliant i.e. start with uppercase or lowercase letter = = = and contain uppercase and lowercase letters , digits , ' - ' and ' _ ' = = = hint : this a powerful tool to structure your project = = = = = = > in effect this logs how a chunk of virtual data was produced = = = > and can be ( re)produced later ; the macro files executed by the user = = = > are stored in their entirety along with the command line activities = = = > creating a complete log log < tag > = = = extract : to produce a chunk of virtual data identified by a tag = = = > the necessary macro files are downloaded automatically to the client = = = > and can be reused and modified for new analyses extract < tag > caves : browse higgs - ww - plotpxpypz-500 ( revision : 1.5 ) higgs - ww - plotpxpypz-100 ( revision : 1.4 ) caves : extract higgs - ww - plotpxpypz-500 * * * * * * * * * * * * * * * * * * * * storing data for usage .... * * * * * * * * * * * * * * * * * * * * * root command is : .x macro is : dbpit1web.c macro is : dbpit1web.c u data / dbpit1web.c you have [ 0 ] altered files in this repository .are you sure you want to release ( and delete ) directory ` data ' : y argument is 500 argument is input " http://ufgrid02.phys.ufl.edu/~bourilkov/higgs.root " argument is output " higgs - ww - plotpxpypz-500 " command is : .x dbpit1web.c(500,"http://ufgrid02.phys.ufl.edu/~bourilkov / higgs.root " , " higgs - ww - plotpxpypz-500 " ) tfile * * higgs-ww-plotpxpypz-500.root tfile * higgs-ww-plotpxpypz-500.root key : tcanvas canv2;1 root pythia plotter d.bourilkov university of florida .x dbpit1web.c(500,"http://ufgrid02.phys.ufl.edu/~bourilkov / higgs.root " , " higgs - ww - plotpxpypz-500 " ) * * * * * * * * * * * * * * * * * * * * * * * * * * end * * * * * * * * * * * * * * * * * * * * * * * * * * * * you have [ 0 ] altered files in this repository .are you sure you want to release ( and delete ) directory ` v01 ' : y caves : .q .q .... 
the running of this example produces the following plot from five hundred simulated input events , as illustrated in figure [ higgsplot ] . it is worth mentioning that the plot materializes on the client machine out of `` thin air '' . the user can first download the caves code from our remote repository and needs just root and cvs to build the client in no time . then the remote logs are browsed , one is selected for extraction , the corresponding commands and programs are downloaded , built on the fly and executed on the client machine , the input data are accessed from our remote web data server , `` et voilà '' , the plot pops up on the client s machine . an example of our event display , built also on top of root , is shown in figure [ eventdisplay ] . we envisage extending the service oriented architecture by encompassing grid and web services as they mature and provide the performance needed to meet mission - critical requirements . our system will benefit from developments like the globus authentication system , enhancing the security and providing a temporarily limited single sign - on to grid services , and the griphyn virtual data toolkit for job executions on grids , possibly augmented by grid schedulers like sphinx or pegasus . other promising developments are remote data services , e.g. clarens , which can possibly be used also with our set of cvs services to provide a globus security infrastructure , distributed databases etc . another closely watched development is the griphyn virtual data system chimera , which evolves a virtual data language . our `` virtual data logbooks '' in the first implementation are formatted as standard ascii files ; in future versions we might also use a more structured format , e.g. xml , which is well suited for web services . chimera also converts the virtual data transformations and derivations to xml and further to directed acyclic graphs . it is a challenging research question if all analysis activities can be expressed easily in the present chimera language . with the developments of both projects it may be possible to automatically generate virtual data language derivations from our logs of interactive root sessions for execution on a grid or storage in a chimera virtual data catalog . another promising path is integration with the root / proof system for parallel analysis of large data sets . caves can be used as a building block for a collaborative analysis environment in a web and grid services oriented architecture , important at a time when web and grid services are gaining in prominence . we are monitoring closely the evolving architecture and use cases of projects like caigee , hepcal , arda and collaborative workflows , and are starting useful collaborations .
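as an aside , the remote data access used in the example above ( a root file served over http by an apache web server ) reduces to a single call in root . the sketch below uses the pyroot bindings and is purely illustrative : the url is a placeholder rather than a live server , and a local root installation with pyroot is assumed .
....
# minimal sketch: open a remote root file over http and draw a stored object.
import ROOT

url = "http://example.org/data/higgs.root"   # placeholder, not a live server
f = ROOT.TFile.Open(url)                     # http access is handled by root itself
if f and not f.IsZombie():
    canv = f.Get("canv2")                    # object name follows the example above
    if canv:
        canv.Draw()
else:
    print("could not open", url)
....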
besides the root - based client a web browser executing commands on a remote root or clarens server is a possible development for `` ultralight '' clients .in this white paper we have developed the main ideas driving the caves project for exploring virtual data concepts for data analysis .the decomposition of typical analysis tasks shows that the virtual data approach bears great promise for qualitatively enhancing the collaborative work of research groups and the accumulation and sharing of knowledge in todays complex and large scale scientific environments .the confidence in results and their discovery and reuse grows with the ability to automatically log and reproduce them on demand .we have built a first functional system providing automatic data provenance in a typical analysis session .the system has been demonstrated successfully at supercomputing 2003 and a first public release is available for interested users , which are encouraged to visit our web pages , currently located at the following url : + http://cern.ch/bourilkov/caves.html in the process of shaping and launching the project the author enjoyed discussions with paul avery , rene brun , federico carminati , harwey newman , lothar bauerdick , david stickland , stephan wynhoff , torre wenaus , rob gardner , fons rademakers , richard cavanaugh , john yelton , darin acosta , mike wilde , jens vckler , conrad steenberg , julian bunn , predrag buncic and martin ballantijn .the coffee pauses with jorge rodriguez were always stimulating . the first prototype and first releaseare coded by mandar kulkarni and myself , the members of the caves team at present .this work is supported in part by the united states national science foundation under grants nsf itr-0086044 ( griphyn ) and nsf phy-0122557 ( ivdgl ) .`` the grid : blueprint for a new computing infrastructure '' , edited by ian foster and carl kesselman , july 1998 , isbn 0 - 97028 - 467 - 5 .i. foster , c. kesselman , s. tuecke , `` the anatomy of the grid : enabling scalable virtual organizations '' , international j. supercomputer applications , 15(3 ) , 2001 . i. foster , c. kesselman , j. nick , s. tuecke , `` the physiology of the grid : an open grid services architecture for distributed systems integration '' , open grid service infrastructure wg , global grid forum , june 22 , 2002 .world wide web consortium ( w3c ) , http://www.w3.org/2002/ws/ .see e.g. beth gold - bernstein , `` making soa a reality ?an appeal to software vendors and developers '' , november 2003 , http://www.ebizq.net/topics/soa/features/3142.html?related .i. foster _ et al ._ , `` chimera : a virtual data system for representing , querying , and automating data derivation '' , presented at the 14th international conference on scientific and statistical database management ( ssdbm 2002 ) , edinburgh , 2002 ; griphyn technical report 2002 - 7 , 2002 .i. foster _ et al ._ , `` the virtual data grid : a new paradigm and architecture for data intensive collaboration '' , proceedings of cidr 2003 - conference on innovative data research ; griphyn technical report 2002 - 18 , 2002 .p. avery , i. foster , `` the griphyn project : towards petascale virtual - data grids '' , griphyn technical report 2001 - 14 , 2001 ; http://www.griphyn.org/index.php .the particle physics data grid , http://www.ppdg.net/ .the european union datagrid project , http://eu-datagrid.web.cern.ch/eu-datagrid/ .p. avery , i. foster , r. gardner , h. newman , a. 
szalay , `` an international virtual - data grid laboratory for data intensive science '' , griphyn technical report 2001 - 2 , 2001 ; http://www.ivdgl.org/ .the lhc computing grid project , http://lcg.web.cern.ch/lcg/ .the enabling grids for e - science in europe project , + http://egee-intranet.web.cern.ch/egee-intranet/gateway.html .a. arbree _ et al ._ , `` virtual data in cms production '' , computing in high - energy and nuclear physics ( chep 03 ) , la jolla , california , 24 - 28 mar 2003 ; published in econf c0303241:tuat011 , 2003 ; e - print archive : cs.dc/0306009 .a. arbree , p. avery , d. bourilkov _ et al ._ , `` virtual data in cms analysis '' , computing in high - energy and nuclear physics ( chep 03 ) , la jolla , california , 24 - 28 mar 2003 ; fermilab - conf-03 - 275 , griphyn - report-2003 - 16 , cms - cr-2003 - 015 ; published in econf c0303241:tuat010 , 2003 ; [ arxiv : physics/0306008 ] . k. beck , `` extreme programming explained - embrace change '' , addison - wesley , 2000 , isbn 0201616416 .r. brun , f. carminati _ et al ._ , `` software development in hep '' , computing in high - energy and nuclear physics ( chep 03 ) , la jolla , california , 24 - 28 mar 2003 ; http://www.slac.stanford.edu/econf/c0303241/proc/pres/10006.ppt .the cms collaboration , `` the compact muon solenoid - technical proposal '' , cern / lhcc 94 - 38 , cern , geneva , 1994 . v. innocente , l. silvestris , d. stickland , `` cms software architecture software framework , services and persistency in high level trigger , reconstruction and analysis '' cms note-2000/047 , cern , 2000 . http://cms-project-ccs.web.cern.ch/cms-project-ccs/ . c. steenberg_ et al . _ , `` the clarens web services architecture '' , computing in high - energy and nuclear physics ( chep 03 ) , la jolla , california , 24 - 28 mar 2003 ; published in econf c0303241:mont008 , 2003 ; e - print archive : cs.dc/0306002 ; http://clarens.sourceforge.net/. the condor project , http://www.cs.wisc.edu/condor .e. deelman , j. blythe , y. gil , c. kesselman , `` pegasus : planning for execution in grids '' , griphyn technical report 2002 - 20 , 2003 . j. in _ et al ._ , `` policy based scheduling for simple quality of service in grid computing '' , griphyn technical report 2003 - 32 , 2003 .the griphyn virtual data toolkit , http://www.lsc-group.phys.uwm.edu/vdt/ .rene brun and fons rademakers , `` root - an object oriented data analysis framework '' , proceedings aihenp96 workshop , lausanne , sep .1996 , nucl .inst . & meth .a 389 ( 1997 ) 81 - 86 ; see also http://root.cern.ch/ .masaharu goto , `` c++ interpreter - cint '' cq publishing , isbn4 - 789 - 3085 - 3 ( japanese ) .the concurrent versions system cvs , http://www.cvshome.org/ .the mysql database , http://www.mysql.com/ .d. bourilkov and m. kulkarni , `` the caves project - collaborative analysis versioning environment system '' , presented at ppdg collaboration meeting , berkeley lab , december 15 - 16 , 2003 ; http://www.ppdg.net/archives/talks/2003/bin00010.bin .d. bourilkov and m. kulkarni , in preparation , to be presented at the root 2004 users workshop .apache web server , apache software foundation , http://www.apache.org/ .the globus alliance , http://www.globus.org/ .t. sjstrand _ et al ._ , `` high - energy - physics event generation with pythia 6.1 '' comp .* 135 * ( 2001 ) 238 .pythia and cmkin viewers and plotters developed by the author , + http://cern.ch/bourilkov/viewers.html .d. 
bourilkov , `` sensitivity to contact interactions and extra dimensions in di - lepton and di - photon channels at future colliders '' , arxiv : hep - ph/0305125 .d. bourilkov , `` study of parton density function uncertainties with lhapdf and pythia at lhc '' , arxiv : hep - ph/0305126 . m. ballintijn , r. brun , f. rademakers and g. roland , `` the proof distributed parallel analysis framework based on root '' , econf * c0303241 * , tuct004 ( 2003 ) , econf * c0303241 * , tult003 ( 2003 ) , [ arxiv : physics/0306110 ] .the caigee project and grid enabled analysis ( gae ) , + http://pcbunn.cithep.caltech.edu/gae/caigee/default.htm , + http://pcbunn.cithep.caltech.edu/gae/gae.htm .hep common grid applications layer use cases - hepcal ii document ( lcg - sc2 - 2003 - 032 ) , http://lcg.web.cern.ch/lcg/sc2/gag/hepcal-ii.doc .rtag11 : an architectural roadmap towards distributed analysis ( arda ) , + http://www.uscms.org/s&c/lcg/arda/ .+ show_docs.php?series = ivdgl&category = talks&id=691
|
the collaborative analysis versioning environment system ( caves ) project concentrates on the interactions between users performing data and/or computing intensive analyses on large data sets , as encountered in many contemporary scientific disciplines . in modern science increasingly larger groups of researchers collaborate on a given topic over extended periods of time . the logging and sharing of knowledge about how analyses are performed or how results are obtained is important throughout the lifetime of a project . here is where virtual data concepts play a major role . the ability to seamlessly log , exchange and reproduce results and the methods , algorithms and computer programs used in obtaining them enhances in a qualitative way the level of collaboration in a group or between groups in larger organizations . it makes it easier for newcomers to start being productive almost from day one of their involvement or for referees to audit a result and gain easy access to all the relevant details . also when scientists move on to new endeavors they can leave their expertise in a form easily utilizable by their colleagues . the same is true for archiving the knowledge accumulated in a project for reuse in future undertakings . the caves project takes a pragmatic approach in assessing the needs of a community of scientists by building series of prototypes with increasing sophistication . in extending the functionality of existing data analysis packages with virtual data capabilities these prototypes provide an easy and habitual entry point for researchers to explore virtual data concepts in real life applications and to provide valuable feedback for refining the system design . the architecture is modular based on web , grid and other services which can be plugged in as desired . as a proof of principle we build a first system by extending the very popular data analysis framework root , widely used in high energy physics and other fields , making it virtual data enabled . physics/0401007 + january 2004
|
the present time has been referred to as the `` golden age '' of precision cosmology .strong gravitational lensing data is a rich source of information about the structure and dynamics of the universe , and these data are contributing significantly to this notion of precision cosmology .strong gravitational lens studies are highly dependent on the software used to create the models and analyze the components such as lens mass , einstein radius , time delays etc .a comprehensive review of available software has been conducted by .while many such software packages are available , most studies utilize only a single software package for analysis .furthermore , most authors of strong gravitational lensing studies use their own software only .more recently , the status of comparison studies of strong gravitational lens models has been reviewed by .this study demonstrated that changes in redshift affect time delay and mass calculations in a model dependent fashion , with variable results with small changes in redshift for the same models .an important resource for the conduct of comparison studies is the orphan lens project , a compendium of information about strong lens systems that as of may 2014 contained data for 656 lens systems .there are a number of barriers to the conduct of lens model comparisons .ideally , a comparison study of a previously studied lens would include the original model for comparison , but this is sometimes impossible because the lens model code is not made publicly available .another barrier to performing comparative studies is the complexity of the lens model files , since there are major differences among the commonly used model software available . in order to facilitate this step of the processthe hydralens program was developed to generate model files for multiple strong gravitational lens model packages .to date , the largest comparison study of strong gravitational lens models was an analysis of macsj1206.2 - 0847 as part of the clash survey conducted by .this study included four different strong gravitational lens models including lenstool ( ) , pixelens ( ) , lensperfect ( ) and sawlens ( ) .the authors conducted five lens model analyses using the same data , and is thus categorized as a direct and semi - independent study .this type of study has great advantages in that all data and all models are available for direct comparison in a single study . the hubble space telescope ( hst ) frontier fields project is reporting preliminary results . 
this important deep field observing program combines the power of the hst with gravitational lenses .lensing analysis in the frontier fields project includes models from a number of software codes including zb , grale , lenstool , and two other non - ltm lens model software codes which facilitate direct comparison of results from a number of lens models rather than depending on a single model from which to draw conclusions .the hubble frontier fields analysis uses models that are independently developed and optimized by each group of investigators for each code used .the power of this approach has been reported , with more results surely to follow .the goal of this study is to directly compare the results of calculations among four model software codes in the evaluation of four lens systems .the present study has several unique features .this study is the first to use computer - aided lens model design , using hydralens software to facilitate lens model generation.there are no previous single studies which compare the results for multiple lens systems using multiple lens model software .this study was designed to further evaluate comparative lens model analyses and includes both direct and indirect semi - independent studies of four lens systems using four different software models .other studies have included indirect comparisons to previous lens model analyses , or direct comparisons of several lens models of a single lens system .this is the first study to also include combined indirect and direct analyses where previously published lens models were used for direct comparisons .the nomenclature of lens model comparison studies , lens systems studied , previous lens model studies of these systems and the lens model software used are described in section [ methods ] .the results of the lens model studies for each of the four systems studied are presented in section [ results ] and a review of existing comparison studies along with the results of this study are presented in [ discussion ] .conclusions and suggestions for future lens model studies are in section [ conclusions ] .the use of standardized nomenclature to describe lensing studies is useful to evaluate multiple studies . in this paperwe follow the nomenclature previously described .lens model comparison studies are referred to as direct when the comparison is made based on calculations using two software models in the same paper , and indirect when comparison is made to previously published data . 
in this study , we also use the actual models from published studies ( kindly supplied by the investigators ) , so these are considered combined indirect / direct comparisons . lens model comparisons using the same data are referred to as semi - independent , and when different data are used , the comparison is independent . lastly , software is classified as light traces mass ( ltm , formerly known as parametric ) , or non light traces mass ( non - ltm , formerly known as non - parametric ) . each lens model software package uses a different input data format to describe the lens model . all of them use simple text files as input , but the format of the text files , available functionality and command structures are dependent on the particular software . some lens model software uses multiple accessory files to provide other data . each of them has a unique list of commands , with great variability . hydralens was written to simplify the process of creating lens model input files to facilitate direct comparison studies , and to assist those starting in the field . the four lens systems were evaluated using four lens model codes , necessitating 16 different models . the lenstool model for cosmos j095930 + 023427 was kindly provided by cao . the glafic model for sdss j1320 + 1644 was kindly provided by rusu . the remaining 14 models were written for this study using hydralens . in the case of cosmos j095930 + 023427 and sdss j1320 + 1644 , the two lens models we received from other investigators were used as input to hydralens , which generated the models for the other three software packages used in this study . in the case of sdss j1430 + 4105 and j1000 + 0221 , models were first written for pixelens ( http://ascl.net/1102.007 ) . hydralens was then used to translate the pixelens model into the format for the other strong gravitational lens model software , including lenstool ( http://ascl.net/1102.004 ) , lensmodel ( http://ascl.net/1102.003 ) and glafic ( http://ascl.net/1010.012 ) . the translated files output from hydralens were edited to assure that parameters were fixed or free as appropriate , and that optimization parameters were correctly set . the lens model files were then used as input to the respective lens model software . the parameters used for the four lens systems were obtained from previous studies . the geometry for each system was identical in all four models evaluated , and therefore all studies conducted are classified as semi - independent lens analyses . three of the lens systems studied are listed in the orphan lens database , including cosmos j095930 + 023427 , sdss j1320 + 1644 and sdss j1430 + 4105 . the lens cosmos j095930 was first described by jackson . cosmos j095930 is an early - type galaxy with four bright images of a distant background source . the lens is located at a redshift z = 0.892 , and the background source redshift is estimated at z = 2.00 . while the exact source redshift is unknown , the value used by previous investigators is z = 2.00 .
while the exact is unknown , the value used by previous investigators is 2.00 .models of this system were described by faure using lenstool .this model used a singular isothermal ellipsoid ( sie ) with external shear ( ) and found an einstein radius of 0.79 " and =255 km s .more recently , an extensive multi - wavelength study of this system was reported by cao and colleagues , also using lenstool .this analysis used four different models , an sie with two singular isothermal spheres ( sis ) as well as a pseudo - isothermal elliptical mass distribution ( piemd ) model with two sis , both with and without external shear .we selected the sie+sis+sis model used by cao as the basis of the present indirect comparison with their work as well as the direct comparisons with the four lens models studied here .the lenstool model developed by cao and coworkers was kindly supplied for this study and used as a baseline model which was then translated into input files for the other software by hydralens .the lenstool model used by cao included priors for the values of ellipticity ( ] for all three potentials ) .these same priors were used in the models of cosmos j095930 for lensmodel and glafic in this study .the lenstool model developed by cao uses optimization in the source plane .the image positions used in all models of this system were taken from table 1 in cao .the lenstool model developed by cao has five free parameters including the velocity dispersion of the three galaxies , and orientation and ellipticity of the sie galaxy .the positions of the second and third galaxies ( sis ) in the model were fixed .the models used here were similarly parameterized .the present study is an indirect comparison with the analysis of cao as well as a direct comparison of the four lens models studied . 
since we were provided the model used by cao , it is a combined indirect / direct comparative analysis of cosmos j095930 . all four models of this system used the same cosmology as was used by cao . sdss j1320 + 1644 was initially described in earlier work as a large separation lensed quasar candidate identified in the sdss , with a large image separation at a source redshift z = 1.487 . both an elliptical and a disk - like galaxy were identified almost symmetrically between the quasars at redshift z = 0.899 . a detailed lens model analysis of this system was conducted by rusu and collaborators , using glafic software . based on their analysis , they conclude that sdss j1320 + 1644 is a probable gravitationally lensed quasar , and if it is , this would be the largest separation two - imaged lensed quasar known . they show that the gravitational lens hypothesis implies that the galaxies are not isolated , but are embedded in a dark matter halo , using an nfw model and an sis model . the sis model has a velocity dispersion of 645 km/s . we use the sis - free model as the basis of the comparison study , as defined by rusu , which models the three galaxies ( referred to as g1 , g2 and g4 ) as sis potentials and leaves the position of the dark matter halo ( also modeled as an sis ) as a free parameter . the model used by rusu includes priors for the velocity dispersion of the dark matter halo . the same priors were used in the models of sdss j1320 + 1644 in this study . the analysis by rusu uses optimization by glafic in the image plane . the image positions of the two images were used directly as described by rusu . rusu considers models with 0 degrees of freedom , including 14 nominal constraints and the same number of nominal parameters , which therefore fit the constraints exactly . the ellipticity and position angle are used when the position of the dark matter halo is fixed . the models developed for this study were similarly parameterized , using the position of the dark matter halo as a free parameter ( `` sis - free '' ) and fixed to introduce ellipticity and position angle . a number of glafic models developed by rusu and coworkers were kindly supplied for this comparative analysis and used as a baseline model , which was then translated by hydralens into models for the other software . the present study includes an indirect comparison with the analysis of rusu as well as a direct comparison of the four software lens models studied . since we were provided a model used by rusu , this is a combined indirect / direct comparative analysis of sdss j1320 + 1644 . all four models of this system used the same cosmology as was used by rusu . sdss j1430 + 4105 was first described by bolton as part of the slacs survey . this system is at a lens redshift z = 0.285 with a source redshift z = 0.575 , and has a complex morphology with several subcomponents as described in previous work . bolton reported an effective radius of 2.55 " and a velocity dispersion of 322 km/s . a very detailed lens model analysis of this system was then conducted by eichner and collaborators . this analysis was a direct , semi - independent comparative analysis using both gravlens ( ltm ) and lensview ( non - ltm ) software . the authors studied five different models using gravlens / lensmodel , including an sie and a power law ( pl ) model as well as three two - component de vaucouleurs plus dark matter models . similar results were found with the two different lens model analyses . they also studied four models using lensview , including sie and pl models with and without external shear . we use the gravlens / lensmodel sie model as the basis of the indirect comparison with their work . the plane of optimization used in the eichner model is not explicitly stated in the report . the models developed in the previous study were not available , and thus all models used were written for this study . the results referred to as model i by eichner did not use any priors in the lens model for sdss j1430 + 4105 , although priors were used in the development of the model , with results within the error limits reported . similarly , priors were not used in the models in this study . the free parameters used by eichner et al included the lensing strength b , the ellipticity and the orientation of the single - component sie lens . these same free parameters were used in the models developed for this study . the positions of the multiple images of this system were taken from table 2 in that study . this is both an indirect comparison ( compared with the sie model in the published study of eichner et al ) and a direct comparison of the four lens models studied here . all four models of this system used the same cosmology as was used by eichner et al . using imaging data from candels and the large binocular telescope , van der wel and colleagues recently reported the quadruple galaxy - galaxy lens j100018.47 + 022138.74 ( j1000 + 0221 ) , which is the first strong galaxy lens found at such a high lens redshift . this interesting system has a lens redshift z = 1.53 and a source redshift z = 3.417 . this system was analyzed in the same manner as in earlier work , and an einstein radius and the mass enclosed within it were reported , with an upper limit on the dark matter fraction of 60% . the highly magnified ( by a factor of roughly 40 ) source galaxy has a very small stellar mass . the lens is a flattened , quiescent galaxy whose stellar mass was also reported . there have been no other lens model analyses of this system using software models , and therefore all models were developed for this study using data from the discovery paper ; this is thus a direct comparison of the four lens software models studied . there were no priors used in the lens models of j1000 + 0221 in this study . the free parameter in the sis models was only the velocity dispersion . in the sie models , free parameters included the velocity dispersion , orientation and ellipticity . the image positions in all models in this study for this system were taken from table 2 in the discovery paper . all four models of this system used the same cosmology .
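since several of the models above are parameterized by a velocity dispersion while results are quoted as an einstein radius and a mass enclosed within it , the following sketch shows the standard singular isothermal sphere conversion for an assumed flat cosmology . it is written in python with astropy purely for illustration ; the cosmological parameters ( h0 = 70 km/s/mpc , omega_m = 0.3 ) and the example numbers are assumptions of the sketch , not necessarily the exact values adopted in the papers discussed .
....
# sketch: einstein radius and projected mass inside it for a singular
# isothermal sphere (sis), given a velocity dispersion and the redshifts.
import numpy as np
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def sis_einstein_radius(sigma, z_l, z_s):
    """theta_E = 4 pi (sigma / c)^2 * D_ls / D_s for an sis lens."""
    d_s = cosmo.angular_diameter_distance(z_s)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)
    theta_e = 4.0 * np.pi * (sigma / c.to(u.km / u.s)) ** 2 * (d_ls / d_s)
    return (theta_e * u.rad).to(u.arcsec)

def sis_mass_inside_einstein_radius(sigma, theta_e, z_l):
    """projected sis mass inside the einstein radius: M = pi sigma^2 R_E / G."""
    r_e = theta_e.to(u.rad).value * cosmo.angular_diameter_distance(z_l)
    return (np.pi * sigma ** 2 * r_e / G).to(u.Msun)

# example: sigma = 255 km/s at z_l = 0.892 with z_s = 2.00, as for cosmos j095930
sigma = 255 * u.km / u.s
theta_e = sis_einstein_radius(sigma, z_l=0.892, z_s=2.00)
print(theta_e, sis_mass_inside_einstein_radius(sigma, theta_e, z_l=0.892))
....
with these assumptions the einstein radius comes out close to the 0.79 arcsec quoted above for the earlier sie model of this system , which is the kind of cross - check the comparisons below rely on .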
all of these lens model software codes were reviewed in the orphan lens project and the descriptions of the software are from the web site as well as from a review of lens model software . error calculations were performed according to the method of rusu et al . the errors quoted for the calculated parameters ( ellipticity , orientation , magnification , time delay , etc . ) reflect calculations within the confidence interval for velocity dispersions . the fit of the models is assessed by optimization and the rms uncertainty . the rms is calculated by $ \mathrm{rms} = \sqrt{ \frac{1}{n_{\mathrm{img}}} \sum_i \left[ ( x_i - x_i^{\mathrm{obs}} )^2 + ( y_i - y_i^{\mathrm{obs}} )^2 \right] } $ , where $ x_i $ and $ y_i $ are the image locations given by the model , $ x_i^{\mathrm{obs}} $ and $ y_i^{\mathrm{obs}} $ are the real image locations , and the sum is over all $ n_{\mathrm{img}} $ images . the results are calculated for the models by lenstool , lensmodel and glafic , and are reported in the data tables . the rms value is reported by lenstool directly , while a manual calculation was necessary for models using lensmodel and glafic . pixelens is a non - ltm strong gravitational lens model software that is available for download as a java program which runs in a browser window . version 2.7 was used in these studies . pixelens is accompanied by a manual and a tutorial . pixelens reconstructs a pixelated mass map for the lens in terms of the arrival time surface and has been used in several studies . pixelens employs a built - in mcmc approach and creates an ensemble of 100 lens models per given image configuration . the pixelated mass map offers the advantage of being linear in the unknowns . since all equations are linear in the unknowns , the best - fitting model and its uncertainties are obtained by averaging over the ensemble . the pixelated mass map differentiates pixelens from the other software used in this study , which fit parametric functional forms . lenstool has been used in many different studies and is available for download . version 6.7.1 was used in these studies . lenstool has features of both ltm and non - ltm modeling , uses a bayesian approach to strong lens modeling , and has been well described in the literature . there are several resources available for writing lens models for lenstool . lenstool can optimize most of the parameters in a model . models produced by hydralens for lenstool were modified slightly to add appropriate optimization parameters and then used with lenstool . lenstool optimization is performed with mcmc . lenstool uses the geometry of the images given and then finds counter - images . the image positions are recomputed and the time delays determined . the gravlens package includes two codes , gravlens and lensmodel , accompanied by a user manual . version 1.99o was used in these studies , under the linux operating system , downloaded from the astrophysics source code library . however , the darwin ( macintosh ) executable file provided for version 1.99o will only run on the now obsolete powerpc architecture .
a newer version to run on the macintosh platform under os / x 10.9 ( gravlens version dated november 2012 ) was kindly provided by professor keeton for these studies . lensmodel is an extension of gravlens and was used for all analyses here . it is fully described in two publications by keeton , and has been used extensively . lensmodel is an ltm lens model software , which optimizes the selected lens parameters and uses a tiling algorithm and a simplex method with a polar grid centered on the main galaxy . the tiles are used to determine the image positions , and the software then uses a recursive sub - gridding algorithm to determine the image positions more accurately . glafic is an ltm lens model software , and includes computation of lensed images for both point and extended sources , handling of multiple sources , a wide variety of lens potentials and a technique for mass modeling with multiple component mass models . version 1.1.5 was used on the os / x platform and version 1.1.6 was used with linux in these studies . each lens is defined by the lens model and seven parameters . a large catalog of lens models is available ( including point mass , hernquist , nfw , einasto , sersic , etc . ) . after defining the parameters and the lens models , the parameters to be varied in the minimizations are specified . following this , the desired commands are issued , such as computing various lensing properties , calculating the einstein radius , or writing lensing properties to a fits file . glafic has been used in a large number of lens model studies , including sdssj1004 , and performs lens model optimization . glafic uses a downhill simplex method of optimization . the image plane is divided using square grids by an adaptive meshing algorithm . the level of adaptive meshing is set as an optional parameter . each of the four lens systems was modeled with all four lens model software codes including pixelens , lenstool , lensmodel , and glafic . best - fit lens model parameters from previous studies are presented along with the results from this study for each system . the results reported for each lens were intended to follow the format of the data for best - fit lens parameters as reported in previous studies , and therefore there are some differences in the data presented for the four lens systems . lenstool and glafic directly calculate the velocity dispersion and then calculate the einstein radius and mass within the einstein radius . lensmodel directly calculates the einstein radius , from which the other values were deduced . pixelens calculates mass at various distances from the lens center . the figures shown are the output from each of the software packages used , and represent the graphical capabilities of that software . best - fit lens model parameters for cosmos j095930 + 023427 are shown in table [ table : j095930 ] . the data reported by are at the upper portion of the table , and show the results of the lenstool model .
the results in this study using the lenstool model are somewhat different because the model in this study used optimization in the image plane , rather than the source plane optimization used by cao .the glafic model was also conducted with optimization in the image plane , while the lensmodel model is conducted with source plane optimization because image plane optimization did not yield a satisfactory model .direct comparisons of the four software models evaluated are shown next .the models used here were based on the sie+sis+sis model used by .the lenstool model includes an sie potential at =0.892 , and two sis potentials at =0.7 , as described by .the pixelens model used image coordinates from , and calculated an enclosed mass inside the einstein radius very close to that calculated by the lenstool model .the lenstool model optimized the ellipticity , position angle and velocity dispersion for the single sie potential , and only the velocity dispersion for the two sis potentials , as done by as free parameters .lensmodel sets all three lens potentials at =0.892 because the software does not permit multiple lens planes .the ellipticities and position angles optimized by each of the three codes are quite different .the einstein radius of the sie potentials are similar while there is some difference in the optimized velocity dispersions calculated by the three codes , particularly in the values calculated by glafic for the second potential . in an effort to understand this ,the velocity dispersions of the first and second potentials were fixed at the values calculated by lenstool at 234 and 412 km s respectively and the velocity dispersion of the third potential allowed to optimize , using glafic .this resulted in a velocity dispersion of 632 km s for the third potential .when the first and third values were fixed at 238 and 603 km s ( as found by cao ) , the second potential was optimized at 57 km s .magnifications and time delays for this model are shown in table [ j095930td ] .both time delays and magnifications calculated by all four models show great variability .the velocity dispersions shown in table [ table : j095930 ] as calculated here are slightly different from those reported by , because of the different optimization technique .the velocity dispersion values shown for the lensmodel and glafic models are somewhat different . 
the lenstool model used by cao defined potentials at =0.892 and 0.7 ,although lenstool allows only a single lens plane .when the results were re - calculated defining all lenses in the same plane ( ) using lenstool , there was no effect on the calculation of the velocity dispersion .the wide variation in time delays calculated for this system are shown in table [ j095930td ] , and are consistent with the wide range in time delays reported in our previous study using different models .there is a wide disparity in time delay calculations seen in all of the systems evaluated in this study .the image plane for a representative model calculated using lenstool is shown in figure 1 .the image positions change from the input positions because of the image tracing algorithm used .this slight difference may account for the differences seen in time delay and magnification .lenstool identifies 16 total images , which are nearly superimposed at the original positions of the four images shown in figure 1 .each of the models uses somewhat different optimization schemes , and the velocity dispersions are a result of optimization , which may explain some of the differences shown in table [ table : j095930 ] .the differences in the results among the three software programs is not surprising , since this model had all three velocity dispersions as free - parameters .lcccccccc[!h ] lenstool & & & 1.7 + sie & [ 0.0 ] & [ 0.0 ] & & 0.28 & -10 & 0.79 & & 238 + sis & [ -10.98 ] & [ 0.474 ] & & & & & & 391 + sis & [ 3.52 ] & [ 13.2 ] & & & & & & 603 + pixelens & & & & & & & 3.51 & + + lenstool & & & 1.2 & + sie & [ 0.0 ] & [ 0.0 ] & 0.06 & & & & & + sis & [ -10.98 ] & [ 0.474 ] & & & & 1.8 & 11.7 & + sis & [ 3.52 ] & [ 13.2 ] & & & & 4.3 & 67.9 & + + lensmodel & & & 2.2 & + sie & [ 0.0 ] & [ 0.0 ] & 0.3 & & & & & + sis & [ -10.98 ] & [ 0.474 ] & & & & 1.6 & 8.81 & + sis & [ 3.52 ] & [ 13.2 ] & & & & 2.3 & 18.1 & + + glafic & & & 0.9 & + sie & [ 0.0 ] & [ 0.0 ] & 0.2 & & & & & + sis & [ -10.98 ] & [ 0.474 ] & & & & 0.00 & * & + sis & [ 3.52 ] & [ 13.2 ] & & & & 4.2 & 57.6 & + + lcccc[h ] pixelens + time delay & 0 & 0.7 & 3.4 & 0.07 + + lenstool + magnification & & & & + time delay & 0 & & & + + lensmodel + magnification & & & & + time delay & 0 & & & + + glafic + magnification & & & & + time delay & 0 & & & + + best - fit lens model parameters for sdss j1320 + 1644 are shown in table [ table : j1320 ] with an indirect / direct comparison to the study of and the four direct comparisons in this study . utilized a glafic model that modeled the potentials of g1 , g2 and g4 which were boosted by an embedding dark matter halo .one of the published models used four sis potentials and fixed the locations of the first three , allowing the position of the fourth ( the dark matter halo ) to optimize ( `` sis free '' ) .furthermore , they concluded that any reasonable mass model reproduced the observed image configuration .the values shown in table [ table : j1320 ] are those as presented in the paper , as the sis free model . in this study ,the values calculated by and shown here were reproduced exactly using their model , and the values are at 1 .the pixelens model has a much lower calculated time delay than the other models , and an enclosed mass within 1 of the value reported by . 
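the sensitivity of the predicted delays to these small shifts in the recovered image positions can be traced to the standard time - delay relation from lensing theory ( quoted here as background , not from any of the software manuals ) , in which the delay between images $ i $ and $ j $ depends on both the image positions and the lens potential $ \psi $ :
$ \Delta t_{ij} = \frac{1+z_l}{c}\,\frac{d_l d_s}{d_{ls}}\left[ \frac{(\boldsymbol{\theta}_i-\boldsymbol{\beta})^2-(\boldsymbol{\theta}_j-\boldsymbol{\beta})^2}{2} - \psi(\boldsymbol{\theta}_i) + \psi(\boldsymbol{\theta}_j) \right] $
any change in the recovered image positions $ \boldsymbol{\theta} $ or in the reconstructed source position $ \boldsymbol{\beta} $ therefore propagates directly into the predicted delay and magnification , even when the einstein radius is essentially unchanged .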
as performed by , the positions of the sources were kept fixed for the first three sis potentials .the velocity dispersion and position of the last potential ( the dark matter halo ) were optimized .the optimized position of the fourth potential calculated in the lenstool model is quite different , and the velocity dispersion is similar to other models .lensmodel uses the einstein radius , rather than velocity dispersion so the einstein radii for the first three sis potentials were fixed , and the fourth was a free parameter .the mass of the fourth potential calculated by lensmodel is nearly identical to the values calculated using glafic by as well as the lenstool and glafic models reported here .the time delays and magnification values show more variability .the lenstool , glafic and lensmodel models conducted in this study use image plane optimization , similar to the glafic analysis conducted by rusu . the calculated models of sdss j1320 + 1644 show similar optimization for the mass of the fourth sis potential , with fairly similar positions calculated by lensmodel and glafic , while the positions calculated by lenstool show greater variability .there is great variability among the calculated time delays and magnifications .the calculations performed in this study using glafic are the same as the glafic sis - free model reported by .table [ table : j1320 ] shows that the mass calculated for the fourth sis potential , which was a free parameter , optimized to the same value for lenstool , lensmodel and glafic .the optimized geometry was slightly different for lenstool compared to the others .the einstein radius calculated by all four models was almost the same for the first sis potential .the fact that the velocity dispersion for the fourth lens potential was optimized to the same value in all of the models may reflect the fact that there was only a single free parameter in each model .this is different from the results above with cosmos j095930 + 023427 , which optimized three lens potentials as free parameters , with varying results among the models tested . the model of sdss j1320+ 1644 was straightforward including four sis potentials which was reproduced in all software models without difficulty .the model used by rusu had 0 degrees of freedom and with a resulting , due in part to the design of the model with 14 nominal constraints and 14 parameters .the similarity of the potentials used to model the system may have contributed to the close results for optimization of the mass . despite this , position , magnification and time delay showed great variability among the four models .the velocity dispersion for only the fourth lens potential was left as a free parameter , with the other three fixed , which is likely a major factor in the close agreement found among the various models in the calculation of the velocity dispersion .the image plane of the glafic model is shown in figure 2 , which is the same as shown in figure 6 of .the image positions in the image plane are the same as the input positions in all models . 
despite this, there is variability in the time delay and magnification calculations .lcccccccc[!h ] glafic & & & & & + sis & [ -4.991 ] & [ 0.117 ] & & & & & & [ 237 ] + sis & [ -2.960 ] & [ 3.843 ] & & & & & & [ 163 ] + sis & [ -9.169 ] & [ 5.173 ] & & & & & & + sis & -4.687 & 1.149 & & & & & & + pixelens & [ 0.0 ] & [ 0.0 ] & & & 3.5 & & 2.9 & + + lenstool & & & 11.5 & & + sis & [ -4.991 ] & [ 0.117 ] & 2.3 & & & [ 0.49 ] & 1.1 & [ 237 ] + sis & [ -2.960 ] & [ 3.843 ] & & & & [ 0.23 ] & 0.25 & [ 163 ] + sis & [ -9.169 ] & [ 5.173 ] & & & & [ 0.12 ] & 0.07 & [ 118 ] + sis & -0.471 & 0.179 & & & & 3.6 & 61 & + + lensmodel & & & 51.1 & & + sis & [ -4.991 ] & [ 0.117 ] & 3.9 & & & [ 0.49 ] & 1.1 & [ 237 ] + sis & [ -2.960 ] & [ 3.843 ] & & & & [ 0.23 ] & 0.25 & [ 163 ] + sis & [ -9.169 ] & [ 5.173 ] & & & & [ 0.12 ] & 0.07 & [ 118 ] + sis & -3.93 & 2.43 & & & & 2.9 & 53 & + + glafic & & & 2.0e-06 & & + sis & [ -4.991 ] & [ 0.117 ] & 0.10 & & & [ 0.5 ] & & [ 237 ] + sis & [ -2.960 ] & [ 3.843 ] & & & & [ 0.23 ] & 0.25 & [ 163 ] + sis & [ -9.169 ] & [ 5.173 ] & & & & [ 0.12 ] & 0.070 & [ 118 ] + sis & -4.687 & 1.149 & & & & & 61 & + + the indirect comparison to the work of and the results of the four direct comparisons in this study are shown in table [ table : j1430 ] .in there are five different models tested for sdssj1430 + 4105 .the models were tested with gravlens / lensmodel ( ltm ) and lensview ( ltm ) , and the results compared in a direct comparison .the model used in the current study is based on model i , as described in , which models the lens as an sie , ignoring the environment of the lens .the best fitting parameters reported by are shown in table [ table : j1430 ] .the results of eichner are in good agreement with those by . in the sie model using lensview as reported by , their results were very similar to those with the lensmodel model .the input files for the model used by were not available for this study , making this study both an indirect and direct comparison . the lenstool , glafic and lensmodel models conducted in this study use image plane optimization .the enclosed mass calculated by pixelens inside the einstein radius , is slightly higher than the result published by .the einstein radii calculated by all the models are very close to each other as well as close to the result of .as shown in other lens systems in this study , there is considerable variation in magnification and time delay calculations among the four models studied as shown in table [ j1430td ] .the optimized ellipticities among the four models are all quite close , but there is significant variability in the optimal position angles calculated . the models used in this study ( results shown in tables [ table : j1430 ] and [ j1430td ] ) were written without detailed knowledge of the model used by . 
despite this , the models all had similar results , especially in regard to einstein radius , enclosed mass and velocity dispersion calculations .the image plane of the glafic model of this system is shown in figure 3 .the glafic ( figure 3 ) model resulted in just 4 images in the output image plane .in contrast , lenstool identified a total of 28 images .the position angles were somewhat different but there was good agreement among the models for ellipticity calculations .as with other models in this study , there was variation in the calculation of time delays and magnifications .one of the reasons for such close agreement among the models is that the models all used a single sie potential , which allowed for comparable potentials among the four lens model codes tested .there was a single lens plane in all of the models .lcccccccc[h ] lensmodel & & & 11.5 + sie & [ 0.0 ] & [ 0.0 ] & & & & & & + pixelens & [ 0.0 ] & [ 0.0 ] & & & & & 6.04 & + + lenstool & & & 4.9 + sie & [ 0.0 ] & [ 0.0 ] & 0.25 & & & & & + + lensmodel & & & 15.9 + sie & [ 0.0 ] & [ 0.0 ] & 0.30 & & & & & + + glafic & & & 2.4 + sie & [ 0.0 ] & [ 0.0 ] & 0.29 & & & & & + + lccccc[h ] pixelens + time delay & 0 & 0 & 0 & 0 & 0 + + lenstool + magnification & & & & & + time delay & 0 & & & & + + lensmodel + magnification & & & & & + time delay & 0 & & & & + + glafic + magnification & & & & & + time delay & 0 & & & 0 & + + an analysis of this lens system was performed by with a calculated einstein radius of ( or 3.0 kpc ) with an enclosed mass of .there have been no extensive lens model analyses of this system published to date .this is the first strong galaxy lens at . in all models ,the position ( both ra and dec ) of the lens galaxy was kept constant , and the mass was a free parameter optimized by the software .further details of the model used were not provided , such as the model software used or the calculation .results of the four direct comparisons done in this study are shown in table [ table : j1000 ] .this lens system was modeled both using an sis and an sie , with all lens model software tested .the lenstool , glafic and lensmodel models conducted in this study use image plane optimization .the pixelens model calculated the enclosed mass the same as reported by . using an sis potential ,the einstein radius , enclosed mass and velocity dispersion calculations were nearly the same for lenstool , lensmodel and glafic .the einstein radii and velocity dispersions were very close to that reported by .calculations of magnification and time delay showed quite a bit of variability in these models . the results of the models shown in table [ table : j1000 ] show very similar results for the sis and the sie models .the enclosed mass within the einstein radius is somewhat lower than that reported by for lenstool , lensmodel and glafic although the pixelens model reproduced the enclosed mass calculation very well .similar to the models used for sdssj1430 + 4105 , these models were all quite straightforward with a single potential located at the origin , which may have contributed to the concordance of results . 
comparing the results of the sie models ,the results with an sie model using the four software packages were also nearly identical , although among the sie models , there was some variability in the calculations of ellipticity and position angle .the image plane of the glafic model of this system is shown in figure 4 .this system is particularly interesting as the image positions in the lensmodel and glafic models have an almost identical geometry , while the image positions in the lenstool model are different .the time delays and magnifications in the lensmodel and glafic models are very similar , while the lenstool model values are different .lcccccccc[h ] + & & & & + & & & & & & 0.35 & & + pixelens & [ 0.0 ] & [ 0.0 ] & & & 2.3 & & 0.8 & + + lenstool + sis & [ 0.0 ] & [ 0.0 ] & 2.9 & & & & & + & & & 0.05 + lensmodel + sis & [ 0.0 ] & [ 0.0 ] & 5.8 & & & & & + & & & 0.23 + glafic + sis & [ 0.0 ] & [ 0.0 ] & 0.3 & & & & & + & & & 0.10 + & & & & & & & & + & & & & & & & & + lenstool + sie & [ 0.0 ] & [ 0.0 ] & 1.7 & & & & & + & & & 0.04 + lensmodel + sie & [ 0.0 ] & [ 0.0 ] & 1.7 & & & & & + & & & 0.12 + glafic + sie & [ 0.0 ] & [ 0.0 ] & 0.5 & & & & & + & & & 0.05 + there are some generalizations that can be made comparing the results calculated from the models for each of the four lens systems studied .the einstein radii and mass within the einstein radii are quite close for the four models of each system .the einstein radius is calculated from the average distance between the lens center and multiple images , and is insensitive to the radial density profile .the conversion from the einstein radius to the enclosed mass within the einstein radius is dependent only on the lens and source redshifts , and is therefore model independent .thus , the similar results for einstein radii and mass within the einstein radii are expected since all models had the same system geometry of and . there is variation among the calculated time delays and magnifications comparing the models generated by each of the four lens model software programs .the image positions input to each model were identical .the image positions in the models studied change due to the ray - tracing algorithms in each software model .these differences explain some of the variation seen in time delay and magnification . in some cases, the use of a similarly parameterized model leads to a model that has not converged appropriately , which illustrates some of the differences in the software .this is evident in the rms values calculated for the lenstool and lensmodel models of j1320 + 1644 there is also little agreement among calculations of ellipticity and position angle .the variation in results for calculated ellipticity and position angle may be a result of differences in the optimization algorithms used by lenstool , lensmodel and glafic .the complexity of the model also has an impact on agreement among the calculated values for velocity dispersion . in the models for sdss j1430 + 4105 , j1000 + 0021and sdss j1320 + 1644 ,there was only one potential with the velocity dispersion as a free - parameter for optimization .in all three of these systems , there was close agreement among the calculated values . in the model of cosmos j095930 + 023427, there were three lens potentials which were optimized , with quite a bit of variation among the results from the three software programs used . 
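to make the two model - independent steps above concrete , the short sketch below ( our own illustration , not code from any of the packages compared here ) converts an sis velocity dispersion into an einstein radius and then into the mass enclosed within that radius . the cosmological parameters are placeholder assumptions , and the example numbers ( 322 km s at the sdssj1430 + 4105 redshifts quoted earlier ) are used purely for illustration .

import numpy as np
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import FlatLambdaCDM

# assumed placeholder cosmology, not necessarily the one used in the study
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def sis_einstein_radius(sigma_v, z_lens, z_source):
    # einstein radius of a singular isothermal sphere, returned in arcsec
    d_s = cosmo.angular_diameter_distance(z_source)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    theta_e = 4.0 * np.pi * (sigma_v / c) ** 2 * (d_ls / d_s)
    return (theta_e.decompose().value * u.rad).to(u.arcsec)

def mass_inside_einstein_radius(theta_e, z_lens, z_source):
    # enclosed mass; depends only on the redshifts and the assumed cosmology
    d_l = cosmo.angular_diameter_distance(z_lens)
    d_s = cosmo.angular_diameter_distance(z_source)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
    theta_rad = theta_e.to(u.rad).value
    return ((c ** 2 / (4.0 * G)) * (d_l * d_s / d_ls) * theta_rad ** 2).to(u.Msun)

# illustrative numbers only: sigma = 322 km/s, z_l = 0.285, z_s = 0.575
theta_e = sis_einstein_radius(322.0 * u.km / u.s, 0.285, 0.575)
print(theta_e, mass_inside_einstein_radius(theta_e, 0.285, 0.575))

because the conversion from einstein radius to enclosed mass uses only the redshifts and the cosmology , any of the codes should agree on this step once they agree on the einstein radius itself .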
in order to evaluate the effect of software version and/or operating system / hardware platform , the model of sdss j1320 +1644 was evaluated with glafic and lensmodel on two different hardware platforms .glafic is distributed as an executable file with version 1.1.5 for the os / x platform and version 1.1.6 for linux .lensmodel is available as an executable file only for download as version 1.99o for the linux platform , and we were provided a version to run on os / x .input files for the models of sdss j1320 + 1644 were used unchanged . in the first test, the model was tested with the two versions of glafic .the mass of the first three sis potentials were held as fixed parameters and the mass of the fourth potential , as well as its position , were free parameters to be optimized .identical results were reported using either version of glafic , on both platforms .the results were identical including the numbers of models used for optimization in each run and the calculation of all parameters evaluated .the content of all output files produced by both versions was identical .the models for sdss j1320 + 1644 were then tested with each of the two versions of lensmodel . in this same test , optimizing the fourth sis potential , results with lensmodel were slightly different comparing the two versions . the optimized einstein radius of the fourth potential using the linux version is reported as 3.622605 , and the os / x version reports 3.622528 .there are similarly small differences in the optimized position of the fourth potential . in the next test ,the mass of all four potentials was optimized .the results with glafic , on both hardware platforms , were again identical in regard to all parameters evaluated , to the accuracy of the last decimal place reported .the contents of all output files produced by glafic were identical with the linux and os / x versions .however , the two versions of lensmodel reported widely disparate results with the two versions tested .the einstein radii of the four optimized sis potentials using the linux version are 1.851 , 1.004 , 0.3161 and 1.660 . using the os / x version ,the four potentials are optimized at 2.234 , 1.818 , 0.3139 and 2.006 . 
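to quantify how far apart the two builds are , the fractional differences between the einstein radii quoted above can be computed directly ( simple arithmetic on the numbers reported here , not output from lensmodel itself ) :

linux_re = [1.851, 1.004, 0.3161, 1.660]   # linux build , four optimized sis potentials
osx_re = [2.234, 1.818, 0.3139, 2.006]     # os / x build , same model file
for i, (a, b) in enumerate(zip(linux_re, osx_re), start=1):
    print(f"potential {i}: fractional difference = {abs(a - b) / a:.1%}")

the first , second and fourth potentials differ by tens of percent , while the third differs by less than one percent , which underlines that the disagreement between the two builds is not a uniform numerical offset .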
among the various studies reported in tables [ table : indep ] and [ table : depen ] , the software version used is reported in only one study . the hardware platform and/or operating system used in the calculations is not reported in any of the studies shown in these tables . a previous study showed that small changes in redshift have different effects on the calculation of time delays and mass by different lens model software codes . in that study , a mock model with a single potential and four images as well as a model of sdss j1004 + 4112 were evaluated and the effect of changes in redshift on changes in calculations of time delay and mass were determined . the study showed that changes in time delay and mass calculations are not always proportional to changes in , as would be predicted . the image positions change expectedly as a result of ray - tracing algorithms which are not the same for all of the software used . this is partly responsible for the differences in the values of time delay and mass in both systems when comparing the models from four different lens model software packages . the present study was designed specifically to compare the results of the same models run with different software , that is , to compare results from different codes rather than changes in the results within one code . the present study is the largest strong gravitational lens software comparison study performed to date , evaluating four different lens systems with four different lens model software codes in a single study , and is the first study to use hydralens for the preparation of multiple models .
[ table : indep ] systems and software used in indirect comparison studies :
abell1689 * * : lensperfect , zb , pixelens
sdssj1004 * * : glafic , grale , pixelens
cosmosj095930 : lenstool , lenstool
sdssj1430 : lensmodel / lensview * *
sdssj1320 : glafic
[ table : depen ] systems and software used in direct comparison studies ( previous and present ) :
sdssj1430 : lensview * * , lensmodel
abell1703 : zb , grale
ms1358 : zb , grale
macsj1206 : zb , lenstool , lensperfect , pixelens , sawlens * * *
sdss120602 : lensmodel , lensview * *
rxj1347.5 : glafic , pixelens
j1000 + 0221 : pixelens , lenstool , glafic , lensmodel
sdssj1430 : pixelens , lenstool , glafic , lensmodel
sdssj1320 : pixelens , lenstool , glafic , lensmodel
cosmosj095930 : pixelens , lenstool , glafic , lensmodel
in the indirect comparison of cosmosj095930 performed by and ,both analyses were conducted with lenstool , and had very similar results for einstein radius , mass enclosed within the einstein radius , and other parameters .it is difficult to discern the details of the model used by with regard to number , type and geometry of the lens potentials used .indirect comparisons are further complicated by a lack of available detail of the model used , making it difficult to reproduce previous results .table [ table : depen ] shows previous studies where different lens models were compared in the same study , as well as the evaluations performed in the present study , all of which constitute `` direct comparison studies '' .the direct comparisons performed of abell 1703 , ms1358 , macsj1206 and sdss120602 have been described in detail in .the information in these direct studies was complementary in nature , leading to a greater understanding of the lens system .the lens sdssj1430 was investigated by who compared the results using lensview and lensmodel .the lensmodel analysis assumes point sources while lensview uses the two - dimensional surface brightness distribution of the same system .both analyses led to the same conclusions regarding the mass distribution of the galaxy .the two lens model techniques were indeed complementary and led to similar results . in a comparative analysis of rx j1347.5 - 1145 using glafic and pixelens , the authors note a 13 percent difference in the calculation of mass enclosed within the einstein radius .they suggest that the ltm model used by glafic may not be assigning sufficient mass to the profiles in the models used . we observed a similar underestimation of enclosed mass by non - ltm models as compared to pixelens in the analysis of j1000 + 0021 .indirect comparison studies are of value , but as some of the comparisons conducted in this study show , it may be difficult to reproduce the results of previous studies without previous model files available to create the models for other software , thus limiting the nature of the comparisons performed . in the analyses of cosmos j095930 + 023427 andsdss j1320 + 1644 , being able to use the same models as used in the original studies , qualifies these as direct comparisons .this supports the importance of sharing lens model files in future studies . even in direct comparisons, the results with one model may not be exactly the same as with another because of the difficulty in translating some of the features of one model to another because of the differences in features of the available software .for example , it is not possible to parameterize a pixelens model exactly the same as a lenstool model because of inherent differences in the software .these differences may explain the observations of as well as some of the results in this study . despite best efforts to similarly parameterize two models , there still may be small differences .this suggests that using several models to understand a system may lead to improved understanding . 
in seeking agreement among various models , the number of free parameters for the lens potentials is an important factor . while there was reasonable agreement among the calculated values for einstein radius in single potential models , such as sdss j1430 + 4105 and j1000 + 0021 in this study , there was less agreement in a more complicated model such as cosmos j095930 + 023427 , which may be a reflection of using more lens potentials to describe the system . differences noted in time delay and magnification calculations may be due to differences in the image tracing algorithms used by each of the software models . the input image positions are the same in all models . the software calculates new positions based on the software specific ray - tracing algorithm used going from the source plane back to the image plane , resulting in differences in time delay results . the differences in optimization algorithms used also leads to some of the observed differences among the software models , with great variation in the calculation of ellipticity and position angle . these results demonstrate that there are significant differences in results using lens models prepared with different software , and are consistent with a previous study of differences in lens models . there is no intention to suggest that a particular group of models are necessarily more correct , but only to suggest that future lensing studies should evaluate lens models using several approaches to understand the system more thoroughly , as already being conducted in the hubble frontier fields project . based on the results of this study , in order to allow comparisons across studies , it will be important to use a consistent nomenclature for lensing studies , specifying indirect vs. direct comparisons , independent vs. semi - independent comparisons and the type of model being used as ltm vs. non - ltm , as we have previously described . furthermore , this study has shown at least in one situation that the software version used can significantly affect the results which stresses the importance of specifying the software version number being used in all future studies , in addition to the hardware / operating system platform . it is also suggested that more detail is provided in future studies to allow reproducibility of the models such as the number and types of potentials used along with the name of the potential used in the various software packages . one of the most important aspects of any scientific experiment is reproducibility . in gravitational lens model studies , this is impossible in many cases because the software is not available to other investigators , or the lens model files are not available . code - sharing of software in astrophysics is essential , as emphasized by . based on the studies reported here , the sharing of lens model files in gravitational lens studies is also essential to assure reproducibility and increased transparency in future gravitational lensing studies . another approach in lensing studies that has been successfully applied in weak lensing is computer challenges . the use of multiple approaches including comparative studies of lens models , open software , open lens model files , and computer challenges will help to assure increased transparency in future studies and enhance the results .
the contributions of rusu and oguri ( j1320 model , glafic ) and shuo and zhang ( j095930 model , lenstool ) are greatly appreciated . their willingness to freely share their models contributed significantly to this work . thanks is also expressed to professor c. keeton for providing the latest version of gravlens / lensmodel . we gratefully acknowledge the careful consideration of the anonymous reviewers which afforded us the opportunity to improve and clarify this manuscript . this research was supported by a grant - in - aid for scientific research from the jsps ( grant number 26400264 ) .
|
analysis of strong gravitational lensing data is important in this era of precision cosmology . the objective of the present study is to directly compare the analysis of strong gravitational lens systems using different lens model software and similarly parameterized models to understand the differences and limitations of the resulting models . the software lens model translation tool , hydralens , was used to generate multiple models for four strong lens systems including cosmos j095930 + 023427 , sdss j1320 + 1644 , sdssj1430 + 4105 and j1000 + 0021 . all four lens systems were modeled with pixelens , lenstool , glafic , and lensmodel . the input data and parameterization of each lens model was similar for the four model programs used to highlight differences in the output results . the calculation of the einstein radius and enclosed mass for each lens model was comparable . the results were more dissimilar if the masses of more than one lens potential were free - parameters . the image tracing algorithms of the software are different , resulting in different output image positions and differences in time delay and magnification calculations , as well as ellipticity and position angle of the resulting lens model . in a comparison of different software versions using identical model input files , results differed significantly when using two versions of the same software . these results further support the need for future lensing studies to include multiple lens models , use of open software , availability of lens model files use in studies , and computer challenges to develop new approaches . future studies need a standard nomenclature and specification of the software used to allow improved interpretation , reproducibility and transparency of results .
|
the science of the 21st century is , to a large extent , team science , operating globally , often cross disciplinary , and fully entangled with the web . the study of science as a specific , complex , and social system has been addressed by many research disciplines for quite some time . the availability of digital traces of scholarly activities at unknown scale and variety , together with the urgent need to monitor and control this growing system , is at the heart of knowledge economies and has brought the question of how best to measure , model , and forecast science back onto the research agenda . when reviewing the current models of science , it is clear there is no consistent framework of science models yet . existing models are often driven by the available data . for example , interdisciplinary bibliographic databases ( such as the web of science or scopus ) use the principle of citation indexing from the field of _ scientometrics _ to analyse the science system based on formal scholarly communication . typical output indicators are counts of publications , citations , and patents . they form the heart of the current `` measurement of science '' and have been taken up as data by network science and web science . this specific kind of output is , however , only a tiny fraction of information on science dynamics . traditionally , the measurement of science encompasses input indicators ( human capital , expenditure ) , output indicators , and , where possible , process information . research information systems , around since wwii in europe , are marking the shift to `` big science '' . however , the input side to science dynamics , in particular researchers , has been underrepresented in quantitative science studies for quite some time . this is partly due to the lack of databases and the problem of author ambiguity in the existing databases . information on researchers has been mainly collected , documented , and curated locally at individual scientific institutions - and in nation - wide research information systems , at least in european countries . the emergence of the web has transformed this situation completely . the web has become an important , if not the most important , information source for researchers and a platform for collaboration . the extent and diversity of the traces scholars leave on the web has called for _ alt metrics _ . it has also triggered the development of standards and ontologies capable of automatically harvesting this wealth of information , beyond existing traditional bibliographic references . the wealth of information provided on the web about researcher activities and their relations carries the potential for new insights into the global research landscape . but we are not yet at the point where this data can be both expressive enough to be useful and easy enough to consume . to illustrate the current situation we display the conceptual space of communities dealing with research information in the form of four mind maps ( _ c.f ._ figure [ fig : webscience ] ) . in the upper left corner we brought together concepts which are relevant from the perspective of scientific career research , a field often conducted qualitatively , with rich factual evidence which is hardly interoperable or scalable . for this mind node we drew on current discussions and first results in an fp7 framework programme , acumen , academic careers understood by measurements and norms ( see http://research-acumen.eu/ ) , where sociologists and scientometricians work together .
in the right lower corner we display the main classes of an ontology for research information ( vivo ) developed in the us . in the upper right corner , the main tables of a dutch research information database ( nod - narcis ) are displayed , and in the lower left corner is a selection of information and concepts which can be retrieved using different fields in one of the leading cross - disciplinary bibliographic databases - the web of knowledge . although the mind map sketches are different in nature , from formal schemes to collections of aspects , this illustration shows their difference in size , granularity , scope , and expression or semantics . in this work we argue for the need of a scalable , interoperable , and multi - layered data representation model for research information systems ( ris ) . science of science and modeling of science dynamics rise and fall with a consistent measurement system for the sciences . the contributions of this paper are as follows : * a highlight of the information loss happening when expressing data with generic ontologies ; * the introduction of the notion of levels of semantic agreement for expressing research data ; * a multi - layered ontology based on the above definition . the remainder of the paper describes the landscape of research data publication before diving into the details of a specific dutch case . we thereafter introduce our proposed multi - layer conceptual model for a research ontology and conclude on its potential for documenting research . in order to publish re - usable research data , one has to think in terms of standards and publication media . while the web imposes itself as the publication platform , the question of standards remains open and has been long investigated . first efforts in standardisation have been undertaken by the traditional research information communities . one example is the `` cerif '' standard developed by eurocris . this standard defines a set of generic classes and properties used to describe research data . the serialisation format used for the data is xml , although an rdf version is being considered . the content management system ( cms ) `` metis '' , popular in the netherlands , uses this standard to store and expose research data . this standard has also been used for the dutch portal `` narcis '' . the web of linked data is a way of combining the publication platform and the standards . more recent efforts have been made in this direction via a number of ontologies and publication platforms . the initiative linkeduniversities provides a reference to these systems and highlights their practical use . vivo , a united states based open source semantic web application , is another such system . the application both describes and publishes data , using rdf to encode the data and owl for the logical structure . in addition to its own classes and properties , the vivo ontology incorporates other standard ontologies , thus increasing its interoperability . however , the ontology relies heavily on the us academic model , which limits its ability to accurately represent researchers in other systems . vivo and cerif based cms have been successfully put to use at many institutions . still , the landscape of research information is very scattered and far from being connected . one of the reasons for this is a lack of agreement upon semantics for the data . efforts have been made to align vivo and cerif , but the main problem remains that data publishers essentially have to choose between using a globally agreed upon representation ,
which is less expressive as a result of covering a vast amount of heterogeneous information ( cerif ) , or a very expressive and specialised ontology ( vivo ) , which is difficult to map to other ontologies of similar complexity . in the netherlands , we find the following situation . all 13 universities ( 14 with the open university ) use a system called metis to register and document their research information . in practice , information is usually entered in metis centrally by a person in the administration , although sometimes individual accounts to metis are created . aside from those unconnected local implementations of one system , higher education in the netherlands embraced the open access movement with a project called dare . this led to an open repository for scientific publications . moreover , a web portal to dutch research information exists - narcis - which harvests publications from open repositories , but also entails a very well curated ( and still manually edited ) research information database ( nod ) with information about the scientific staff of about 400 university and outside university research institutions . as oskam and other dutch researchers already pointed out in 2006 , `` the researcher is key '' . outside of institutional ris , this idea is prolific in web 2.0 . platforms such as mendeley and academia.edu have been designed around the needs of scholars . general social network sites such as linkedin - which is very popular among professionals in the netherlands - and facebook also profile themselves as outlets for individual researchers . this leads to a situation where user - content driven systems compete for the limited time and resources of an individual researcher and where , as a result , snippets of the oeuvre and academic journey of a researcher can be found at different places , recorded in different standards , and with different accuracy . the question raised in the 2006 paper : `` how can we make the cris a valuable and attractive ( career ) tool for the researcher ?
'' is still waiting to be answered in a standardized way . the purpose of documentation of science ( and of careers of researchers ) has grown far beyond effective information exchange . research evaluation relies heavily on indicators computed ( semi ) automatically from databases and the web . currently , individual careers of researchers are very much influenced by indicators which are built on activities for which large amounts of standardised data are available . prominent examples are the journal impact factor and the h index . but a researcher is not just a `` paper publication machine '' . grant acquisition is another important `` currency '' in the academic market - for individuals on the job market , as well as for institutions competing for funding . teaching is an area which is monitored locally and institutionally , but for which no cross - institutional databases exist . moreover , researchers are no longer loyal to one institution , one country , or one discipline for their whole life . there is an increasing need for cross - discipline and cross - institutional mapping of whole careers . projects such as acumen look into current practices of evaluation and peer review to empower the individual researcher and develop guidelines for how best to present one 's academic profile to the outside world . `` acumen '' is the acronym for academic careers understood through measurements and norms . in this project , we analyse the use of a wide range of indicators - ranging from traditional bibliometrics to alt - metrics and metrics based on web 2.0 - for the evaluation of the work of individual academics . one of the authors of the present work , frank van der most , also conducted interviews to investigate the impact or influence of evaluations on individual careers . for his work , the following events are of interest in tracking an academic career : * birth of the academic ; * acquisition of diplomas and titles , in particular ma diplomas ( and equivalents ) , phd / dr . diplomas , habilitation , professorships of sorts and levels ; * jobs , in universities and academic research institutes , but also in non - academic organisations . the latter is interesting because people move in , out , and sometimes back into academia ; * particular functions within or as part of the job(s ) : director of studies ( teaching ) , research - coordinator , head of department , dean , vice dean ( for research , education , or other ) , vice - chancellor / rector , board member of faculty / school / university / institute ; * launch of start - ups / spin - outs or people 's own companies . it could simply be a form of employment , but start - ups or own companies may indicate economic or other societal value of academic work ; * prizes ; * retirement and decease . for the study of the impact , or influence , of evaluations , an overview of someone 's career is necessary to `` locate '' influential evaluations . this `` location '' has multiple dimensions . one is the calendar time , _i.e.
_ on which date or in which year did an influential evaluation take place .based on time , geographic , and institutional location the context of a particular evaluation event can be reconstructed .scientific careers follow patterns which are influenced by current regimes of science dynamics ( including evaluations ) .another important dimension concerns the location of an evaluation ( or any event ) within someone s career .if two academics apply for the same job , the location in time and place is the same , but if one is an early - career researcher and the other is halfway through his / her career , this clearly makes a large difference to how their applications are being evaluated and how the evaluation results are likely to impact their respective careers .a rejection may have a bigger impact on the early - career researcher than on the mid - career researcher .another acumen sub - project investigates gender effects of evaluations and includes an analysis of performance indicators on research careers .this is planned to be a statistical analysis which would require some form of career descriptions .one of acumen s central aims is to identify and investigate bibliometric indicators that can be used in the evaluation of the work of individual researchers .a major point discussed in the acumen workshops is the realisation that researchers have a career or a life - cycle which contextualises the values of bibliometric indicators .although the events listed above are interesting for acumen , these events , or a sub - set or extension thereof , is likely to be interesting to many career studies .for example , productivity - studies would relate academic production of texts , courses taught , and other outputs to someone s career stage or career paths .an academic s epistemic development ( their research agenda ) could be studied in relation to career stages or mobility . to be able to trace the co - evolution of individual career paths and the social process of science for larger part of science, one would need a different kind of information depending on the study being undertaken .the challenge when designing a standard for sharing data is to make it generic enough so that aggregation makes sense , while being specific enough so institutions can express the data they need . as it is highlighted by the two most popular search tools , consuming data exposed via vivo from a number of external sources at the international level , only the most general concepts such as `` people '' make sense .on the opposite , the search features offered by a national portal such as narcis proposes a number of refined search criteria .these two extremes of the data mash - up scale show that depending on the study being done , different levels of semantics agreement are likely to be put into use . 
in contrast to xml schemas , semantic web technologies make it possible to express data using a highly specified model while also making it available using a more general model . the technology of particular importance here is `` reasoning '' , that is , the entailment of additional valid factual information from the facts already contained in the knowledge base . for instance , if an rdf knowledge base contains a fact asserting that `` a is a _ researcher _ '' and another stating that `` every _ researcher _ is a _ person _ '' , the system will infer that `` a is a _ person _ '' . leveraging this , it is possible to extend ontologies by refining the definition of classes and properties . the most refined versions of the concepts will inherit from their parents . we argue that for research information systems , three levels are necessary ( see figure [ fig : layers ] ) . first , an international level containing a set of core concepts that can be used to build data mash - ups on an international scale . then , a national level extending the previous core level with concepts commonly agreed upon nationwide ( _ e.g. _ positions ) . last , an institutional level where every institution is free to further refine the previous level with its own concepts and properties that matter to its network . as a feasibility assessment and to propose a first model , we hereafter introduce a core ontology and two national extensions . this proposal is based on related work , existing ontologies , and our personal experience , but stands more as a first iteration of work in progress than as a definitive model . conceptual models allow for the representation of classes and properties of a knowledge base , along with their relations , in an abstracted way . the proposed conceptual models that we hereafter introduce are not dependent on the technical solution implementing them . there is , however , as highlighted previously , an advantage in using semantic web technologies for this . this point is discussed in detail in the following , after the introduction and the description of the three proposed conceptual models . the model depicted in figure [ fig : core ] is a proposal for a core research ontology based on the work being done on cerif , the vivo ontology , the core vocabularies , and the data needs of acumen .
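the researcher / person example above can be sketched concretely with rdflib ( a minimal illustration using an invented example.org namespace , not the actual acumen , cerif or vivo vocabularies ) :

from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/core#")   # invented namespace for the sketch
g = Graph()
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))   # every researcher is a person
g.add((EX.a, RDF.type, EX.Researcher))               # a is a researcher

# hand - rolled rdfs subclass closure ; a real deployment would delegate this
# step to an off - the - shelf reasoner rather than doing it by hand
for instance, _, cls in list(g.triples((None, RDF.type, None))):
    for superclass in g.transitive_objects(cls, RDFS.subClassOf):
        g.add((instance, RDF.type, superclass))

print((EX.a, RDF.type, EX.Person) in g)   # True : the inferred fact

the same mechanism is what lets the refined classes introduced at the national and institutional levels below remain queryable through their more general parents .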
as part of its goal to study the scientific career through the research data made available , acumen needs a range of information related to individuals , such as but not limited to : * grants / project applications - both applied for and granted . this is in relation to persons ( applicants of various sorts ) and organisations ( applying / receiving institutes , main and sub - contractors , funding institutes ) ; * skills . for instance , `` leadership '' or `` artificial intelligence '' . there is no limit to the definition and several thesauri could be used ; * networks or network relations . relations between persons and organisations , but also between persons and results , are of particular importance ; * memberships of scientific associations or academies ; * conferences visited or organised . the model contains classes to define individuals , projects , scientific output , positions and tasks . a generic `` relation '' can be established between authors and papers , or teachers and courses taught . the exact meaning of the relation is to be defined either by sub - classes of it or by using the property `` role '' . the second level of semantic agreement is that of national extensions . based on the core concepts , these extensions allow for the modeling of concepts actually used in the country - using the language and terminology of that country . when building such an extension , the main assumption made is that there is a level of agreement that can be reached on a national basis . an example of a national extension is given in figure [ fig : extensions ] . these extensions extend the core `` position '' and `` organization '' classes to define the types of positions and organisations commonly found in the netherlands ( figure [ fig : dutchext ] ) and the us ( figure [ fig : usext ] ) . the classes depicted in the dutch extension are those found in narcis , and as such represent the union set of all the specific classes used within the research institutions in the netherlands . it can be observed that the dutch extension shows a high level of variety , with some classes that could be replaced with other model mechanisms , such as the `` part time hoogleraar '' class , which is actually a `` hoogleraar '' contracted for fewer hours . we also note from figure [ fig : usext ] that the national level has to be kept generic in the us because of the variation observed locally .
in the us, many titles and/or positions are essentially at the discretion of the individual institutions ( with some direction from the american association of university professors ( aaup ) ) , thus a very detailed national ontology is not appropriate .however , for countries with a more centralised model and using title and positions officially described , more detail can be added at this level thus increasing semantic understanding .the national level allows for this grey area adaption instead of the current two level `` very general '' to `` very specific '' model .local extensions are the most specific level of specification we propose for this approach .these can be used to specify concepts and relations that are understood within a given sub community inside a country .for instance , in the netherlands , the research institution knaw defines an additional position `` akademiehoogleraar '' for `` hoogleraar '' which are appointed to universities but directly affiliated to knaw .this additional position is only used by some institutions and for this academy - here , the `` akademie '' in `` akademiehoogleraar '' implicitly refers to knaw .prior to its concrete use , the proposed conceptual models have to be turned into an rdf based vocabulary .this vocabulary also has to be hosted under a domain name .there are a large number of vocabularies published on the web .the proposed models can effectively leverage most of their properties and classes from one of these existing sources of terms , having fewer new terms to introduce .in particular , the following vocabularies are to be considered : * foaf , for the description of the persons ; * bibo , for the publications ; * lode , for the description of events ; * skos , for the description of thesaurus terms such as those used to describe researchers skills ; * prov - o , to add additional provenance information to the data being served .we also note that , by design , there is a significant overlap between the conceptual model of figure [ fig : core ] and that defined in the core vocabularies for person , location and registered organisations in .this allows for the proposed core vocabulary for research to be defined based on these other core vocabularies defined by joinup and formalised by the w3c in the context of the working group on governmental linked data ( gld ) .the domain name at which an ontology is being served is , as for the data itself , often seen as indication of the person , or entity , in charge of supporting the ontology . to account for this, we envision the hosting of the core ontology and its extensions done at institutions matching the scope of the level of agreement .that is , an international organisation for the international layer , a national organisation for the national layer , and the institutions themselves for the local extensions .more concretely , such an hosting plan could be materialised as having : the core ontology being served by the w3c , the dutch national ontology by the vsnu , and the local extension from the knaw by the knaw .this paper operates at different levels . 
at the coreit proposes a model to semantically describe data in research information systems in a way which allows to aggregate but also to deconstruct if needed .it does so based on experiences with standards and data representation in the past and looking into very concrete practices - taking a vivo implementation exercise in the netherlands as point of reference and departure .a next shell of considerations around those specific mappings is added when we incorporate research outside of the traditional area of scientific information and documentation .science and technology studies , science of science , and scientometrics have produced over decades of insights in the structure and dynamics of the science system .a wealth of information is available in this area , most of it case - based evidence .we include the aims and achievements of an on - going eu fp7 funded project ( acumen ) which , in itself tries to combine bibliometric and indicator - based research with interviews , survey , and literature studies .the target subject of this project is the researcher .it is also the researcher which is targeted by research information systems , and it is the researcher which is the innovative driver for science dynamics .bibliometric indicators are heavily based on standards , part of them shared with ris .what makes the acumen project and the perspective of scientific career research so interesting for the design of future research information systems is the identification of factors relevant for career development which are not yet covered by current standards , databases , or ontologies .the last and most visionary shell in this paper is to design research information systems which can be used for science modeling . in the general framework developed by borner et al .science models can be developed at different scales of the science system , from the individual research up to the global science system ; they can differ in geographic coverage , as well as , in scales of time . in any case, the ideal would be having one data representation which can be scaled up and down along those different dimensions , and not singular data samples in incomparable measurement units not relatable for particular areas of the dynamics of science .our main argument is to provide a data representation which is retraceable - if needed - towards its specific roots and at the same time can be aggregated .in such a `` measurement system '' we would find a middle layer of data granularity on which basis complex , non - linear models can be validated and implemented , to better monitor and understand the science system .this work has been supported by the acumen project fp7 framework .we would like to think our colleagues ying ding , katy borner , and chris baars for their comments and support during this work .e - government core vocabularies : the semic.eu approach , 2011 . retrieved from european commission : http://joinup.ec.europa.eu/sites/default/files/egovernment-core-vocabularies.p df [ http://joinup.ec.europa.eu/sites/default/files/egovernment-core-vocabularies.p df ] .cerif 1.3 semantics : research vocabulary , 2012 . retrieved from http://www.eurocris.org/uploads/web%20pages/cerif-1.3/specifications/cerif1.3_semantics.pdf [ http://www.eurocris.org/uploads/web%20pages/cerif-1.3/specifications/cerif1.3_se mantics.pdf ] . core vocabularies specification , 2012 . 
retrieved from european commission: https://joinup.ec.europa.eu/sites/default/files/core_vocabularies-business_loca tion_person - specification - v0.2_1.pdf [ https://joinup.ec.europa.eu/sites/default/files/core_vocabularies-business_loca tion_person - specification - v0.2_1.pdf ] .brner , k. , boyack , k. , milojevi , s. , and morris , s. an introduction to modeling science : basic model types , key definitions , and a general framework for the comparison of process models . in _ models of science dynamics _ , a. scharnhorst , k. brner , and p. van den besselaar , eds . , understanding complex systems .springer berlin heidelberg , 2012 , 322 .brner , k. , contractor , n. , falk - krzesinski , h. , fiore , s. , hall , k. , keyton , j. , spring , b. , stokols , d. , trochim , w. , and uzzi , b. a multi - level systems perspective for the science of team science . , 49 ( 2010 ) , 49cm24 . dijk , e. narcis , linking criss and oars in the netherlands : a matter of standards and identifiers . , 2010. position paper presented at the eurocris workshop on cris , cerif and institutional repositories , rome , 10 - 11 may 2010 .hoekstra , r. , breuker , j. , di bello , m. , and boer , a. the lkif core ontology of basic legal concepts . in _ proceedings of the workshop on legal ontologies and artificial intelligence techniques ( loait ) _ ( 2007 ) .niles , i. , and pease , a. towards a standard upper ontology . in _ proceedings of the 2nd international conference on formal ontology in information systems ( fois ) _ ( 2001 ) .http://www.ontologyportal.org/. oskam , m. , simons , h. , and mijnhardt , w. harvex : integrating multiple academic information resources into a researcher s profiling tool . in_ enabling interaction and quality : beyond the hanseatic league ( 8th international conference on current research information systems ) _ , a. g. s. s. e. j. asserson , ed . , leuven university press ( 2006 ) , 167177 .reijnhoudt , l. , stamper , m. j. , brner , katy ; baars , c. , and scharnhorst , a. narcis : network of experts and knowledge organizations in the netherlands , 2012 .http://cns.iu.edu/research/2012_narcis.pdf , accessed january 26 , 2013 .van der most , f. the role of evaluations in the development of researchers careers . a conceptual frame and research strategy for a comparative studyposter presented at the conference ` how to track researchers ' careers. , luxembourg , 9 - 10 february 2012 .( unpublished , contact author ) , 2012 .wouters , p. , and costas , r. users , narcissism and control - tracking the impact of scholarly publications in the 21 st century .surffoundation.nl / nl / publicaties / documents / users% 20narcissism%20and%20control.pdf[http://www .surffoundation.nl / nl / publicaties / documents / users% 20narcissism%20and%20control.pdf ] , accessed january 25 , 2013 .
|
The web not only enables new forms of science; it also creates new possibilities for studying science and for new digital scholarship. This paper brings together multiple perspectives: from individual researchers seeking the best options to display their activities and market their skills on the academic job market, to academic institutions, national funding agencies, and countries needing to monitor the science system and account for the spending of public money. We also address research interests aimed at better understanding the self-organising and complex nature of the science system through researcher tracing, the identification of emerging fields, and knowledge discovery using large-scale data mining and non-linear dynamics. In particular, the paper draws attention to the need for standardisation and data interoperability in the area of research information as an indispensable precondition for any science modelling. We discuss which levels of complexity are needed to provide a global, interoperable, and expressive data infrastructure for research information. With possible dynamic science-model applications in mind, we argue for a "middle-range" level of complexity for data representation and propose a conceptual model for research data based on a core international ontology with national and local extensions.
|
this work is part of the mose ( modeling eso sites ) project , a feasibility study for the turbulence prediction at two eso sites : cerro paranal ( site of the vlt ) and cerro armazones ( future site of the e - elt ) .one of the main goal of the mose project is to supply a tool for optical turbulence forecasts to support the scheduling of the scientific programs and the use of ao facilities at the vlt and at the e - elt . in a joint paper , masciadri & lascaux presented the results of numerical simulations of 20 different summer nights ( in november and december 2007 ) with the mesoscale model meso - nh with a standard configuration using 3 imbricated domains with the innermost horizontal resolution equal to =0.5 km .the results are already very encouraging , especially in the free atmosphere , with a good prediction of the meteorological parameters important to the optical turbulence estimation ( temperature , wind speed ) .the main discrepancies concerning the meteorological parameters between meso - nh predictions and observations have been identified near the surface ( between 2 m and 30 m above the ground ) , for the wind speed . to overcome these specific limitations of the standard configuration ,we discuss in this study the impact of the use of a higher horizontal resolution numerical configuration .two domains with a horizontal resolution equal for both to =0.1 km , centered above cerro paranal and cerro armazones , have been added .it is very important to well predict meteorological parameters such as wind speed and temperature , and their temporal evolution .indeed , the intensity of the optical turbulence , characterized by the structure constant of the refractive index , mainly depends on their values and gradients .our team have already used so far the meso - nh model to perform some meteorological , and optical turbulence , numerical simulations , at other sites ( san pedro martir , mount graham , antarctica ) discussing its abilities in predicting meteorological and astroclimatic parameters , but with less rich data samples .two sites are investigated in this study : cerro paranal , site of the eso very large telescope ( vlt ) , and cerro armazones , future site of the eso european extremely large telescope ( e - elt ) . at paranal , observations of meteorological parameters near the surface come from an automated weather station ( aws ) and a 30 m high mast including a number of sensors at different heights .both instruments are part of the vlt astronomical site monitor .absolute temperature data are available at 2 m and 30 m above the ground .wind speed data are available at 10 m and 30 m above the ground . at armazones , observations of the meteorological parameters nearthe ground surface come from the site testing database , more precisely from an aws and a 30 m tower ( with temperature sensors and sonic anemometers ) .data on temperature and wind speed are available at 2 m , 11 m , 20 m and 28 m above the ground . at 2 m , at armazones , temperature measurements from the aws and the sonic anemometers are both available but we consider only those from the tower ( accuracy of 0.1 ) .those from the aws are not reliable because of some drift effects ( t. travouillon , private communication ) .wind speed observations are taken from the aws ( at 2 m ) and from the sonic anemometers of the tower ( at 11 m , 20 m and 28 m ) .the outputs are sampled at 1 minute intervals . 
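The comparison between model output and the AWS/mast measurements described above ultimately reduces to computing simple error statistics (bias, RMSE) on paired time series. The sketch below shows such a computation in Python; it is a generic illustration with made-up arrays, not the MOSE analysis code, which would operate on the real Meso-NH outputs and the ESO/TMT observations.

....
import numpy as np

def bias_and_rmse(model, obs):
    """Bias (model minus observation) and RMSE over the common valid samples."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    valid = ~(np.isnan(model) | np.isnan(obs))   # ignore missing observations
    diff = model[valid] - obs[valid]
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Illustrative wind-speed series (m/s) at one level, e.g. 10 m above the ground.
obs_ws   = np.array([4.2, 5.1, np.nan, 3.8, 6.0, 5.5])
model_ws = np.array([3.1, 4.0, 4.5,    2.9, 5.2, 4.6])

bias, rmse = bias_and_rmse(model_ws, obs_ws)
print(f"bias = {bias:+.2f} m/s, rmse = {rmse:.2f} m/s")
....

A negative bias at the near-surface levels corresponds to the under-estimation of the low-level wind reported for the standard configuration; the higher-resolution runs are then judged by how much this bias shrinks.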
The intensity of the optical turbulence is mainly driven by the gradients of some meteorological parameters (wind speed, temperature) that determine the vertical profiles of the structure constant of the refractive index, $C_N^2$. As these vertical profiles of $C_N^2$, from which derive all the astroclimatic parameters useful for astronomers (seeing, wavefront coherence time, isoplanatic angle, ...), are characterized by a maximum in the surface layer, a good knowledge and prediction of the surface meteorological parameters is mandatory to obtain a good forecast of the optical turbulence at a given site. In this study we have investigated the impact of very high horizontal resolution on the prediction of meteorological parameters (temperature and wind speed) near the surface at two mountainous sites: Cerro Paranal (site of the VLT) and Cerro Armazones (future site of the E-ELT). In the standard configuration, which is less demanding in computing resources, the results are already very encouraging over the entire atmosphere above the sites of interest. Nevertheless, some discrepancies in the wind-speed prediction near the ground were present, with low-level simulated winds lower than the observations. The work presented in this study addressed this particular problem by testing a very high horizontal resolution configuration (5DOM, with a horizontal resolution of 0.1 km for the innermost domains), which is more demanding in computing resources. In this preliminary study (only 9 nights for Cerro Paranal and 5 nights for Cerro Armazones were investigated), we demonstrated that a very high horizontal resolution significantly improved the performance of the model in proximity to the ground. Concerning the temperature, not only did the prediction of the low-level temperature remain very good (with biases below 1 °C at all levels at both sites, and even equal to 0 at 2 m at Cerro Paranal), but the day-to-night temperature gradient inversion was more accurately reproduced. Concerning the wind speed, the comparison between 5DOM simulations and observations gave biases reduced by more than half at all levels, at both sites, with respect to the standard 3DOM configuration.

The meteorological data set from the automatic weather station (AWS) and mast at Cerro Armazones is from the Thirty Meter Telescope site testing public database server. The meteorological data set from the AWS and mast at Cerro Paranal is from the ESO Astronomical Site Monitor (ASM, Doc. N. VLT-MAN-ESO-17440-1773). We are very grateful to the whole staff of the TMT site testing working group for providing information about their data set, as well as to Marc Sarazin for his constant support of this study and for providing us with the ESO data set used here. Simulations were run partially on the HPCF cluster of the European Centre for Medium-Range Weather Forecasts (ECMWF), project SPITFOT. This study is co-funded by the ESO contract E-SOW-ESO-245-0933.

E. Masciadri and F. Lascaux, "MOSE: a feasibility study for optical turbulence forecast with the Meso-NH mesoscale model to support AO facilities at ESO sites (Paranal, Armazones)", _SPIE Astronomical Telescopes and Instrumentation_, Amsterdam, 1-6 July, 2012. J. P. Lafore, J. Stein, N. Asencio, P. Bougeault, V. Ducrocq, J. Duron, C. Fischer, P. Hereil, P. Mascart, V. Masson, J.-P. Pinty, J.-L. Redelsperger, E. Richard and J.
vil - guerau de arellano , `` the meso - nh atmospheric simulation system .part i : adiabatic formulation and control simulations '' , _ annales geophysicae _ , 16 , pp .90 - 109 , 1998 .e. masciadri , r. avila and l. j. snchez , `` statistic reliability of the meso - nh atmospherical model for 3d simulations '' , _ rev .astrofisica _ , 40 , pp . 3 - 14 , 2004 .s. hagelin , e. masciadri and f. lascaux , `` wind speed vertical distribution at mt graham '' , _ mnras _ , 407 , pp .2230 - 2240 , 2010 .s. hagelin , e. masciadri and f. lascaux , `` optical turbulence simulations at mt graham using the meso - nh model '' , _ mnras _ , 412 , pp .2695 - 2706 , 2011 .f. lascaux , e. masciadri , s. hagelin and j. stoesz , `` mesoscale optical turbulence simulations at dome c '' , _ mnras _ , 398 , pp .1093 - 1104 , 2009 .f. lascaux , e. masciadri , s. hagelin and j. stoesz , `` mesoscale optical turbulence simulations at dome c : refinements '' , _ mnras _ , 403 , pp .1714 - 1718 , 2010 .f. lascaux , e. masciadri and s. hagelin , `` mesoscale optical turbulence simulations above dome c , dome a and south pole '' , _ mnras _ , 411 , pp . 693 - 704 , 2011 .s. sandrock , r. amestica , `` vlt astronomical site monitor - asm data user manual '' , _ doc no .: vlt - man - eso-17440 - 1773 _ , 1999. m. schoeck , s. els , r. riddle , w. skidmore , t. travouillon , r. blum , e. bustos , g. chanan , s. g. djorgovski , p. gillett , b. gregory , j. nelson , a. otrola , j. seguel , j. vasquez , a. walker , d. walker and l. wang , `` thirty meter telescope site testing i : overview '' , _ pasp _ , 121 , pp . 384 - 395 , 2009 . w. skidmore , t. travouillon and r. riddle , `` report of the calibration of the t2-armazones , 30-m tower air temperature sensors and sonic anemometers , the cross comparison of weather stations and sonic anemometers and turbulence measurements of sonic anemometers and finewire thermocouples '' , _ internal tmt report _f. lipps and r. s. hemler , `` a scale analysis of deep moist convection and some related numerical calculations '' , __ , 39 , pp . 2192 - 2210 , 1982 . t. gal - chen and c.j .sommerville , `` on the use of a coordinate transformation for the solution of the navier - stokes equations '' , _j. comput ._ , 17 , pp . 209 - 228 , 1975 .a. arakawa and f. messinger , `` numerical methods used in atmospheric models '' , _ garp tech ._ , 17 , wmo / icsu , geneva , switzerland , 1976 .r. asselin , `` frequency filter for time integration '' , _ mon . weather . rev ._ , 100 , pp . 487 - 490 , 1972 .j. cuxart , p. bougeault and j .- l. redelsperger , `` a turbulence scheme allowing for mesoscale and large - eddy simulations '' , _ q. j. r. meteorol_ , 126 , pp . 1 - 30 , 2000 .p. bougeault and p. lacarrre , `` parameterization of orographic induced turbulence in a mesobeta scale model '' , _ mon ._ , 117 , pp .1972 - 1890 , 1989. j. noilhan and s. planton , `` a simple paramterization of land surface processes for meteorological models '' , _ mon ._ , 117 , pp . 536 - 549 , 1989 . j. stein , e. richard , j .- p .lafore , j .-pinty . , n. asencio and s. cosma , `` high - resolution non - hydrostatic simulations of flash - flood episodes with grid - nesting and ice - phase parameterization '' , _ meteorol ._ , 72 , pp .203 - 221 , 2000 .g. farr , et al ., `` the shuttle radar topography mission '' , _ rev ._ , 45 , rg2004 , 2007 .
|
In the context of the MOSE project, in this contribution we present a detailed analysis of the Meso-NH mesoscale model performance and its dependency on the model and orography horizontal resolutions in proximity to the ground. The investigated sites are Cerro Paranal (site of the ESO Very Large Telescope, VLT) and Cerro Armazones (site of the ESO European Extremely Large Telescope, E-ELT), in Chile. At both sites, data from a rich statistical sample of different nights are available, from AWS (automated weather stations) and masts, giving access to wind speed, wind direction and temperature at different levels near the ground (from 2 m to 30 m above the ground). In this study we discuss the use of a very high horizontal resolution (0.1 km) numerical configuration that overcomes some specific limitations identified with the standard configuration at 0.5 km. At both sites the results are very promising. The study is co-funded by ESO and INAF.
|
The World Wide Web is a distributed document and media repository. Hyper-Text Markup Language (HTML) documents reference other HTML documents and media (e.g. images, audio, etc.) by means of an href citation. The resulting document citation graph has been the object of scholastic research as well as a component utilized in web page ranking. Similarly, the Semantic Web is a distributed resource identifier repository. The Resource Description Framework (RDF) serves as one of the primary standards of the Semantic Web. RDF provides the means by which Uniform Resource Identifiers (URIs) are interrelated to form a multi-relational, or edge-labeled, graph. If $U$ is the set of all URIs, $L$ is the set of all literals, and $B$ is the set of all blank (or anonymous) nodes, then the Semantic Web RDF graph is defined as the set of triples $G \subseteq (U \cup B) \times U \times (U \cup B \cup L)$. Given that the URI is the foundational standard of both the World Wide Web and the Semantic Web, the Semantic Web serves as an extension to the World Wide Web in that it provides a semantically rich graph overlay for URIs. Thus, the Semantic Web moves the web beyond the simplistic href citation into a rich relational structure that can be utilized for numerous end-user applications. The Linked Data community is actively focused on integrating RDF data sets into a single connected data set. The Linked Data model allows "[any man or machine] to start with one data source and then move through a potentially endless web of data sources connected by RDF links. Just as the traditional document web can be crawled by following hypertext links, the web of data can be crawled by following RDF links. Working on the crawled data, search engines can provide sophisticated query capabilities, similar to those provided by conventional relational databases. Because the query results themselves are structured data, not just links to HTML pages, they can be immediately processed, thus enabling a new class of applications based on the web of data."

While the Linked Data community has focused on providing a distributed data structure, it has not focused on providing a distributed process infrastructure. Unfortunately, if only a data structure is provided, then processing that data structure will lead to what has occurred with the World Wide Web: a commercial industry focused on downloading, indexing, and providing search capabilities over that data. For the problem space of keyword search, this model suffices. However, the RDF data model is much richer than the World Wide Web citation data model. If data must be downloaded to a remote machine for processing, then only so much of the web of data can be processed in a reasonable amount of time. This ultimately limits the sophistication of the algorithms that can be executed on the web of data. The RDF data model is rich enough to conveniently support the representation of relational objects and their computational instructions. Moreover, with respect to searching, the RDF data model requires a new degree of sophistication in graph analysis algorithms. For one, the typical PageRank centrality calculation is nearly meaningless on an edge-labeled graph. Leaving this algorithmic requirement to a small set of search engines will ultimately yield a limited set of algorithms and not a flourishing democracy of collaborative development. As a remedy to this situation, a distributed process infrastructure (analogous in many ways to the grid) may be a necessary requirement to ensure the accelerated, grass-roots use of the web of data, where processes are migrated to the data, not data to the processes. In such a model, computational clock cycles are as open as the data upon which they operate. With respect to the web of data as a distributed RDF data structure, this article presents a graph analysis of the March 2009 Linked Data cloud visualization that was published on February 27, 2009 by Chris Bizer. The remainder of this article is organized as follows. Section [sec:construction] articulates how the Linked Data cloud graph was constructed from the February 27 Linked Data cloud visualization.
Section [sec:statistics] provides a collection of standard graph statistics for the constructed Linked Data cloud graph. Finally, section [sec:structural] provides a more in-depth analysis of the structural properties of the graph.

The current Linked Data cloud visualization was published by Chris Bizer on February 27, 2009. This visualization is provided in figure [fig:lod-cloud]. The Linked Data cloud visualization represents the various data sets as vertices (i.e. nodes) and their interlinking relationships as directed unlabeled edges (i.e. links). Moreover, it is assumed that vertex size denotes the number of triples in the data set and edge thickness denotes the extent to which one data set interlinks with another. Data set $i$ links to data set $j$ if data set $i$ has a URI that is maintained (according to namespace) by data set $j$. In this way, by resolving a data set $j$ URI within data set $i$, the man or machine is able to traverse to data set $j$ from $i$. A manual process was undertaken to turn the Linked Data cloud visualization into a Linked Data cloud graph denoted $G = (V, E)$, where $V$ is the set of vertices (i.e. data sets), $E$ is the set of unlabeled edges (i.e. data set links), and $E \subseteq V \times V$. The link weights and the node sizes in the original visualization were ignored. A new visualization of the manually generated Linked Data cloud graph is represented in figure [fig:lod-graph]. The properties of this visualization are discussed throughout the remainder of this article.

Given the constructed Linked Data cloud graph visualized in figure [fig:lod-graph], it is possible to calculate various graph statistics. A collection of standard graph statistics is provided in table [tab:graphstats], whose caption reads: a collection of standard graph statistics for the Linked Data cloud graph represented in figure [fig:lod-graph].

The Linked Data initiative is focused on unifying RDF data sets into a single global data set that can be utilized by both man and machine. This initiative is providing a fundamental shift in the way in which data is maintained, exposed, and interrelated. This shift is both technologically and culturally different from the relational database paradigm. For one, the address space of the web of data is the URI address space, which is inherently distributed and infinite. Second, the graph data structure is becoming a more accepted, flexible representational medium and, as such, may soon displace the linked-table data structure of the relational database model. Finally, with respect to culture, the web of data maintains publicly available interrelated data. In the relational database world, rarely are database ports made publicly available for harvesting and rarely are relational schemas published for reuse. The Semantic Web, the Linked Data community, and the web of data are truly emerging as a radical rethinking of the way in which data is managed and distributed in the modern world.

A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener, "Graph structure in the web," in _Proceedings of the 9th International World Wide Web Conference_, Amsterdam, Netherlands, May 2000. E. Oren, B. Heitmann, and S. Decker, "ActiveRDF: embedding Semantic Web data into object-oriented languages," _Web Semantics: Science, Services and Agents on the World Wide Web_, vol. 6, no. 3, pp. 191-202, 2008. M. A.
Rodriguez, _Emergent Web Intelligence_. Berlin, DE: Springer-Verlag, 2008, ch. General-purpose computing on a semantic network substrate. [Online]. Available: http://arxiv.org/abs/0704.3395. M. A. Rodriguez and A. Pepe, "On the relationship between the structural and socioacademic communities of an interdisciplinary coauthorship network," _Journal of Informetrics_, vol. 2, no. 3, pp. 195-201, July 2008. [Online]. Available: http://arxiv.org/abs/0801.2345
|
The Linked Data community is focused on integrating Resource Description Framework (RDF) data sets into a single unified representation known as the web of data. The web of data can be traversed by both man and machine and shows promise as the _de facto_ standard for integrating data worldwide, much like the World Wide Web is the _de facto_ standard for integrating documents. On February 27 of 2009, an updated Linked Data cloud visualization was made publicly available. This visualization represents the various RDF data sets currently in the Linked Data cloud and their interlinking relationships. For the purposes of this article, this visual representation was manually transformed into a directed graph and analyzed.
|
in the ben geen case , three experts ( an anaesthetist , a charge nurse , and a head nurse at three different hospitals ) have given their opinion that primary respiratory arrest in ed ( emergency department ) is rare .the defence , following the eminent statistician prof .jane hutton , argue that these statements at best merely constitute anecdotal evidence and at worst can be strongly tainted by observer bias .requests have consequently been made to many hospitals similar to horton general , resulting in a large data - base containing numbers of various events in ed as well as total numbers of patients admitted to ed per month , in more than 20 hospitals and covering up to 10 years .this report describes the main findings from statistical analysis of the data - base .we find that respiratory arrests in ed are about five times less frequent than cardio - respiratory arrests , which are of course extremely frequent .respiratory arrest is certainly less common than cardio - respiratory , but certainly not rare at all , by any reasonable understanding of the meaning of the word `` rare '' .the _ relative _ size of the variation in small observed rates due purely to chance is much , much larger than the _ relative _ size of the variation in larger observed rates ( the `` law of small numbers '' poisson variation ) . on top of purely random variation and strong seasonal variation ,the numbers fluctuate quite wildly in time , exhibiting all kinds of trends , bumps , and gaps in different hospitals .altogether , one can only conclude that periods with `` surprisingly high numbers '' of respiratory arrests are by no means rare and hence not in themselves surprising at all .the quality of the data ( which to put it kindly , is not high ) moreover underlines that all kinds of classification and reporting issues could easily go some way to explain these fluctuations .how events are classified , and how patients are admitted , will vary in time as hospital policies change ; moreover , random fluctuations in numbers of events can trigger changes in how events are classified ( so called `` publicity bias '' ) . to sum up : respiratory arrest in ed is not rare at all , and moreover its frequency is subject to large , and to a large extent unpredictable , variation of quite innocent nature .the data - base analysed in this report is available at http://www.math.leidenuniv.nl/~gill/data/hdf.csv , and the statistical analysis scripts ( written in the ` r ` language for statistical computing ) can be inspected at http://rpubs.com/gill1109/draftopinion . a table with an overview of hospitals and trusts to which f.o.i. requests were submitted is reproduced in the appendix .this report focusses on a subset of 16 hospitals ( or hospital trusts ) . originally , f.o.i .requests were sent to around 30 different hospitals and/or hospital trusts , supposed to be similar to horton ( though a few are teaching hospitals about five times larger ) .a few hospitals did not respond or turned out no longer to exist .the hospitals and trusts which did submit any data used a multitude of different formats including pdf files which though in one sense digital , are actually totally unsuited for transferring large tables of numbers from a hospital data base to a statistician s computer . 
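Before turning to the variables themselves, the "law of small numbers" point made in the findings above can be illustrated with a short simulation: monthly counts drawn from a Poisson distribution with a small mean fluctuate far more, in relative terms, than counts with a large mean. The original analysis scripts for this report are written in R (see the link above); the sketch below is an independent Python illustration with arbitrary example rates, not part of that analysis.

....
import numpy as np

rng = np.random.default_rng(2014)
months = 12 * 10  # ten years of monthly counts

# Arbitrary example rates: a rare event (~0.5/month) versus a common one (~10/month).
for label, rate in [("rare event, mean 0.5/month", 0.5),
                    ("common event, mean 10/month", 10.0)]:
    counts = rng.poisson(rate, size=months)
    rel_spread = counts.std() / counts.mean()   # coefficient of variation
    print(f"{label}: min={counts.min()}, max={counts.max()}, "
          f"relative spread ~ {rel_spread:.2f}")
....

For a Poisson count the relative spread is $1/\sqrt{\lambda}$, so a mean of 0.5 per month gives fluctuations of roughly 140% of the mean, while a mean of 10 gives about 30%; clusters of "surprisingly high" months are therefore to be expected for rare events purely by chance.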
after an extremely laborious process we succeeded in building a more or less `` clean '' data - base http://www.math.leidenuniv.nl/~gill/data/hdf.csv in a format amenable to statistical analysis corresponding to 22 hospitals or trusts .this means that data from eight trusts did not make it into the present data - base for various administrative reasons , the most common reason being that the data asked for was simply not available .the second most common reason was accidental error on our side ( lost emails ! ) .we will be able to rectify some of these omissions later , which will add a small number of hospitals to the data - base , but we do not believe this will have any impact on our main conclusions .data from * horton general hospital is not included * : all these analyses have been performed before even obtaining any data from that hospital . for the initial analyses in this report, six hospitals from the data - base have been removed : the data on one of those hospitals is quite weird , the other five did not supply the monthly number of admissions to ed . in an appendixwe show what happens when those five are put back : nothing much changes .in this report , we will study three variables called here ` admissions ` , ` cardioed ` and ` resped ` .the original f.o.i .requests defined these data as follows : ` admissions ` : the number of patients admitted to _ hospital / trust x _ emergency departments , by month , from november 1999 to the present . `cardioed ` : the number of patients admitted to _ hospital / trust x _ critical care units with cardio - respiratory arrest from the emergency department , by month , from november 1999 to the present . `resped ` : the number of patients admitted to _ hospital / trust x _ critical care units with respiratory arrest from the emergency department , by month , from november 1999 to the present .the variable ` admissions ` therefore counts total admissions _ to _ ed , and gives us information about the size of the hospital .moreover , we are specifically interested in events happening _ in _ ed which lead to transfer to cc ( critical care units , including intensive care units ). therefore `` size of ed '' , as measured by rates of admission to ed is more relevant than `` size of hospital '' measured in number of beds , say . for completeness , i mention that ` cardioed ` and ` resped ` are just two of a collection of altogether six variables whose names are formed by combining a prefix ` cardio ` , ` resp ` , or ` hypo ` with a suffix ` ed ` or ` tot ` .the suffix ` ed ` stands for emergency department ( accident and emergency , a&e ) : the number of such admissions which are _ from _ ed .the suffix ` tot ` stands for total : the total number of admissions to critical care units from anywhere , with the corresponding diagnosis .the prefixes ` cardio ` , ` resp ` , and ` hypo ` stand for cardiac or more precisely , cardio - respiratory arrest , respiratory arrest , and hypoglycaemic arrest .the intention was that the variables ` cardioed ` , ` resped ` , and ` hypoed ` should count _ events in ed causing transfer to cc _ , rather than _ diagnosis ( events in the recent medical history ) of the patientwhen transferred from ed to cc_. 
in other words , they were intended to count events occurring _ after _ the patient was admitted to ed , whose occurrence _ in _ ed was the direct _ cause _ of transfer from ed to cc .this should be compared to the variables ` cardiotot ` , ` resptot ` , and ` hypotot ` , which were intended to count patients entering cc with the respective diagnoses , irrespective of when the corresponding event had occurred and what was the immediate reason for the admission to cc .we can only hope that most hospitals did interpret the f.o.i .request as intended .a number of hospitals did not supply any information on the numbers of events occurring in ed they could only supply data at higher or different aggregation levels .this means that our key variables ` cardioed ` and ` resped ` were often missing . for similar reasons , the variable ` admissions `also was often missing .it was sometimes not easy to see from submitted spread - sheets and supporting documentation whether a blank stood for `` zero '' or `` not available '' .it is not entirely clear whether any particular patient only has one diagnosis , or can have several .cardio - respiratory arrest is heart - failure ( cardiac arrest ) together with respiratory failure because the former caused the latter .when your heart stops beating your lungs rapidly stop breathing , so a cardiac arrest without respiratory arrest is essentially impossible , except perhaps in ic ( think of a patient in a breathing machine ) .suppose a patient comes into ed who has already been resuscitated after a cardiac arrest .suppose this patient subsequently ( while in ed ) also suffers a respiratory arrest .he or she now has both _ diagnoses _( both these things have recently happened to him or her ) .if this patient is now admitted to cc , is he or she counted both as an admittance to cc with cardio - respiratory arrest and as an admittance to cc with respiratory arrest ?the intention was that cardio - respiratory and respiratory arrest should be mutually exclusive categories , but the f.o.i .does not make that explicit , though one can consider it implicitly implied when one considers all seven questions together .fortunately , we will be able to take finesse this particular difficulty . 
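For readers who want to work with the data themselves, the data base linked in the introduction can be read directly into a data frame. The sketch below is a Python/pandas illustration rather than the author's R scripts, and it assumes the CSV columns carry the names used in this report (`hospital`, `monthnr`, `admissions`, `cardioED`, `respED`); the actual file may use different spellings.

....
import pandas as pd

URL = "http://www.math.leidenuniv.nl/~gill/data/hdf.csv"

# Assumed column names, following the variable names used in this report;
# the real file may differ slightly.
df = pd.read_csv(URL)
cols = ["hospital", "monthnr", "admissions", "cardioED", "respED"]
df = df[cols]

# Monthly totals of the two key event counts across all reporting hospitals.
monthly = df.groupby("monthnr")[["cardioED", "respED"]].sum(min_count=1)
print(df.head())
print(monthly.describe())
....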
what is respiratory arrest and what is cardiac arrest , anyway ?an expert tells me _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ both are merely symptoms of an underlying problem .for example a mid - brain stroke may result in respiratory arrest , which leads on to cardio - respiratory arrest if not treated the heart stopping if artificial respiration has not been instituted .so if the medics pick up early on the stroke , it may only get as far as a fall. if the stroke was left untreated respiratory arrest may follow .if that is left untreated , the heart stops also .so the diagnosis is stroke , and the outcome arrest . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ wikipedia redirects from cardio - respiratory arrest to cardiac arrest .in the previous section i have discussed difficulties interpreting the data revolving around the fact that the condition(s ) a patient has when transferred to cc is not the same as the immediate cause of the transfer . 
in principle, a patient can have experienced both cardio - respiratory and respiratory arrest , in either order .these events can happen before admission to the hospital emergency department , or during stay in emergency .possibly , one event led to admission to ed , the next event to transfer to cc .what we wanted to count were transfers to cc caused directly either by a respiratory arrest in ed , or by a cardio - respiratory arrest in ed .this all turns around the difference between the little words `` in '' and `` with '' , and whether , when one asks for numbers of patients in different categories , administrators ( or their data - base software ) will understand that the categories should be understood as mutually exclusive .it depends on what information actually is in the data - base .i do not know how the the f.o.i .requests have been interpreted by the hospital administrators who have kindly supplied us all this data .we can go back and ask .or we can ask medical experts what they think those questions actually mean , and what data they think these questions would actually elicit . on the other hand , we are missing cardio - respiratory and respiratory arrests in ed which do not result in transfer to cc , if that is possible .many events occur in hospital wards which do not end up in the hospital data - base . if a patient has suffered either arrest in ed and is immediately successfully resuscitated there , does that person necessarily go directly to a critical care unit ?finally we should be aware that the records stored in hospital data - bases were not collected for the purpose of answering our questions , but are the results of an administrative system which collects some information about some of the processes going on in the hospital , but not all .many events do not find their way into the data - base at all .many events are wrongly classified . in any case, the classification can be somewhat subjective .the system allows only a small collection of possible categories and choosing just one of them might well not do justice to the complex state of any particular patient .so an administrator picks one out of habit or for convenience .registered rates of various kinds of event can change because culture changes , policy changes , staff changes , staff start `` seeing '' a new kind of event happen more often because they have been alerted to it by a notable occurrence ; thus awareness of particular categories of events changes in response to occurrence of other events .`` hospital '' is the name ( more precisely : _ my _ `` short name '' ) of the hospital , or in some cases the trust .a table will be supplied separately , giving full names of hospitals and trusts .here are the ( short ) names of the 22 hospitals ( or trusts ) in our data - base : .... barking c manchester darlington doncaster & b frenchay good hope heartlands hertfordshire hexham hull & e yorksh leicester maidstone n tyneside nottingham oxford radcliffe r liverpool sandwell solihull stoke uhn durham wansbeck wycombe ....these are the names of the 16 hospitals left after we have omitted those which did not report the numbers of admissions to ed : .... c manchester doncaster & b frenchay good hope heartlands hexham hull & e yorksh leicester maidstone n tyneside nottingham oxford radcliffe r liverpool sandwell solihull wansbeck .... the six hospitals which have been omitted to form the smaller data set are .... barking darlington stoke uhn durham wycombe hertfordshire .... 
the first five , because the number of admissions to ed is missing ; the sixth , hertfordshire , because the numbers there do not make much sense at all , and probably had not been processed correctly .as mentioned before , it was actually very hard to deduce whether blank fields in tables of numbers in the files provided by some hospitals meant `` zero '' or `` not available '' .as we will later see , another four hospitals ( sandwell , solihull , heartlands , good hope ) need to be removed from the presently remaining 16 for this reason . on the other hand , for the final steps of our analysis below , we will not use `` total admissions to ed ''so we could just as well have put barking , darlington , stoke , uhn durham and wycombe back in .we will do that in an appendix ._ it turns out that our substantive conclusions do not change at all_.we have _ monthly data _ from each hospital from various periods of time , but all within the overall period november 1999 to december 2011 .that is 12 years and 2 months , or altogether 146 months .the variable ` monthnr ` in our analyses measures time , by months , starting with month -1 = november 1999 , month 0 = december 1999 , month 1 = january 2000 , , month 144 = december 2011 .most hospitals could only supply data for ( varying ) parts of the period named in the f.o.i .request . this will be clearly visible in the graphics shown later in this report .i have deliberately avoided studying data from horton general hospital , in order to avoid personal bias . roughly speaking ,the hospitals in this study vary in size by a factor of up to 5 : we have quite a few hospitals with around 250 beds , quite a few around 500 , a few with around 750 , and just a couple with more than 1000 beds .horton general belongs at the low end of the scale , among hexham , solihull , wansbeck , wycombe .for the time being we look at 16 hospitals , partly for the opportunistic reason that , which is very convenient for graphical displays in which we can see the individual data of each hospital separately .we plot just three of our variables against time ( ` monthnr ` ) .the three variables of interest in this report are ` admissions ` , ` cardioed ` , and ` resped ` .three hospitals nottingham , leicester , doncaster & b stand out as having apparently five times larger emergency departments than most of the others : the three big ones have around 11000 admissions per month ( mean monthly admissions equals 11196 ) ; the smaller ones only around 2000 ( mean monthly admissions of all remaining hospitals equals 2838 , or around 3000 ) .the regular seasonal fluctuations in the large admission numbers are particularly clear .nottingham and leicester are both big teaching hospitals .doncaster & b is a trust : my short name is short for `` doncaster and bassetlaw _ hospitals _ '' .i draw the plot again , capping the admissions at 6000 , so we can better see the 13 time series of lower numbers .seasonal variation at frenchay is very clear to see ; less visible in the others .this too is only to be expected : by the law of _ small _ numbers ( poisson variation if not super - poisson variation ) random variation becomes proportionately larger when looking at low rates , hence more easily masks a given amount of systematic variation .now we turn to the heart of the matter : admissions to cc ( critical care units , intensive care ) from ed ( emergency department , _ aka _a&e ) because of ( or at least : with ) cardiac and/or respiratory arrest .the _ intention _ was to count 
admissions to cc from ed caused by just one of those events .if both had occurred , then the first might be reasonably imagined to have triggered the second . in other words , we wanted to know the numbers of admissions to cc caused _ primarily _ by either type of arrest having occurred _ after _ admission to ed .however we do not know how exactly the hospitals have interpreted the request for data , or indeed , whether the interpretation was uniform .fortunately , whether the counts are of cases `` with '' , or only cases `` primarily caused by '' , we will still be able to extract some very pertinent information from the data .very globally , we can say that even in the smaller hospitals there often 1 or 2 respiratory arrests in one month ( sometimes none , sometimes 3 or 4 ) , and anything from 0 to 10 and upwards cardio - respiratory arrests .transfer from ed to cc because of cardio - respiratory arrest is , on the whole , very common .transfer from ed to cc because of respiratory arrest is less common but not rare , by any account .four important features should be observed . *first feature * : four hospitals stand out as not reporting any cardiac or respiratory arrests at all in ed : sandwell , solihull , heartlands , and good hope . *second feature * : cardio - respiratory arrest is about five times more frequent than respiratory arrest .it is therefore somewhat less common .but it could well be considered rather misleading to call it `` rare '' . *third feature * : within each hospital , the numbers per month are highly variable . *fourth feature * : there are clear differences in levels between different hospitals , up to perhaps a factor of 5 between the lowest and the largest numbers . regarding the numbers of admissions to ed this is mainly accounted for by scale . regarding the numbers of transfers to cc because of ( or with ) various diagnoses this is no doubt exacerbated by different interpretations of the events to be counted , different registration systems or cultures .this careful selection of `` similar '' hospitals is actually extremely inhomogenous , even taking account of scale ( size ) .inhomogeneity might be administrative and/or cultural in nature , rather than due to scale or case - mix differences .obviously , we should compare horton general to _ similar _ hospitals .regarding size , this means hexham , solihull , wansbeck , and wycombe . as we will seethe data from solihull is anomalous , so this leaves us with hexham , wansbeck , and wycombe .the complete absence of cardiac arrest in sandwell , solihull , heartlands , and good hope _ must _ be caused by data - registration issues .it is inconceivable that there was not a single cardio - respiratory arrest in ed in all those years .i suspect we have incorrectly interpreted `` blank '' columns in a spreadsheet as zeroes rather than `` not available '' .so in the next section , i will remove the hospitals with * zero * events i am guessing that these are not true zeroes , but rather `` not known '' . in any case , hospital months when _ neither _ event happens do not tell us anything about whether respiratory without cardiac arrest is rare .the following statistics therefore pertain to just 12 hospitals .... c manchester doncaster & b frenchay hexham hull & e yorksh leicester maidstone n tyneside nottingham oxford radcliffe r liverpool wansbeck .... mean number of respiratory arrests per month : .... 0.4592 .... mean number of cardio - respiratory arrests per month : .... 2.207 .... 
number of hospital months in which the number of respiratory arrests exceeded the number of cardio - respiratory arrests : 94 total number of hospital - months in the data from this sample of 12 hospitals : 415 average number of months per year in which the number of respiratory arrests exceeded the number of cardio - respiratory arrests : 2.718 average number of respiratory arrests per year : 5.511 * in round numbers , cardio - respiratory arrest is five times more common that respiratory arrest . * * _ even if _ some patients are counted twice , _ at least _ half of the respiratory arrests are without accompanying cardio - respiratory arrest . *in very round numbers , per hospital , there are on average about _ 3 months _ in every year with a respiratory but no cardiac arrest .therefore there are _ at least 3 cases _ per year of respiratory without cardiac arrest .there are on average about 6 respiratory arrests per year .this means that * _ at least half _ ( if not all ) of the respiratory arrests are _ not _ accompanied by cardiac arrest*. on average , per hospital , there are * at least * about 3 respiratory arrests ( unaccompanied by cardiac arrest ) per year ; that can hardly be called _rare_. it is true , but hardly relevant , that _ respiratory arrest is less common than cardiac arrest _ ( about five times as infrequent ) .though some of what is called `` respiratory arrest '' in our data - sets might actually represent a combination of respiratory and cardiac arrest , in either order , it is absolutely clear that * respiratory arrest ( not caused by immediately preceding cardiac arrest ) * ( and leading to transfer to cc ) is not _rare _ at all. respiratory arrest leading to transfer to cc is about five times less frequent than cardio - respiratory arrest . in a hospital of the same size as horton general , there are on average 1 or 2 cases per month . fluctuations are large . respiratory arrest _ not _ leading to transfer to cc has not been accounted for at all : the numbers are unknown , unregistered .let s check what happens when we put back the hospitals with no `` admissions to ed '' data .that means we are now talking about the following 17 hospitals ; and our sample now has relatively more smaller hospitals . ....darlington frenchay good hope hertfordshire hexham hull & e yorksh leicester maidstone n tyneside nottingham oxford radcliffe r liverpool sandwell solihull stoke uhn durham wycombe .... mean number of respiratory arrests per month : 0.3077 mean number of cardio - respiratory arrests per month : 1.563 number of hospital months in which the number of respiratory arrests exceeded the number of cardio - respiratory arrests : 84 total number of hospital - months in the data from this sample of 12 hospitals : 854 average number of months per year in which the number of respiratory arrests exceeded the number of cardio - respiratory arrests : 1.18 average number of respiratory arrests per year : 3.693 * in round numbers , cardio - respiratory arrest is still five times more common that respiratory arrest . * * even if some patients are counted twice , at least one third of the respiratory arrests are without accompanying cardiac arrest . *the hospitals we put back , all of them quite small , have reduced the overall rates both of cardiac and of respiratory arrest . 
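The summary statistics above, the means per month and the count of hospital-months in which respiratory arrests outnumber cardio-respiratory arrests (which gives the lower bound on respiratory-without-cardiac cases), can be computed with a few lines of code. Again, the author's actual computations are in the R scripts referenced earlier; the following pandas sketch only mirrors the logic, assuming a data frame with the column names used in this report (such as the one read in the earlier sketch).

....
import pandas as pd

def arrest_summary(df):
    """df: one row per hospital-month with columns 'cardioED' and 'respED'."""
    d = df.dropna(subset=["cardioED", "respED"])
    n_months = len(d)
    out = {
        "mean respED per month": d["respED"].mean(),
        "mean cardioED per month": d["cardioED"].mean(),
        "hospital-months with respED > cardioED": int((d["respED"] > d["cardioED"]).sum()),
        "total hospital-months": n_months,
    }
    # In any month with more respiratory than cardio-respiratory arrests,
    # at least one respiratory arrest cannot be a double-counted cardio case.
    out["months/year with respED > cardioED (per hospital)"] = (
        12 * out["hospital-months with respED > cardioED"] / n_months
    )
    out["respiratory arrests per hospital per year"] = 12 * out["mean respED per month"]
    return out

# Example usage (hypothetical): restrict to the 12 hospitals analysed above.
# summary = arrest_summary(df[df["hospital"].isin(twelve_hospitals)])
....

Plugging in the reported figures (94 such months out of 415, and a mean of 0.459 respiratory arrests per month) reproduces the roughly 2.7 qualifying months and 5.5 respiratory arrests per hospital per year quoted above.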
however , we still see that with respect to this larger sample of hospitals , including more small hospitals , respiratory arrest without cardiac arrest accounts for * at least * one third of ( if not all ) respiratory arrests ; and respiratory arrest , though less common than cardio - respiratory arrest ( it occurs five times less frequently ) , still occurs many times a year . it can not be called _ rare _ .

i am a mathematician and a statistician , presently full professor of mathematical statistics at leiden university , netherlands ( mathematical institute , science faculty ) . i am presently 62 years old . i have both british and dutch nationality . i am a member of the royal dutch academy of sciences , and a past president of the dutch statistical society , to mention just two marks of distinction . my research interests span both theoretical and applied statistics . i have worked for a long time in medical statistics , both on topics connected to clinical trials and to observational studies ( epidemiology ) . this work has involved many collaborations with ( hospital ) medical doctors . more recently i became involved in forensic statistics , which is the art and science of applying statistics and probability to problems of two kinds : statistics involved in solving crimes ( police investigation ) and statistics involved in prosecuting criminals ( evaluating the weight of statistical evidence ) . for instance , i have recently worked for the united nations special tribunal on lebanon , analysing mobile phone meta - data used to identify ( ? ) the perpetrators of the assassination of prime minister hariri some years ago . i am now regularly consulted by the dutch police and by dutch courts . recently i was asked by a dutch court to collaborate with a gynaecologist in order to comment on probabilities in a case of ( alleged ) serial infanticide . it was absolutely necessary for a medical expert and a statistical expert to look at the evidence and the relevant scientific literature _ together _ . we needed to figure out _ what were the right questions to ask _ , and neither of us could do that on our own . lawyers and judges are even worse placed to figure out _ what are the right questions to ask _ . fortunately , the realisation that multi - disciplinary scientific work should be performed in the first instance by collaborating scientists , not by courts of law , is growing , due to many recognised miscarriages of justice where faulty interpretation of scientific evidence , and recruitment of the `` wrong '' scientific experts , has been involved . particularly relevant to the present case ( ben geen ) is my involvement in a celebrated dutch miscarriage of justice . a nurse , lucia de berk , was given a life sentence for murder of 6 patients and attempted murder of 4 more , largely on the basis of statistical evidence linking her presence to `` incidents '' on the wards where she worked . the conviction was revoked after a sequence of legal battles lasting altogether 9 years . at the final acquittal , the judges not only announced that she was not guilty , but that in actual fact no murders had occurred at all . according to the trial judges , nurses had in fact battled heroically to save the lives of patients which , despite this , were ultimately shortened by the _ mistakes of their doctors _ .
cases like this , internationally , are by no means rare .in fact , `` health care serial killers '' are rare , but witch hunts triggered by medical errors and magnified out of proportion by the rigid social structure of a hospital are unfortunately all too common , with often devastating consequences .the case of lucia de berk perhaps the biggest miscarriages of justice which ever occurred in the netherlands , a country which prides itself on its justice system contains a multitude of shocking parallels with the case of ben geen .what is all the more shocking is that this seems to have gone totally unremarked , to date .i will here recall just a few `` anecdotes '' from that case which have particular relevance to the statistical aspects of the ben geen case .a key piece of evidence in the lucia case was that the number of incidents on lucia s ward was 9 in one year ( the year in which she was supposed to be on a killing spree ) , and close to zero during both the two preceding years , and in the subsequent year .this enormous unexpected number of incidents in that ward was a key piece of prosecution evidence .it later transpired that the name of the ward had been changed , just prior to those two years of almost no incidents . in the yearspreceding , it had been somewhat larger than 9 , several years in succession .the hospital director s statement was the truth ( he referred to the ward by its current name ) , but not the whole truth .he himself was responsible for the name - change of the ward . andlater : the year after the quiet year after the big year , the numbers were big again .amusingly , in the case of lucia de berk , the argument put by the prosecution was precisely that respiratory arrest was normal , while cardiac arrest was supposed to be unusual ! in one of the key events , the crucial question for deciding whether a baby had died of poisoning or naturally , was whether heart failure took place before or after lung failure . the argument being : if someone is terminally ill , then the natural course of events is that the body becomes exhausted , the lungs fail , consequently there is shortage of oxygen , then the heart fails . on the other hand , `` unexpected '' heart failure ( after which the lungs naturally fail , too ) could indicate poisoning .it turned out that the temporal sequence of events was not remembered in the same way by different observers ( doctors , nurses ) and moreover that different registration systems appeared to give a different answer . only after an extremely carefully and thorough investigation taking everything into account ,could it be concluded that quite definitely , respiratory arrest came first , followed by heart failure .i do nt suppose that how terminally ill people die in the netherlands is terribly different from how they do it in the uk .and i apologise for appearing to offer anecdotal medical evidence when i am a statistician .however this point is crucial : medical diagnosis is not an exact science !many things happen in rapid succession . what one takes as the `` cause of death '' or the `` cause of the emergency '' is _fuzzy_. 
impressions of skilled doctors might easily be different from the information obtained from fairly reliable medical monitoring systems . the output from different monitors can get confused and misinterpreted . memories are unreliable . memories change , certainly when people start to believe that there is a killer around . these points are actually very relevant to any interpretation of the statistics in the case of ben geen . which events are registered as events of which category is to a large degree subjective , and variable . i would also like to mention an `` incident '' pertaining to the present case . at my request , an f.o.i . request was also put to horton general , sometime later than all the others . the data which this hospital was able to prepare pertained only to a very short and recent period , with almost all categories merged . all numbers of events in any month smaller than 5 were simply reported as `` < 5 '' . no other hospital took this weird precaution , allegedly taken for reasons of confidentiality . is this an attempt to suppress embarrassing evidence ?
|
statistical analysis of monthly rates of events in around 20 hospitals and over a period of about 10 years shows that respiratory arrest , though about five times less frequent than cardio - respiratory arrest , is a common occurrence in the emergency department of a typical smaller uk hospital .
|
the buckling of rods , shells and plates is traditionally described in mechanics textbooks as an instability in the framework of nonlinear shell theory obtained by semi - rigorous dimension reduction of three - dimensional nonlinear elasticity . while these theories are effective in describing large deformations of rods and shells ( including buckling ) , their heuristic nature obscures the source of the discrepancy between theoretical and experimental results , as is the case for axially compressed circular cylindrical shells . at the same time, a rigorously derived theory of bending of shells captures deformations in the vicinity of relatively smooth isometries of the middle surface .unfortunately , the isometries of the straight circular cylinder are non - smooth .our approach , originating in , is capable of giving a mathematically rigorous treatment of buckling of slender bodies and determining whether the tacit assumptions of the classical derivation are the source of the discrepancy with experiment . in this paper, we apply our theory and obtain a mathematically rigorous proof of the classical formula for buckling load .this result justifies the generally accepted assumption that the paradoxical behavior of cylindrical shells in buckling is due to the high sensitivity of the buckling load to imperfections .this phenomenon is commonly explained by the instability of equilibrium states in the vicinity of the buckling point on the bifurcation diagram .however , the exact mechanisms of imperfection sensitivity are not fully understood , nor is there a reliable theory capable of predicting experimentally observed buckling loads .while a full bifurcation analysis is necessary to understand the stability of equilibria near the critical point , our method s singular focus on the stability of the trivial branch gives access to the scaling behavior of key measures of structural stability in the thin shell limit .we have argued in that axially compressed circular cylindrical shells are susceptible to scaling instability of the critical load , whereby the scaling exponent , and not just its coefficient , can be affected by imperfections .the new analytical tools developed in give hope for a path towards quantification of imperfection sensitivity .our approach is based on the observation that the pre - buckled state is governed by equations of linear elasticity . at the critical load ,the linear elastic stress reaches a level at which the trivial branch becomes unstable within the framework of 3d hyperelasticity .the origin of this instability is completely geometric : the frame - indifference of the energy density function implies non - convexity in the compressive strain region . since buckling occurs at relatively small compressive loads , the material s stress - strain response is locally linear .this explains why all classical formulas for buckling loads of various slender structures involve only linear elastic moduli and hold regardless of the material response model .the significance of our approach is two - fold .first , it provides a common platform to study buckling of arbitrary slender bodies .second , its conclusions are mathematically rigorous and its underlying assumptions explicitly specified . 
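for orientation , and under the assumption that the `` classical formula for buckling load '' referred to above is the standard lorenz - timoshenko value for an axially compressed circular cylindrical shell , that value reads
\[
\sigma_{\mathrm{cr}} \;=\; \frac{E\,h}{R\,\sqrt{3(1-\nu^{2})}} ,
\qquad
\lambda_{\mathrm{cr}} \;=\; \frac{\sigma_{\mathrm{cr}}}{E} \;=\; \frac{h}{R\,\sqrt{3(1-\nu^{2})}} ,
\]
where e is young 's modulus , \nu is poisson 's ratio , r the radius of the middle surface and h the wall thickness ; with the radius normalized to 1 , as in this paper , h coincides with the slenderness parameter and the critical compressive strain is linear in h to leading order . this is stated only for context ; the precise normalization proved below is the paper 's own .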
the goal of this paper is to demonstrate the power and flexibility of our method on the non - trivial , yet analytically solvable example of the axially compressed circular cylindrical shell .our analysis is powered by asymptotically sharp korn - like inequalities , where instead of bounding the norm of the displacement gradient by the norm of the strain tensor , we bound the norm of individual components of the gradient by the norm of the strain tensor .these inequalities have been derived in our companion paper .the method of buckling equivalence provides flexibility by furnishing a systematic way of discarding asymptotically insignificant terms , while simplifying the variational functionals that characterize buckling .the paper is organized as follows . in section [ sec : genth ] , we describe the loading and corresponding trivial branch of an axially compressed cylindrical shell treated as 3-dimensional hyperelastic body .we define stability of the trivial branch in terms of the second variation of energy .next , we describe our approach from and recall all necessary technical results from for the sake of completeness . in section [ sec : perfect ] , we give the rigorous derivation of the classical buckling load and identify the explicit form of buckling modes .our two most delicate results are a rigorous proof of the existence of a buckling mode that is a single fourier harmonic and the linearization of the dependence of this buckling mode on the radial variable the two assumptions that are commonly made in the classical derivation of the critical load formula .[ sec : genth ] in this section we will give a mathematical formulation of the problem of buckling of axially compressed cylindrical shell .consider the circular cylindrical shell given in cylindrical coordinates as follows : ,\qquad i_{h}=[1-h/2,1+h/2],\ ] ] where is a 1-dimensional torus ( circle ) describing -periodicity in . here is the slenderness parameter , equal to the ratio of the shell thickness to the radius . in this paperwe consider the axial compression of the shell where the lipschitz deformation satisfies the boundary conditions , given in cylindrical coordinates by the loading is parametrized by the compressive strain in the axial direction .the trivial deformation satisfies the boundary conditions for . by a _stable deformation _ we mean a lipschitz function , satisfying boundary conditions ( [ bc ] ) and being a weak local minimizer is called a weak local minimizer , if it delivers the smallest value of the energy among all lipschitz function satisfying boundary conditions ( [ bc ] ) that are sufficiently close to in the norm . ] of the energyfunctional among all lipschitz functions satisfying ( [ bc ] ) .the energy density function is assumed to be three times continuously differentiable in a neighborhood of .the key ( and universal ) properties of are * absence of prestress : ; * frame indifference : for every ; * local stability of the trivial deformation : where is the space of symmetric matrices , and is the linearly elastic tensor of material properties . 
here , and elsewhere we use the notation for the frobenius inner product on the space of matrices .while this is not needed for general theory , in this paper we will also assume that is isotropic : this assumption is necessary to obtain an explicit formula for the critical load .our goal is to examine stability of the homogeneous trivial branch given in cylindrical coordinates by where the function is determined by the natural boundary conditions since uniform deformations always satisfy equations of equilibrium .here , the gradient of with respect to , is the piola - kirchhoff stress tensor .we observe that [ lem : trbr ] assume that is three times continuously differentiable in a neighborhood of , satisfies properties ( p1)(p3 ) and is isotropic ( i.e. satisfies ( [ wiso ] ) ) .then there exists a unique function , of class on a neighborhood of 0 , such that and the natural boundary conditions ( [ nbc ] ) are satisfied by ( p2 ) .the function is three times continuously differentiable in a neighborhood of .thus , the isotropy ( [ wiso ] ) implies that for all .differentiating this relation in at we conclude that must commute with .in particular , this implies that the matrix must be diagonal , whenever is diagonal .we compute that in cylindrical coordinates ,\quad \bc=\bf^{t}\bf=\left [ \begin{array}{ccc } ( 1+a)^{2 } & 0 & 0\\ 0 & ( 1+a)^{2 } & 0\\ 0 & 0 & ( 1-{\lambda})^{2 } \end{array } \right]\ ] ] hence , is diagonal , and conditions ( [ nbc ] ) reduce to a single scalar equation where the left - hand side of ( [ rnbc ] ) is a twice continuously differentiable function of . condition ( p1 ) implies that is a solution .the conclusion of the lemma is guaranteed by the implicit function theorem , whose non - degeneracy condition reduces to by assumption , is isotropic , and the non - degeneracy condition ( [ ndgn ] ) becomes , which is satisfied due to ( p3 ) . here and are the bulk and shear moduli , respectively .it is important , that as , the trivial branch does not blow up .in fact , in our case the trivial branch is independent of .the general theory of buckling is designed to detect the first instability of a trivial branch in a slender body that is well - described by linear elasticity .here is the formal definition from .[ def : trbr ] we call the family of lipschitz equilibria of a * linearly elastic trivial branch * if there exist and , so that for every ] * * there exist a family of lipschitz functions , independent of , such that * where the constant is independent of and . we remark , that the leading order asymptotics of the nonlinear trivial branch is nothing else but a linear elastic displacement , that can be found by solving the equations of linear elasticity , augmented by the appropriate boundary conditions . here is the linear elastic strain .the linear elastic trivial branch depends only on the linear elastic moduli , unlike the model - dependent nonlinear trivial branch .the fact that our trivial branch ( [ trbr ] ) satisfies all conditions in definition [ def : trbr ] is easy to verify . here is independent of .here we computed that ( poisson s ratio ) by differentiating ( [ rnbc ] ) in at .we define critical strain in terms of the second variation of energy defined on the space of admissible variations by density of in we extend the space of admissible variations from to its closure in . the critical strain can be defined as follows . 
while this definition is unambiguous , it is inconvenient , since the critical strain strongly depends on the choice of the nonlinear energy density function .instead , we will focus only on the leading order asymptotics of the critical strain , as .the corresponding buckling mode , to be defined below , will also be understood in an asymptotic sense .[ def : asymbm ] we say that a function , as is a buckling load if a * buckling mode * is a family of variations , such that targeting only the leading order asymptotics allows us to determine critical strain and buckling modes from a _ constitutively linearized _ second variation : and is the linear elastic stress since the first term in ( [ ccsv ] ) is always non - negative we define the set of potentially destabilizing variations . the constitutively linearized critical load will then be determined by minimizing the rayleigh quotient the functional expresses the relative strength of the destabilizing compressive stress , measured by the functional and the reserve of structural stability measured by the functional [ def : bload ] the * constitutively linearized buckling load * is defined by we say that the family of variations is a * constitutively linearized buckling mode * if in we have defined a measure of `` slenderness '' of the body in terms of the korn constant it is obvious , that if stays uniformly positive , then so does the constitutively linearized second variation as a quadratic form on , for any , as .[ def : slender ] we say that the body is * slender *if this notion of slenderness requires not only geometric slenderness of the domain but also traction - dominated boundary conditions conveniently encoded in the subspace , satisfying .we can now state sufficient conditions , established in , under which the constitutively linearized buckling load and buckling mode , defined in ( [ clin])([clbm ] ) , verify definition [ def : asymbm ] .[ th : crit ] suppose that the body is slender in the sense of definition [ def : slender ] . assume that the constitutively linearized critical load , defined in ( [ clin ] ) satisfies for all sufficiently small and then is the buckling load and any constitutively linearized buckling mode is a buckling mode in the sense of definition [ def : asymbm ] .now we will show that theorem [ th : crit ] applies to the axially compressed circular cylindrical shells .the asymptotics of the korn constant , as , was established in .[ th : kcas ] let be given by ( [ breather ] ) . then, there exist positive constants , depending only on , such that in order to establish ( [ sufcond ] ) we need to estimate . for the trivial branch ( [ trbr ] )we compute where is the young s modulus .hence , where from now on will always denote the -norm on . 
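the uniaxial form of the trivial - branch stress quoted above can be checked symbolically . the sketch below is an illustration only , assuming standard isotropic linear elasticity and the textbook lamé constants in terms of young 's modulus and poisson 's ratio ; it is not part of the paper 's argument .
....
# symbolic check : with strain diag(nu*lam, nu*lam, -lam) the isotropic stress
# should reduce to diag(0, 0, -E*lam) , i.e. uniaxial compression of magnitude E*lam .
import sympy as sp

E, nu, lam = sp.symbols('E nu lam', positive=True)

lame = E * nu / ((1 + nu) * (1 - 2 * nu))   # first lame constant
mu = E / (2 * (1 + nu))                     # shear modulus

strain = sp.diag(nu * lam, nu * lam, -lam)  # lateral expansion a = nu * lam
stress = lame * strain.trace() * sp.eye(3) + 2 * mu * strain

print(sp.simplify(stress))                  # expect diag(0, 0, -E*lam)
....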
in order to estimate need to prove korn - like inequalities for the gradient components , , , and .this was done in .[ th : ki ] there exist a constant depending only on such that for any one has , moreover , the powers of in the inequalities ( [ ki])([rz ] ) are optimal , achieved _ simultaneously _ by the ansatz {h}},z\right)\\[2ex ] \phi^{h}_{{\theta}}(r,{\theta},z)=&r\sqrt[4]{h}w_{,\eta}\left(\frac{{\theta}}{\sqrt[4]{h}},z\right)+ \frac{r-1}{\sqrt[4]{h}}w_{,\eta\eta\eta}\left(\frac{{\theta}}{\sqrt[4]{h}},z\right),\\[2ex ] \phi^{h}_{z}(r,{\theta},z)=&(r-1)w_{,\eta\eta z}\left(\frac{{\theta}}{\sqrt[4]{h}},z\right ) -\sqrt{h}w_{,z}\left(\frac{{\theta}}{\sqrt[4]{h}},z\right ) , \end{cases}\ ] ] where can be any smooth compactly supported function on , with the understanding that the above formulas hold on a single period $ ] , while the function is -periodic in .[ cor : lambda ] this is an immediate consequence of theorem [ th : ki ] .the lower bound follows from inequalities ( [ lcoerc ] ) , ( [ thetaz ] ) and ( [ rz ] ) ( and also an obvious inequality ) .the upper bound follows from using a test function ( [ ansatz0 ] ) in the constitutively linearized second variation. inequalities ( [ ki ] ) and ( [ perfect.lambda.hat.upper ] ) imply that the condition ( [ sufcond ] ) in theorem [ th : crit ] is satisfied for the axially compressed circular cylindrical shell .the problem of finding the asymptotic behavior of the critical strain and the corresponding buckling mode , as now reduces to minimization of the rayleigh quotient ( [ rq ] ) , which is expressed entirely in terms of linear elastic data . even though this already represents a significant simplification of our problem ,its explicit solution is still technically difficult .however , the asymptotic flexibility of the notion of buckling load and buckling mode permits us to replace with an equivalent , but simpler functional .the notion of buckling equivalence was introduced in and developed further in . herewe give the relevant definition and theorems for the sake of completeness .[ def : equiv ] assume that is a variational functional defined on .we say that the pair * characterizes buckling * if the following three conditions are satisfied 1 .characterization of the buckling load : if then is a buckling load in the sense of definition [ def : asymbm ] .2 . minimizing property of the buckling mode : if is a buckling mode in the sense of definition [ def : asymbm ] , then 3 .characterization of the buckling mode : if satisfies ( [ bmode ] ) then it is a buckling mode .[ def : bequivalence ] two pairs and are called * buckling equivalent * if the pair characterizes buckling if and only if does .of course this definition becomes meaningful only if the pairs and are related .the following lemma has been proved in .[ lem : pairb_hj ] suppose the pair characterizes buckling .let be such that it contains a buckling mode .then the pair characterizes buckling is designed to capture _ the _ buckling mode .we make no attempt to characterize an infinite set of geometrically distinct , yet energetically equivalent buckling modes that exist in our example . ] .the key tool for simplification of functionals characterizing buckling is the following theorem , .[ th : bequivalence ] suppose that is a buckling load in the sense of definition [ def : asymbm ] . if either or then the pairs and are buckling equivalent in the sense of definition [ def : bequivalence ] . 
as an application we will simplify the denominator in the functional , given by ( [ rq ] ) .theorem [ th : ki ] suggests that can be much larger than and .hence , we will prove that we can to replace , given by ( [ ex.compres.measure ] ) , with .hence , we define a simplified functional [ lem : k1 ] the pair characterizes buckling .by theorem [ th : ki ] we have for every .condition ( [ j1j2 ] ) now follows from ( [ perfect.lambda.hat.upper ] ) .thus , by theorem [ th : bequivalence ] , the pair characterizes buckling .[ sec : perfect ] in this section we prove the classical asymptotic formula for the critical strain the goal of this section is to show that even if we shrink the space of admissible variations to the set of single fourier modes in , we still retain the ability to characterize buckling .the first step is to define fourier modes by constructing an appropriate -periodic extension of in variable .since , no continuous -periodic extension has the property that , we will have to navigate around various sign changes in components of .we can handle this difficulty if is isotropic , which we have already assumed .it is easy to check that there are only two possibilities that work and its trace either changes sign or remains unchanged . ] : odd extension for , , even for , and even for , , odd for . since , is unconstrained at the boundary , only the latter possibility is available to us .hence , we expand and is the cosine series in , while is represented by the sine series : \displaystyle\phi_{{\theta}}(r,{\theta},z)=\sum_{n\in{\mathbb { z}}}\sum_{m=0}^{\infty}{\widehat{\phi}}_{{\theta}}(r , m , n)e^{in{\theta } } \cos\left(\frac{\pi m z}{l}\right),\\[2ex ] \displaystyle\phi_{z}(r,{\theta},z)=\sum_{n\in{\mathbb { z}}}\sum_{m=0}^{\infty}{\widehat{\phi}}_{z}(r , m , n)e^{in{\theta } } \sin\left(\frac{\pi m z}{l}\right ) .\end{cases}\ ] ] while functions in can be represented by the expansion ( [ fourier ] ) , single fourier modes do not belong to .yet , the convenience of working with such simple test functions outweighs this unfortunate circumstance , and hence , we switch ( for the duration of technical calculations ) to the space we will come back at the very end to the space to get the desired result for our original boundary conditions .the space appears in our companion paper as , where the inequalities ( [ ki ] ) , ( [ thetaz ] ) and ( [ rz ] ) have been proved for it . as a consequence ,the estimates ( [ perfect.lambda.hat.upper ] ) hold for we conclude that the pair characterizes buckling ( for the new boundary conditions associated with the space ) . in that casethe proof of lemma [ lem : k1 ] carries with no change for the space .hence , the pair characterizes buckling as well .we now define the single fourier mode spaces . 
for any complex - valued function and any , we define and let be given by ( [ adef ] ) with replaced by .we define * * the infimum in ( [ mninf ] ) is attained at and satisfying for some constant depending only on .* let be a minimizer in ( [ mninf ] ) .then the pair characterizes buckling in the sense of definition [ def : equiv ] .let us prove the reverse inequality .by definition of we have for any , and any and .any can be expanded in the fourier series in and where for all , .if is isotropic , then the sine and cosine series in do not couple and the plancherel identity implies that the quadratic form diagonalizes in fourier space : we also have inequality ( [ alphamn ] ) implies that summing up , we obtain that for every .it follows that , and part ( i ) is proved .observe that , according to the estimate and part ( i ) we have where let us show that the bounds ( [ mnbound ] ) hold for all .in particular , the sets are finite for all , and hence , the infimum in ( [ mninf ] ) is attained .let and be fixed .then , by definition of the infimum there exists such that .hence , there exists a possibly different constant ( not relabeled , but independent of , and ) , such that to prove the first estimate in ( [ mnbound ] ) we apply inequality ( [ ki : first.and.half ] ) to and then estimate via ( [ optrate ] ) : hence for some constant , independent of .therefore , we obtain a uniform in upper bound on . to estimate we write by the poincar inequality andhence applying ( [ ki : first.and.half ] ) again and estimating via ( [ optrate ] ) we obtain from which ( [ mnbound]) follows via ( [ mnbound]) . part ( ii )is proved now . part ( iii ) .now , let , be the minimizers in ( [ mninf ] ) .it is sufficient to show , due to lemma [ lem : pairb_hj ] , that contains a buckling mode . by definition of the infimum in ( [ hatmn ] ) , for each there exists such that therefore , hence , is a buckling mode , since the pair characterizes buckling . in this sectionwe prove that the buckling load and a buckling mode can be captured by single fourier harmonics whose and components are linear in . in fact , we specify an explicit structure for buckling mode candidates . we define the linearization operator as follows : we show now that the buckling mode can be found among the linearized single fourier modes * step 1 * ( linearization of . ) we introduce the operator of linearization of component . for we define .then , it is easy to see that .it is clear that thus we can estimate : we also have therefore , and recalling that , and that the inequalities ( [ mnbound ] ) imply that , we obtain due to ( [ mnbound ] ) .similarly , observe that using the inequality and the bounds on wave numbers ( [ mnbound ] ) we obtain we now proceed to estimate .let therefore , due to korn s inequality ( [ ki ] ) .thus , .we can express in terms of as follows hence , by ( [ cs ] ) , we have by the poincar inequality followed by the application of korn s inequality ( [ ki ] ) we obtain , similarly , by the poincar inequality and ( [ mnbound ] ) we estimate we conclude that hence , ( [ e(a2)e(a ) ] ) and ( [ tr(a2)tr(a ) ] ) become respectively , and hence , by ( [ u1u2 ] ) , ( [ tru1u2 ] ) and the coercivity of , we have * step 2 * ( linearization of . 
) in this step we define and proceed exactly as in step 1 .we observe that and hence , and we also have for functions we obtain and where the bounds ( [ mnbound ] ) on wave numbers have been used .hence , for we obtain applying inequalities ( [ cs ] ) and ( [ mnbound ] ) we obtain finally , we estimate the norm integrating the identity we get therefore , applying inequalities ( [ cs ] ) and ( [ mnbound ] ) we get applying this estimate to ( [ step3thz ] ) and ( [ tr(a2)tr(a3 ) ] ) we obtain and we conclude that and hence , by coercivity of we have combining ( [ step2 ] ) and ( [ step3 ] ) we obtain ( [ linu ] ) .lemma [ lem : lin ] permits us to look for a buckling mode among those single fourier modes , whose and components are linear in .let be a constant , whose existence is guaranteed by lemma [ lem : lin ] .let let by lemma [ lem : pairb_hj ] it is sufficient to show that contains a buckling mode .let be minimizers in ( [ mninf ] ) . then , according to theorem [ th : mn ] , and contains a buckling mode .let be a buckling mode .let us show that is also a buckling mode .indeed , by lemma [ lem : lin ] taking a limit as and using the fact that is a buckling mode , we obtain hence , is also a buckling mode , since , by theorem [ th : mn ] , the pair characterizes buckling .the linearization lemma [ lem : lin ] allowed us to reduce the set of buckling modes significantly . yet , even for functions the explicit representation of the functional is extremely messy .this can be dealt with by further simplification of the functional via buckling equivalence that permits us to eliminate lower order terms that do not influence the asymptotic behavior of the functional .our first step is to simplify the denominator in by replacing the unknown function in with . here, in order to simplify the formulas we use in place of . hence , we define a new simplified functional we observe that hence , due to ( [ cs ] ) therefore , due to theorem [ th : ki ] .hence , by coercivity of .for we conclude that , due to ( [ perfect.lambda.hat.upper ] ) and ( [ mnbound ] ) , theorem [ th : bequivalence ] applies and hence the functionals and are buckling equivalent .we can also simplify the numerator of by replacing with 1 in those places , where it does not affect the asymptotics .the simplification now proceeds at the level of individual components of .we may , without loss of generality , restrict our attention to , such that of course , choosing instead of in ( [ pbmr ] ) works just as well .the choice between and in the remaining components becomes uniquely determined by the requirement that every entry in must be made up of terms that have the same kind of trigonometric function in .( we have already taken care of the same requirement in . 
)hence , the and components of must have the form where and are real scalars that determine the amplitude of the fourier modes .we compute , e(\bgf)_{r{\theta}}=\dfrac{n(f_{r}(1)-f_{r}(r))}{2r}\sin(n{\theta})\cos(mz),\\[2ex ] e(\bgf)_{rz}=\dfrac{m(f_{r}(1)-f_{r}(r))}{2}\cos(n{\theta})\sin(mz),\\[2ex ] e(\bgf)_{{\theta}{\theta}}=\dfrac{n(ra_{{\theta}}+(r-1)nf_{r}(1))+f_{r}(r)}{r}\cos(n{\theta})\cos(mz),\\[2ex ] e(\bgf)_{{\theta}z}=-\dfrac{mr^{2}a_{{\theta}}+na_{z}+(r^{2}-1)mnf_{r}(1)}{2r}\sin(n{\theta})\sin(mz),\\[2ex ] e(\bgf)_{rz}=m(a_{z}+(r-1)mf_{r}(1))\cos(n{\theta})\cos(mz ) .\end{cases}\ ] ] we can now replace with a much simpler matrix , given by e(\bgf)_{r{\theta}}=0,\\[2ex ] e(\bgf)_{rz}=0,\\[2ex ] e(\bgf)_{{\theta}{\theta}}=\dfrac{n(ra_{{\theta}}+(r-1)nf_{r}(1))+f_{r}(1)}{\sqrt{r}}\cos(n{\theta})\cos(mz),\\[2ex ] e(\bgf)_{{\theta}z}=-\dfrac{mr^{2}a_{{\theta}}+na_{z}+(r^{2}-1)mnf_{r}(1)}{2\sqrt{r}}\sin(n{\theta})\sin(mz),\\[2ex ] e(\bgf)_{rz}=\dfrac{m(a_{z}+(r-1)mf_{r}(1))}{\sqrt{r}}\cos(n{\theta})\cos(mz ) \end{cases}\ ] ] observing that we obtain via ( [ cs ] ) that similarly , hence , for every we have for the components , and we have therefore , which implies we conclude that that and thus by coercivity of .it follows that dividing this inequality by we obtain therefore , it follows that , due to ( [ perfect.lambda.hat.upper ] ) , the application of theorem [ th : bequivalence ] completes the proof . at this point the strategy for finding the asymptotic formula for the buckling load can be stated as follows .we first compute and then we find and as minimizers in the goal of the section is to prove that the functional given by ( [ rstar ] ) will now be analyzed in its explicit form . where .we minimize the numerator in with prescribed value .this can be done by minimizing the numerator in treating it as a scalar variable for each fixed : where thus , we reduce the problem of computing to finite - dimensional unconstrained minimization : where since the function to be minimized in ( [ fdmin ] ) is homogeneous of degree zero in the vector variable , we can set , without loss of generality . then , evaluating the integral in we obtain where let us show that the last term in , as well as can be discarded .let be the simplified version of .we observe that therefore , hence , if we denote then which implies that minimizing in we obtain the minimization in was too tedious to be done by hand . using computer algebra software ( maple ), we have obtained the following expression for : where is a polynomial in of degree 6 , is a polynomial in of degree 8 and is a polynomial in of degree 4 .the minimum was achieved at where is a polynomial in of degree 5 , and is a polynomial in of degree 4 .let us show that the terms do not affect the asymptotics of .let formulas ( [ l3st.formula ] ) and ( [ formula.m.n.b.l ] ) become obvious , if we observe that it is also clear from the degrees of polynomials and that and for some constant , independent of , , and . in order to show that we can also eliminate from the numerator of observe that for any constant hence , if is a minimizer in ( [ l3pr ] ) , then , as . if denotes a minimizer in ( [ l3st ] ) , then formulas ( [ l3st.formula ] ) and ( [ formula.m.n.b.l ] ) imply that , as , and thus therefore , and ( [ lim3 ] ) follows . in this sectionwe return to the original boundary conditions and the space , defined in ( [ breather ] ) .let even though , technically speaking , is not a subspace of , it is helpful to think of it as such . 
hence , our next lemma is natural ( but not entirely obvious ) . in view of theorem [ th : mn ] it is sufficient to prove the inequality this is done by repeating the arguments in the proof of the analogous inequality in theorem [ th : mn ] . the argument is based on the fact that the -periodic extension of , such that and are even and is odd , is still of class , and the expansion ( [ sexp ] ) is valid . the inequality ( [ lambda.vh>lambda.hat1 ] ) follows from ( [ mninf ] ) and the inequality ( [ mode.by.mode ] ) , which is valid for each single fourier mode . in order to prove that the asymptotic formula ( [ answer ] ) holds for ( and hence for ) it is sufficient to find a test function such that indeed , which proves both that and that is a buckling mode . we construct the buckling mode as a 2-term fourier expansion ( [ fourier ] ) . for this purpose we choose , as , and to lie on koiter 's circle and \displaystyle\phi^{h}_{{\theta}}(r,{\theta},z)=\sum_{m\in\{m(h),m(h)+2\}}f_{{\theta}}(r , m , n(h))\sin(n(h){\theta } ) \cos(\hat{m}z),\\[2ex ] \displaystyle\phi^{h}_{z}(r,{\theta},z)=\sum_{m\in\{m(h),m(h)+2\}}f_{z}(r , m , n(h))\cos(n(h){\theta } ) \sin(\hat{m}z ) , \end{cases}\ ] ] where now , in order to avoid confusion , we distinguish between and to ensure that we require the structure of coefficients is determined by optimality at each of the two values of separately , since the expansion ( [ sexp ] ) is valid for . in particular , we choose let then , and . ( figure panels omitted here ; their captions refer to values of poisson 's ratio . ) * acknowledgments . * the authors are grateful to eric clement and mark peletier for their valuable comments and suggestions . this material is based upon work supported by the national science foundation under grant no . 1008092 . r. c. tennyson . an experimental investigation of the buckling of circular cylindrical shells in axial compression using the photoelastic technique . report 102 , university of toronto , toronto , on , canada , 1964 .
|
the goal of this paper is to apply the recently developed theory of buckling of arbitrary slender bodies to a tractable yet non - trivial example of buckling in axially compressed circular cylindrical shells , regarded as three - dimensional hyperelastic bodies . the theory is based on a mathematically rigorous asymptotic analysis of the second variation of 3d , fully nonlinear elastic energy , as the shell s slenderness parameter goes to zero . our main results are a rigorous proof of the classical formula for buckling load and the explicit expressions for the relative amplitudes of displacement components in single fourier harmonics buckling modes , whose wave numbers are described by koiter s circle . this work is also a part of an effort to quantify the sensitivity of the buckling load of axially compressed cylindrical shells to imperfections of load and shape .
|
power system operations aim to optimally utilize available electricity generation resources to satisfy projected demand , at minimal cost , subject to various physical transmission and operational security constraints .traditionally , such operations involve numerous sub - tasks , including short - term load forecasting , unit commitment , economic dispatch , voltage and frequency control , and interchange scheduling between distinct operators .most recently , renewable generation units in the form of geographically distributed wind and solar farms have imposed the additional requirement to consider uncertain generation output , increasingly in conjunction with the deployment of advanced storage technologies such as pumped hydro . growth in system size and the introduction of significant generation output uncertaintycontribute to increased concerns regarding system vulnerability .large - scale blackouts , such as the northeast blackout of 2003 in north america and , more recently , the blackout of july 2012 in india , impact millions of people and result in significant economic costs .similarly , failure to accurately account for renewables output uncertainty can lead to large - scale forced outages , as in the case of ercot on february 26 , 2008 .such events have led to an increased focus on power systems reliability , with the goal of mitigating against failures due to both natural causes and intelligent adversaries .optimization methods have been applied to power system operational problems for several decades ; wood and wollenberg provide a brief overview .the coupling of state - of - the - art implementations of core optimization algorithms ( including simplex , barrier , and mixed - integer branch - and - cut algorithms ) and current computing capabilities ( e.g. , inexpensive multi - core processors ) enable optimal decision - making in real power systems .one notable example involves the unit commitment problem , which is used to determine the day - ahead schedule for all generators in a given operating region of an electricity grid .a solution to the unit commitment problem specifies , for each hour in the scheduling horizon ( typically 24 hours ) , both the set of generators that are to be operating and their corresponding projected power output levels . the solution must satisfy a large number of generator ( e.g. , ramp rates , minimum up and down times , and capacity limits ) and transmission ( e.g. , power flow and thermal limit ) constraints , achieving a minimal total production cost while satisfying forecasted demand .the unit commitment problem has been widely studied , for over three decades . for a review of the relevant literature, we refer to and the more recent .many heuristic ( e.g. , genetic algorithms , tabu search , and simulated annealing ) and mathematical optimization ( e.g. , integer programming , dynamic programming , and lagrangian relaxation ) methods have been introduced to solve the unit commitment problem . until the early 2000s ,lagrangian relaxation methods were the dominant approach used in practice .however , mixed - integer programming implementations are either currently in use or will soon be adopted by all independent system operators ( isos ) in the united states to solve the unit commitment problem .security constraints ( i.e. 
, which ensure system performance is sustained when certain components fail ) in the context of unit commitment are now a required regulatory element of power systems operations .the north american electric reliability corporation ( nerc ) develops and enforces standards to ensure power systems reliability in north america .of strongest relevance to security constraints for unit commitment is the nerc transmission planning standard ( tpl-001 - 0.1 , tpl-002 - 0b , tpl-003 - 0b , tpl-004 - 0a , ) .the tpl specifies 4 categories of operating states , labeled a through d. category a represents the baseline normal " state , during which there are no system component failures .category b represents so called - contingency states , in which a single system component has failed ( out of a total of components , including generators and transmission lines ) .nerc requires no loss - of - load in both categories a and b , which collectively represent the vast majority of observed operational states .categories c and d of the tpl represent more extreme states , in which multiple system components fail ( near ) simultaneously .large - scale blackouts , typically caused by cascading failures , are category d events . such failure states are known as - contingencies in the power system literature , where ( ) denotes the number of component failures . in contrast to categories a and b , the regulatory requirements for categories c and d are vaguely specified , e.g. , some " loss of load is allowable , and it is permissible to exceed normal transmission line capacities by unspecified amounts for brief time periods .the computational difficulty of security - constrained unit commitment is well - known , and is further a function of the specific tpl category that is being considered .the unit commitment problem subject to - reliability constraints is , given the specific regulatory requirements imposed for category b events of the tpl , addressed by system operators worldwide .however , we observe that it is often solved approximately in practice , specifically in the context of large - scale ( iso - scale ) systems .for example , a subset of contingencies based on a careful engineering analysis is often used to obtain a computationally tractable unit commitment problem .alternatively , the unit commitment problem can be solved without considering contingencies , and the solution can be subsequently screened " for validity under a subset of contingencies ( again identified by engineering analysis ) .additional constraints can then be added to the master unit commitment problem , which is then resolved ; the process repeats until there is no loss - of - load in the contingency states .we raise this issue primarily to point out that even the full - problem is not considered a solved " problem in practice , such that advances ( including those introduced in this paper ) in the solution of unit commitment problems subject to general - reliability constraints can potentially impact the practical solution of the simpler - variant .numerous researchers have introduced algorithms for solving both the security - constrained unit commitment problem and the simpler , related problem of security - constrained optimal power flow . in the latter ,the analysis is restricted to a single time period , and binary variables relating to generation unit statuses are fixed based on a pre - computed unit commitment schedule . 
provides a recent review of the literature on security - constrained optimal power flow .of specific relevance to our work is the literature on security - constrained optimal power flow in situations where large numbers of system components fail .this literature is mostly based on worst - case network interdiction analysis and includes solution methods based on bi - level and mixed - integer programming ( see ) and graph algorithms ( see ) . following the northeast us blackout of 2003 , significant attentionwas focused on developing improved solution methods for the security - constrained unit commitment problem . in particular , various researchers introduced mixed - integer programming and decomposition - based methods for more efficiently enforcing - reliability , e.g. , see .however , due to its computational complexity , security - constrained unit commitment considering the full spectrum of nerc reliability standards has not attracted a comparable level of attention until very recently .specifically , and consider the case of security - constrained unit commitment under the more general - reliability criterion .similarly , and use robust optimization methods for identifying worst - case - contingencies . in this paper, we extend the - reliability criterion to yield the more general -- criterion .this new criterion dictates that for all contingencies of size , , at least fraction of the total demand must be met , with ] , given physical ramping limitations .we now examine the impact of contingency constraints on optimal buc solutions using the 6-bus test system introduced in - .our goals are to concretely illustrate ( a ) the often significant changes in solution structure induced by the requirements to maintain -- in unit commitment , relative to the baseline and - cases , and ( b ) the redundant nature of contingency constraints , in that satisfaction of one contingency state yields solutions that can cover " many other contingency states .the original test system consists of 6 buses , 7 transmission lines , and 3 generating units .we modified this instance for purposes of our analysis as follows .we augmented the system with three additional , fast - ramping generators g4 , g5 , and g6 , located at buses 1 , 2 , and 6 , respectively .this modification ensures there is sufficient generation capacity to satisfy the -- criterion during contingency states .data for the original generator set and the three additional generators is summarized in table [ tab10 ] .transmission line data is summarized in table [ tab11 ] .[ tab10 ] consistent with - , the unit shutdown cost is negligible and assumed to be zero in our analysis . for illustrative purposes ,we only consider the buc with a single time period , with loads of 51.2 , 102.4 , and 42.8 at buses 3 , 4 , and 5 , respectively .runtime results for the full 24-hour instance are presented in section [ sec : experiments ] .[ tab11 ] a single line diagram of the 6-bus test system is shown in figure [ figure1](a ) .generator capacity bounds , transmission line capacity bounds , and loads are shown adjacent to their corresponding system elements .when contingencies are ignored , the optimal buc solution commits a single unit ( g1 at bus 1 ) , generating 196.4 mw to meet the total demand .the no - contingency economic dispatch is shown graphically in figure [ figure1](b ) .-4.5 cm in accordance with nerc s tpl standard , loss - of - load is not permitted in single - component - failure contingency states . 
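as a toy illustration of the no - contingency dispatch described above , the sketch below solves a single - period economic dispatch that simply meets the 196.4 mw total load at minimum cost , ignoring the network ; the costs and capacity limits are invented placeholders , since the generator data of table [ tab10 ] is not reproduced here .
....
# single - period , network - free economic dispatch sketch ( placeholder data only ) .
# minimize sum_i c_i * p_i  subject to  sum_i p_i = total load ,  0 <= p_i <= pmax_i .
import numpy as np
from scipy.optimize import linprog

load = 51.2 + 102.4 + 42.8                                # 196.4 mw total demand
cost = np.array([10.0, 20.0, 30.0, 40.0, 45.0, 50.0])     # hypothetical $/mwh for g1 .. g6
pmax = np.array([220.0, 100.0, 20.0, 50.0, 50.0, 50.0])   # hypothetical capacities , mw

res = linprog(c=cost,
              A_eq=np.ones((1, 6)), b_eq=[load],
              bounds=[(0.0, p) for p in pmax],
              method="highs")

print("dispatch (mw):", np.round(res.x, 1))               # the cheapest unit covers the load
print("total cost ($/h):", round(res.fun, 1))
....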
in order for the 6-bus test system to be fully - compliant , i.e. , to operate the system in such a way that there exists a post - contingency corrective recourse action for _ all _ possible - 1 contingencies ,5 generation units must be committed , as shown in figure [ figure3](a ) .of these , two units ( g1 and g3 ) provide generation capacity during the no - contingency state , while three units ( g4 , g5 , and g6 ) function as spinning reserves . unlike the practical approach of explicitly setting aside spinning reserves ( e.g. , equivalent to the capacity of the largest unit ) via constraints , our proposed ccuc model implicitly and automatically selects units to provide spinning reserves , within the context of satisfying contingency constraints .further , in contrast to the approach of explicitly allocating spinning reserves , our proposed ccuc model guarantees that there is adequate transmission capability to dispatch the generator outputs during all contingency states .the optimal --compliant buc solution shown in figure [ figure3](a ) represents the system in steady state operations , i.e. , under no observed contingency .figures [ figure3](b ) and [ figure3](c ) illustrate feasible corrective recourse power flows for single - component contingency states corresponding to the failure of generation unit 1 and transmission line 1 ( connecting buses 1 and 2 ) , respectively .the total operating cost of the - compliant solution is approximately 6.52% higher than an optimal no - contingency buc solution . -4.5cm the modified 6-bus system has 13 ( 7 transmission lines and 6 generators ) possible single - component contingency states .we observe that it is sufficient to consider _ only _ the two contingency states shown in figure [ figure3](b ) and figure [ figure3](c ) in order to achieve full - compliance .in other words , accounting for those two contingencies _ implicitly _ yields feasible corrective recourse actions for the other - contingency states . as we discuss in section [ sec : solution ] , in most practical systems only a small number of contingency states are likely to impact the optimal unit commitment solution .consequently , we design our algorithm to screen for these critical contingencies implicitly , without the need to explicitly consider all possible combinations of system component failures thus avoiding the combinatorial explosion in the number of possible contingencies . if the maximum allowable contingency size is increased to , the optimal buc solution for the 6-bus test system commits four generation units , as shown in figure [ figure2 ] .in addition to including contingencies in our analysis , we require that loads must be fully served in the no - contingency state and that a post - contingency corrective resource exist for all contingencies with zero loss - of - load , per tpl standards . for all contingencies ,the allowable loss - of - load threshold is set to , to ensure that there is sufficient slack to accommodate the loss of both transmission lines connected to bus 5 .if both transmission lines connected to bus 5 fail , then the load at that bus can not be served ; the factor corresponds to the minimal loss - of - load under this contingency . for systems with greater redundancy and flexibility , such as those presented section [ sec : experiments ], the loss - of - load threshold can be set more conservatively ( i.e. 
, lower ) .of the four committed units , one unit ( g1 ) is producing at maximum capacity and three units ( 4 , 5 , and 6 ) are producing at at levels below their maximum rating .taken together , these three units can provide up to 150mw of spinning reserves .although fewer units are committed ( 4 compared to 5 ) relative to the - solution , the two least expensive units ( g1 and g2 ) are not committed while the three most expensive units ( g4 , g5 , and g6 ) are committed in the -- compliant solution . -5.2 cm we conclude with the obvious , yet critical , observation that contingency constraints must be considered in normal ( no - contingency ) unit commitment operations in order to ensure that a feasible post - contingency corrective recourse exists for all contingency states under consideration .given the baseline unit commitment model ( buc ) and associated contingency constraints as defined respectively in sections [ sec : model - standarduc ] and [ sec : model - contingencies ] , we can now describe our full contingency - constrained unit commitment ( ccuc ) problem : the resulting unit commitment decision vector represents a minimal - cost solution that satisfies ( 1 ) the non - contingency demands for each bus for each time period , ( 2 ) the generation unit ramping constraints and startup / shutdown constraints , and ( 3 ) the network security and dc power flow constraints for each contingency , subject to loss - of - load allowances .we again note that generation costs incurred during a contingency are not considered in the objective function .rather , only power system feasibility need be maintained , subject to the loss - of - load allowances , for all .the extensive formulation ( ef ) of the ccuc problem is a large - scale milp , and has an extremely large number of variables and constraints . for large power systems and/or non - trivial contingency budgets ,the ef formulation will quickly become computationally intractable .for example , the number of constraints in the second stage of the ccuc ( which drives the overall problem size ) is approximately given as : . alternatively , the ef formulation of the ccuc problem has a structure that is amenable to a benders decomposition ( bd ) approach , which partitions the constraints in the ef formulation into ( 1 ) a buc problem prescribing the unit commitment decisions and the corresponding economic dispatch in the no - contingency state ( this is commonly referred to as the master problem in bd ) , and ( 2 ) a subproblem corresponding to each contingency feasibility check , for each contingency state and time period .the bd algorithm iterates between solving the master problem ( buc ) , to prescribe the lowest cost unit commitment and economic dispatch , and the linear subproblems , until an optimal solution with a feasible post - contingency corrective recourse for all contingency states is obtained . in the following sub - section ,we describe our benders decomposition solution method , as it is applied to ccuc .we begin by observing that given a time period , a unit commitment decision and the no - contingency generation schedule , feasibility under contingency state , as prescribed by , is contingent on satisfying the following dc power flow constraints .we refer to this problem as the contingency feasibility problem * cf* . 
for conciseness of notation , we eliminate the superscript `` '' from the and decision variables . [ mod_cf ] using the dual variables associated with each constraint set in ( [ cf_bal])-([cf_tot_q ] ) , we have , by strong duality in linear programming , that is feasible if and only if the following dual problem * dcf* is bounded : [ mod_dcf ] note that the feasible domain for * dcf* , is a polyhedral cone and any solution in the domain is a ray . by minkowski 's theorem , every such ray can be expressed as a non - negative linear combination of the extreme rays of the domain . therefore , the dual problem * dcf* is bounded if and only if its optimal objective value is less than or equal to zero . and this happens if and only if we call these the benders feasibility cuts or - for short . below we outline the benders decomposition algorithm as it is applied to ccuc .

initialization : let
solve buc
* if * buc is infeasible , ccuc has no feasible solution , exit
* else * , let be an optimal solution of buc
  * for * each , ,
    solve cf and let be the optimal objective value
    * if * unbounded add -cut to
    * end if *
  * end for *
  * if * -cut(s ) added in ( 7 ) , let and return to ( 2 )
  * else * , is an optimal solution , exit
  * end if *
* end if *

even with a bd approach ccuc may not be tractable for practical - size power systems , since for every contingency and time period , we need to ensure that a feasible dc power flow with limited loss - of - load exists , which is intractable in most cases . in this section , we describe a cutting plane algorithm that uses a bi - level separation problem to _ implicitly _ identify a contingency state that would result in the worst - case loss - of - load for each contingency size , . if the worst - case generation shedding is non - zero and/or the loss - of - load is above the allowable threshold , the current solution is infeasible , and we generate a cutting plane , corresponding to - to add to the buc to protect against this particular contingency state . given a time period and a contingency budget , unit commitment , and the no - contingency generation levels , we solve a bi - level _ power system inhibition problem _ ( psip ) , to determine the worst - case generation / load shedding under a contingency with _ exactly _ failed elements . in the context of psip , the contingency vector is no longer a parameter but a vector of upper - level decision variables . in psip , the upper - level decisions correspond to binary contingency selection decisions and the lower - level decisions correspond to the recourse generation schedule and dc power flow under the state prescribed by the unit commitment decisions , the no - contingency state economic dispatch , and the upper - level contingency selection variables .
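the overall flow of the decomposition just outlined ( master problem plus separation oracle , with feasibility cuts added until no violated contingency remains ) can be sketched as follows . the helper callables are hypothetical stand - ins for the actual milp / lp models ; none of the names come from the paper .
....
# skeleton of the cutting - plane loop ; the three callables are user - supplied stand - ins :
#   solve_master(cuts)            -> (commitment, dispatch) for the buc master problem
#   solve_separation(k, t, x, p)  -> (worst_shedding, contingency) , the psip oracle
#   make_cut(contingency, t)      -> a benders feasibility cut object
def contingency_screening(solve_master, solve_separation, make_cut,
                          time_periods, max_contingency_size, tol=1e-6):
    cuts = []
    while True:
        commitment, dispatch = solve_master(cuts)          # re - solve master with current cuts
        violated = []
        for t in time_periods:
            for k in range(1, max_contingency_size + 1):
                worst, contingency = solve_separation(k, t, commitment, dispatch)
                if worst > tol:                            # shedding above the allowance
                    violated.append(make_cut(contingency, t))
        if not violated:                                   # every contingency is covered
            return commitment, dispatch
        cuts.extend(violated)                              # add cuts and iterate
....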
before we introduce the power system inhibition problem ( psip ) , we augment the direct current power flow constraints as follows .we introduce three sets of non - negative , continuous variables corresponding to generation shedding for all , loss - of - load at each bus for all and auxiliary variable corresponding to total system loss - of - load above the allowable threshold .these variables in conjunction with additional constraints ensure that psip has a feasible recourse power flow for any unit commitment , no - contingency state economic dispatch and upper - level contingency selection decisions .we now state the power system inhibition problem .[ bpsip ] the bi - level objective seeks to maximize , the minimum generation shedding and loss - of - load quantity above the allowable threshold . since for all and non - negative variables , the objective value is bounded below by zero .if the objective value is equal to zero , the current solution has a feasible corrective recourse for all contingencies of size .otherwise , the current solution can not survive the contingency prescribed by upper - level contingency selection variables . given a contingency state defined by ,the objective of the power system operator ( the inner minimization problem ) is to find a corrective recourse power flow such that generation shedding and loss - of - load quantity above the allowable threshold is minimized .is a budget constraint on the number of power system elements , generation and/or transmission , that must be in the selected contingency state .constraints enforce power balance at each bus , with additional generation shedding variables for each generator located at a bus and a bus load - shedding variable to ensure system feasibility .constraints - are as stated in ( [ cont_const ] ) .constraints ( [ bpsip_r ] ) restrict the amount of generation shedding to be at most the generation output for each generator . constraint ( [ bpsip_s ] ) defines the amount of load shedding above the allowable threshold .if then , otherwise , .bi - level programs , such as ( [ bpsip ] ) , can not be solved directly .next , we describe a reformulation strategy to derive a mixed - integer linear programming equivalent for b - psip .we begin by fixing the upper level variables and dualizing the inner minimization problem .this results in a single - level , bilinear program with bilinear terms in the objective function . in the resulting reformulation, there are five nonlinear terms , which are products of binary contingency selection variables and continuous dual variables .each of these non - linear terms can be linearized using the following strategy .let and be two continuous variables and .then the bilinear term , , can be linearized as follows . letting , we introduce the following four constraints to linearize the bilinear term . here , parameter represents an upper bound for continuous variable and satisfies .assessing these constraints for both binary values of shows that they provide a linearization . if , then constraint ( [ lin24 ] ) implies that . 
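for reference, one standard way to state this product linearization, written in generic symbols (binary x, continuous y with upper bound M, and w standing for the product x y) that are not necessarily the paper's (lin13) / (lin24) notation, is the sketch below; the two-case check then continues in the text's own constraint labels.

\begin{aligned}
w \;&\le\; M\,x , \qquad & w \;&\le\; y , \\
w \;&\ge\; y - M\,(1-x) , \qquad & w \;&\ge\; 0 .
\end{aligned}

with 0 <= y <= M, setting x = 0 makes w <= M x and w >= 0 force w = 0 while the other two constraints are slack; setting x = 1 makes w <= y and w >= y - M(1 - x) force w = y while the other two are slack, so w = x y is recovered without any bilinear term.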
with , constraints ( [ lin13 ] )implies that , which are never binding .if , then constraints ( [ lin13 ] ) implies and constraint ( [ lin24 ] ) implies , which are never binding .we follow a similar strategy to linearize all five bilinear terms .define continuous variables and let , , , and .following the same linearization strategy introduced above , we now state the full mixed - integer linear psip formulation for completeness .[ mod_psip_full ] next , we outline an algorithm for _ optimally _ solving problem ccuc that combines a benders decomposition with the aid of an oracle given by , which acts as a separation problem .a given solution is feasible if the oracle can not find a contingency of size that results in a loss - of - load above the allowable threshold .that is , if the optimal objective value is zero . for each contingency budget , we can check for the worst - case -element contingencies by solving using a failure budget of ( i.e. the right - hand side of inequality ( [ psip_full_budget ] ) is set to , as it is right now ) .whenever the oracle determines that the current solution is _ not _ -- compliant , it returns a contingency state , prescribed by , that results in a generation shedding and/or loss - of - load , above the allowable threshold for -element failures .we now present a cutting plane algorithm , referred to as the _ contingency screening algorithm 1 ( csa1 ) _ to solve ccuc implicitly by screening for the worst - case contingency , in terms of total generation and load shedding .initialization : let solve buc * if * buc is infeasible , ccuc has no feasible solution , exit * else * , let be an optimal solution of buc 0.8 cm * for * all , , 1.6 cm solve psip and let be the optimal objective value 1.6 cm * if * 2.4 cm add -cut to 1.6 cm * end if * 0.8 cm * end for * 0.8 cm * if * -cut(s ) added in ( 7 ) , let and return to ( 2 ) 0.8 cm * else * , is an optimal -- compliant solution , exit 0.8 cm * end if * * end if * in preliminary testing using csa1 , we found that run time is significantly impacted by the need to solve a large number of psip instances at each master iteration of the algorithm .specifically , we solve one instance of psip for each contingency - size and period pair for each master iteration .the solution time of psip , as expected , is heavily impacted by the size of the power system . figure [ figure0 ] shows the solution time ( on a logarithmic scale ) of psip for various power system sizes and maximum contingency budgets .-0.5 cm -1 cm in solving ccuc using csa1 we also made three observations .first , the majority of the the total run - time was spent solving psip ( [ mod_psip_full ] ) instances .secondly , a contingency that fails the system in one time period often fails the system in other time times as well , which suggests sharing of contingencies across time periods .thirdly , in the final solution only a small number of contingencies are actually identified .that is to say , it is often prudent to consider a small number of contingencies _ explicitly _ in solving ccuc. based on these observations , we found that it is most efficient to develop a version of the csa algorithm that minimizes the number times we solve psip ( [ mod_psip_full ] ) instances and allows for sharing of contingencies across time periods .we achieve this buy using a dynamic contingency list .we begin with an empty contingency list . at each master iteration , we first screen all contingencies in the list for each time period . 
for each time period , we generate feasibility cuts for each violated contingency in the list .if none of the contingencies in the list is violated in any time period , we proceed to solving psip instances to identify a new violated contingency .this simple procedure ensures that each violated contingency identified by solving psip is never redundant .that is to say , the new contingency is not in our existing contingency list .when a new contingency is identified , we add it to the contingency list and check for its violation in all other time periods by solving a linear dcf problem .our computational results indicate that this procedure results in the fewest total psip instances solved on average , which results in the fastest run time .the key idea is that this procedure avoids redundant psip solutions to re - identify violated contingencies .this algorithm is referred to as the _ contingency screening algorithm 2 ( csa2)_. initialization : , solve buc * if * buc is infeasible , ccuc has no feasible solution , exit * else * , let be an optimal solution of buc 0.8 cm * for * each , , 1.6 cm solve cf and let be the optimal objective value 1.6 cm * if * 2.4 cm add -cut to 1.6 cm * end if * 0.8 cm * end for * 0.8 cm * if * -cut(s ) added in step ( 7 ) 1.6 cm let , return to step ( 2 ) 0.8 cm * end if * 0.8 cm * for * all , , 1.6 cm solve psip and let be the optimal objective value 1.6 cm * if * add -cut to 2.4 cm let , , return to step ( 2 ) 1.6 cm * end if * 0.8 cm * end for * * end if * is an optimal -- compliant solution , exitwe implemented our proposed models and algorithms in c++ using ibm s concert technology library 2.9 and cplex 12.1 milp solver .all experiments were performed on a workstation with two quad - core , hyper - threaded 2.93ghz intel xeon processors with 96 gb of memory .this yields a total of 16 threads allocated to each invocation of cplex .the default behavior of cplex 12.1 is to allocate a number of threads equal to the number of machine cores . in the case of hyper - threaded architectures ,each core is presented as a virtual dual - core although it is important to note that the performance is not equivalent to a true dual core .the workstation is shared by other users , such that our run - time results should be interpreted as conservative . with the exception of the optimality gap , which we set to 0.1%, we used the cplex default settings for all other parameters .all runs were allocated a maximum of seconds ( 3 hours ) of wall clock time .we executed our models and algorithms on five test systems of varying size : the 6-bus , ieee 24-bus , rts-96 , and ieee 118-bus test systems , and a simplified model of the us western interconnection ( wecc-240) . the 6-bus system described in section [ ieee6_example ] is further augmented with three fast ramping generation units located at bus 1 , 2 , and 6 , respectively , to ensure there is sufficient generation capacity for larger - size contingencies .generator data for these three units are identical to g4-g6 in table [ tab10 ] .to ensure there is sufficient operational flexibility in the wecc-240 system , we made eight transmission lines and one generation unit immune to failures .these nine elements include serial lines , pairs of transmission lines , and generation unit and transmission line pairs , whose failure would result in islanding of subsystems ( buses ) .additionally , we assume that non - dispatchable generation injections into the system can be shed during contingency states . 
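for reference, the dynamic-contingency-list loop of csa2 described above can be sketched as follows. as in the earlier sketch, solve_master, screen_dcf, solve_psip and make_cut are hypothetical placeholders for solver calls (screen_dcf is the cheap linear dcf check, solve_psip the bi-level oracle), and the tolerance is arbitrary.

def csa2(periods, k_max, solve_master, screen_dcf, solve_psip, make_cut, tol=1e-6):
    """sketch of the dynamic-contingency-list loop (csa2).
    solve_master(cuts) returns the current commitment / dispatch or None;
    screen_dcf(sol, c, t) returns the shedding above the allowed threshold for
    a stored contingency c in period t; solve_psip(sol, k, t) returns
    (worst_loss, contingency) for a failure budget of exactly k."""
    cuts, contingency_list = [], []
    while True:
        sol = solve_master(cuts)
        if sol is None:
            return None                                   # master infeasible
        # step 1: cheap screen of every stored contingency in every period
        new_cuts = [make_cut(c, t) for t in periods for c in contingency_list
                    if screen_dcf(sol, c, t) > tol]
        if new_cuts:
            cuts += new_cuts
            continue                                      # re-solve the master first
        # step 2: bi-level oracle, reached only if the stored list is satisfied
        found = False
        for t in periods:
            for k in range(1, k_max + 1):
                worst_loss, c = solve_psip(sol, k, t)
                if worst_loss > tol:
                    contingency_list.append(c)
                    cuts.append(make_cut(c, t))
                    # share the new contingency with the other periods (cheap lps)
                    cuts += [make_cut(c, s) for s in periods
                             if s != t and screen_dcf(sol, c, s) > tol]
                    found = True
                    break
            if found:
                break
        if not found:
            return sol                                    # n-k-epsilon compliant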
for each test system, we consider a 24 hour planning horizon and the four contingency budgets and , yielding a total of 20 instances .[ tab1 ] we first consider the run - times for the three different algorithms for solving the ccuc problem : the extensive form milp , benders decomposition , and the contingency screening algorithm 2 ( csa2 ) .the results are presented in table [ tab1 ] .all times are reported in wall clock ( elapsed ) seconds . the column labeled " reports the number of distinct contingencies for a given budget , while the column labeled " reports the fraction of total load ( demand ) that can be shed .entries in table [ tab1 ] reporting x " indicate that the corresponding algorithm failed to locate a -- compliant solution within the 0.1% optimality gap within the 3 hour time limit . for those instances that could not be solved within the allocated time , we provide exit status or feasibility gaps , indicating the maximum fraction of total demand shed _ above _ the allowable threshold in the final solution . in all runs of the csa2 algorithm , we initialize the contingency list to the empty list .as expected , the extensive form approach ( ef ) can only solve the smallest instances , since for each contingency , a full set of dc power flow constraints ( [ cont_const ] ) must be explicitly embedded in the formulation . as the number of contingencies grows , this formulation quickly becomes intractable .the exit status lpr " and `` om '' represent `` solving linear programming relaxation at root node '' and `` out of memory '' , respectively .note that our test instances only represent small to at most moderate sized systems relative to real power systems ( which can contain on the order of thousands to tens of thousands of elements ) , indicating that even significant advances in solver technology are unlikely to mitigate this issue .further , even given significant algorithmic advances , the memory requirements associated with the ef will likely cause the intractability to persist .the bd approach attempts to address the exponential , as shown in remark 6 , explosion in the number of contingencies via a benders reformulation / decomposition , with corresponding delayed cut generation .however , although the bd approach does not explicitly incorporate power flow constraints ( [ cont_const ] ) for each contingency into the formulation , those power flow constraints must still be solved to identify violated feasibility cuts ( which are then added to the master problem ) . in summary ,the bd approach mitigates the memory issues associated with the ef approach , but the cost of identifying feasibility violations for a rapidly growing number of contingencies remains prohibitive .overall , the bd approach can solve larger instances than the ef approach , but still fails given larger and larger test instances .finally , we consider the performance of our third approach : csa2 . 
here , we see that all of our test instances , with the sole exception of the wecc-240 system with , can be solved within the 3 hour time limit .this result is enabled by the combination of using a dynamic contingency list ( significantly reducing the number of psip solves ) and the fact that we are able to implicitly evaluate all the contingencies in order to identify a violated contingency , and then quickly find a corresponding feasibility cut by solving a single linear program ( dcf ) .these features of the csa2 algorithm allow it to mitigate the effects of a combinatorial number of contingencies and the associated impact on run - times and memory requirements .lastly , we note that although cpa2 failed to solve the wecc-240 system with within the allocate time , the final solution at the three hour mark is in fact a -- compliant solution . for large power systems and/or contingency budgets , significant computational time is required to `` prove '' feasibility .eliminating the three hour time limit , we observed that the wecc-240 system with could be solved in approximately 18 hours , with the majority of this time taken to prove feasibility of the final solution .we next examine the run - times of our csa2 algorithm in further detail , as reported in table [ tab2 ] . for each instance, we report the total number of possible contingency states and the number of contingencies for which corresponding feasibility cuts were actually generated .the latter corresponds to the final size of the dynamic contingency list , which is reported in the column labeled " .clearly , corresponds to a vanishingly small fraction of the possible number of contingencies , which is critical to the tractability of the approach .the remaining columns of table [ tab2 ] break down the total run time ( in wall clock seconds ) by the three main components of the algorithm the rmp , which identifies unit commitments ; the power system inhibition problem ( psip ) , which identifies a contingency that has no feasible corrective recourse power flow given the current rmp uc decisions and no - contingency economic dispatch ; and the contingency feasibility subproblems ( dcf ) , which yield the feasibility cuts .the final column , labeled `` cuts '' , reports the total number of feasibility cuts generated in solving the instance .it is clear from table [ tab2 ] that the computational bottleneck in the csa2 algorithm is the solution of the psip , such that any improvements to that process will yield immediate reductions in csa2 run - times .we have investigated the problem of committing generation units in power system operations , and determining a corresponding no - contingency state economic dispatch , such that the resulting solution satisfies the -- reliability criterion .this reliability criterion is a generalization of the well - known - criterion , and requires that at least fraction of the total demand is met following the failure of system components , for .we refer to this problem as the contingency - constrained unit commitment problem , or ccuc .we proposed two algorithms to solve the ccuc : one based on the benders decomposition approach , and another based on contingency screening algorithms .the latter method avoids the combinatorial explosion in the number of contingencies by seeking vulnerabilities in the current solution , and generating valid inequalities to exclude such infeasible solutions in the master problem .we tested our proposed algorithms on test systems of varying sizes .computational results 
show our proposed contingency screening algorithm ( csa2 ) , which uses a bi - level separation program to implicitly consider all contingencies and a dynamic contingency list to avoid re - identification of contingencies , significantly outperforms the benders decomposition approach .we were able to solve all test systems , with the exception of the largest wecc-240 instance , in under 3 hours .in contrast , both the benders decomposition algorithm and a straightforward solution of the ccuc extensive form , failed to solve all but the smallest instances within 3 hours .we believe that this paper will provide a significant basis for subsequent research in contingency - constrained unit commitment .for example , we are working to apply these methods to full - scale systems .while our results are promising in terms of scalability , full - scale problems pose more significant computational challenges , and consequently will require stronger formulations for the power system inhibition problem and possible adoption of high - performance computing resources .further , our current ccuc model assumes all component failures occur simultaneously . in order to reflect practical operational situations , where failures may happen consecutively , new ccuc models that consider timing between system component failures are needed .we plan to extend our ccuc models to include these cases .finally , we worked exclusively with a deterministic ccuc model to date .however , it is ultimately essential to take uncertainty into account in unit commitment , e.g. , to account for uncertain demand and renewable generation units .we believe our current cutting plane framework can be naturally extended to robust optimization and stochastic programming formulations via a nested decomposition approach .* _ acknowledgement_*. sandia national laboratories laboratory - directed research and development program and the u.s .department of energy s office of science ( advanced scientific computing research program ) funded portions of this work .sandia national laboratories is a multi - program laboratory managed and operated by sandia corporation , a wholly owned subsidiary of lockheed martin corporation , for the u.s .department of energy s national nuclear security administration under contract de - ac04 - 94al85000 .fan , n. , h. xu , f. pan , p.m. pardalos .2011 . economic analysis of the n - k power grid contingency selection and evaluation by graph algorithms and interdiction methods . _ energy syst . _ * 2*(3 - 4 ) : 313324 .hedman , k.w .ferris , r.p .oneill , e.b .fisher , s.s . oren . 2010 .co - optimization of generation unit commitment and transmission switching with n-1 reliability ._ ieee trans .power syst .* 24*(2 ) : 10521063 .b. lesieutre , a. pinar , and s. roy , power system extreme event detection : the vulnerability frontier , in _ proc .41st hawaii international conference on system sciences _ , pages 184 , waikoloa , big island , hi , 2008 .north american electric reliability corporation , transmission planning standards , accessed on april 2014 .available at http://www.nerc.com/pa/stand/reliability%20standards/forms/allitems.aspx oneill , r.p .hedman , e.r .krall , a. papavasiliou , s.s .oren . 2010. economic analysis of the n-1 reliable unit commitment and transmission swtiching problem using duality concepts . _ energy syst . _ * 1*)(2 ) : 165195 .
|
we consider the problem of minimizing costs in the generation unit commitment problem , a cornerstone in electric power system operations , while enforcing an -- reliability criterion . this reliability criterion is a generalization of the well - known - criterion , and dictates that at least fraction of the total system demand must be met following the failures of or fewer system components . we refer to this problem as the contingency - constrained unit commitment problem , or ccuc . we present a mixed - integer programming formulation of the ccuc that accounts for both transmission and generation element failures . we propose novel cutting plane algorithms that avoid the need to explicitly consider an exponential number of contingencies . computational studies are performed on several ieee test systems and a simplified model of the western us interconnection network , which demonstrate the effectiveness of our proposed methods relative to current state - of - the - art .
|
generation of turbulence - like fields ( also known as _ synthetic turbulence _ ) has received considerable attention in recent years .several schemes have been proposed with different degrees of success in reproducing various characteristics of turbulence .recently , scotti and meneveau further broadened the scope of synthetic turbulence research by demonstrating its potential in computational modeling . their innovative turbulence emulation scheme based on the _ fractal interpolation technique _ ( fit ) was found to be particularly amenable for a specific type of turbulence modeling , known as large - eddy simulation ( les , at present the most efficient technique available for high reynolds number flow simulations , in which the larger scales of motion are resolved explicitly and the smaller ones are modeled ) . the underlying idea was to explicitly reconstruct the subgrid ( unresolved ) scales from given resolved scale values ( assuming computation grid - size falls in the _ inertial range _ of turbulence ) using fit and subsequently estimate the relevant subgrid - scale ( sgs ) tensors necessary for les .simplicity , straightforward extensibility for multi - dimensional cases , and low computational complexity ( appropriate use of _ fractal calculus _ can even eliminate the computationally expensive explicit reconstruction step , see section [ sec4 ] for details ) makes this fit - based approach an attractive candidate for sgs modeling in les .although the approach of is better suited for les than any other similar scheme ( e.g. , ) , it falls short in preserving the essential small - scale properties of turbulence , such as multiaffinity ( will be defined shortly ) and non - gaussian characteristics of the probability density function ( pdf ) of velocity increments .it is the purpose of this work to extend the approach of in terms of realistic turbulence - like signal generation with all the aforementioned desirable characteristics and demonstrate its potential for les through a - priori analysis ( an les - sgs model evaluation framework ). we will also demonstrate the competence of our scheme in the emulation of passive - scalar fields for which the non - gaussian pdf and multiaffinity are significantly pronounced and can not be ignored .the fractal interpolation technique is an iterative affine mapping procedure to construct a synthetic deterministic small - scale field ( in general fractal provided certain conditions are met , see below ) given a few large - scale interpolating points ( anchor points ) .for an excellent treatise on this subject , the reader is referred to the book by barnsley . in this paper, we will limit our discussion ( without loss of generality ) only to the case of three interpolating data points : . 
for this case , the fractal interpolation iterative function system ( ifs ) is of the form , where , have the following affine transformation structure : \left ( { { \begin{array}{*{20}c } x \hfill \\ u \hfill \\ \end{array } } } \right ) + \left ( { { \begin{array}{*{20}c } { e_n } \hfill\\ { f_n } \hfill \\ \end{array } } } \right),\mbox { } n = 1,2.\ ] ] to ensure continuity , the transformations are constrained by the given data points as follows : and for the parameters can be easily determined in terms of ( known as the vertical stretching factors ) and the given anchor points by solving a linear system of equations .the attractor of the above ifs , , is the graph of a continuous function \to { \rm r} ] , and denote the haar wavelet coefficient at scale and location , spacing in physical space and length of the spatial series , respectively .the power spectrum displays the inertial range slope of , as anticipated . at this point , we would like to invoke an interesting mathematical result regarding the scaling exponent spectrum , , of the fractal interpolation ifs : where , = the number of anchor points ( in our case ) .the original formulation of was in terms of a more general scaling exponent spectrum , , rather than the structure function based spectrum .the spectrum is an exact legendre tranform of the singularity spectrum in the sense that it is valid for any order of moments ( including negative ) and any singularities . can be reliably estimated from data by the wavelet - transform modulus - maxima method . to derive equation [ eq6 ] from the original formulation , we made use of the equality : , which holds for positive and for positive singularities of hlder exponents less than unity . in turbulence ,the most probable hlder exponent is 0.33 ( corresponding to the k41 value ) and for all practical purposes the values of hlder exponents lie between and ( see ) . hence the use of the above equality is well justified . equation [ eq6 ] could be used to validate our previous claim , that the parameters of give rise to a monoaffine field ( i.e. , is a linear function of ) . if we consider [ qed ] .equation [ eq6 ] could also be used to derive the classic result of barnsley regarding the fractal dimension of ifs .it is well - known that the graph dimension ( or box - counting dimension ) is related to as follows : now , using equation [ eq6 ] we get , . 
for , we recover equation [ eq3 ] .intuitively , by prescribing several scaling exponents , ( which are known apriori from observational data ) , it is possible to solve for from the overdetermined system of equations ( equation [ eq6 ] ) .these solved parameters , , along with other easily derivable ( from the given anchor points and ) parameters ( and ) in turn can be used to construct multiaffine signals .for example , solving for the values quoted by frisch : and , along with ( corresponding to ) , yields the stretching factors .there are altogether eight possible sign combinations for the above stretching parameter magnitudes and all of them can potentially produce multiaffine fields with the aforementioned scaling exponents .however , all of them might not be the right " candidate from the les - performance perspective .rigorous a - priori and a - posteriori testing of these multiaffine sgs models is needed to elucidate this issue ( see section [ sec5 ] ) .we repeated our previous numerical experiment with the stretching parameters and .figure 3a shows the measured values ( ensemble averaged over one hundred realizations ) of the scaling exponents upto order . for comparisonwe have also shown the theoretical values computed directly from equation [ eq6 ] ( dashed line ) . a model proposed by she and lvque based on a hierarchy of fluctuation structures associated with the vortex filamentsis also shown for comparison ( dotted line ) .we chose this particular model because of its remarkable agreement with experimental data .the she and lvque model predicts : .figure 3b shows the pdfs of the increments , which are quite similar to what is observed in real turbulence for large the pdf is near gaussian while for smaller it becomes more and more peaked at the core with high tails ( see also figure 7b for the variation of flatness factors of the pdfs of increments with distance ) .in the case of an incompressible fluid , the spatially filtered navier - stokes equations are : + \nu \nabla^2 \tilde { u}_m \\ \mbox { } m , n = 1,2,3 .\nonumber\end{aligned}\ ] ] where is time , is the spatial coordinate in the -direction , is the velocity component in the -direction , is the dynamic pressure , is the density and is the molecular viscosity of the fluid .the tilde denotes the filtering operation , using a filter of characteristic width .these filtered equations are now amenable to numerical solution ( les ) on a grid with mesh - size of order , considerably larger than the smallest scale of motion ( the kolmogorov scale ) . however , the sgs stress tensor in equation [ ns2 ] , defined as is not known .it essentially represents the contribution of unresolved scales ( smaller than ) to the total momentum transport and must be parameterized ( via a sgs model ) as a function of the resolved velocity field . due to strong influence of the sgs parameterizations on the dynamics of the resolved turbulence ,considerable research efforts have been made during the past decades and several sgs models have been proposed ( see for reviews ) .the eddy - viscosity model ( ) and its variants ( e.g. , the dynamic model , the scale - dependent dynamic model ) are perhaps the most widely used sgs models .they parameterize the sgs stresses as being proportional to the resolved velocity gradients .these sgs models and other standard models ( e.g. , similarity , nonlinear , mixed models ) postulate the form of the sgs stress tensors rather than the structure of the sgs fields ( ) . 
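returning briefly to the synthetic construction of the previous subsection, the multiaffine generation procedure is straightforward to implement; the sketch below iterates the two-map ifs built on three anchor points. the anchor values, the stretching-factor magnitudes (approximately 0.887 and 0.676) and the sign choice are illustrative assumptions rather than the exact values used in the text; the magnitudes come from fitting the two-map, equal-subinterval reading of equation (6), zeta_q = 1 - log2(|d_1|^q + |d_2|^q), to typical multiaffine exponents such as the she - lévêque values, and this reading is itself a hedged reconstruction (it does recover the monoaffine k41 check and the graph-dimension relation quoted above).

import numpy as np

def predicted_zeta(q, d1, d2):
    # two-map, equal-subinterval reading of equation (6);
    # |d1| = |d2| = 2**(-1.0/3) recovers the monoaffine k41 result zeta_q = q/3
    return 1.0 - np.log2(abs(d1) ** q + abs(d2) ** q)

def fractal_series(anchors, d, n_iter=14):
    """iterate the two-map fractal-interpolation ifs built on three anchor
    points and return the attractor graph sampled on 2**n_iter + 1 points."""
    (x0, u0), (x1, u1), (x2, u2) = anchors
    xs = np.array([x0, x1, x2], float)
    us = np.array([u0, u1, u2], float)
    for _ in range(n_iter):
        px, pu = [], []
        for dn, (xl, ul, xr, ur) in zip(d, [(x0, u0, x1, u1), (x1, u1, x2, u2)]):
            a = (xr - xl) / (x2 - x0)                     # horizontal contraction
            c = (ur - ul - dn * (u2 - u0)) / (x2 - x0)    # pins the two endpoints
            px.append(a * (xs - x0) + xl)
            pu.append(c * (xs - x0) + dn * (us - u0) + ul)
        xs, us = np.concatenate(px), np.concatenate(pu)
        order = np.argsort(xs)
        xs, us = xs[order], us[order]
        keep = np.concatenate(([True], np.diff(xs) > 1e-14))  # drop duplicated junction
        xs, us = xs[keep], us[keep]
    return xs, us

d = (0.887, -0.676)   # illustrative magnitudes and one of the eight sign choices
x, u = fractal_series([(0.0, 0.0), (0.5, 1.0), (1.0, -0.3)], d)
print([round(predicted_zeta(q, *d), 2) for q in range(1, 7)])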
philosophicallya very different approach would be to explicitly reconstruct the subgrid - scales from given resolved scale values ( by exploiting the statistical structures of the unresolved turbulent fields ) using a specific mathematical tool ( e.g. , the fractal interpolation technique ) and subsequently estimate the relevant sgs tensors necessary for les .the fractal model of and our proposed multiaffine model basically represent this new class of sgs modeling , also known as the `` direct modeling of sgs turbulence '' ( ) . in section [ sec3 ], we have demonstrated that fit could be effectively used to generate synthetic turbulence fields with desirable statistical properties .in addition , barnsley s rigorous fractal calculus offers the ability to analytically evaluate any statistical moment of these synthetically generated fields , which in turn could be used for sgs modeling .detailed discussion of the fractal calculus is beyond the scope of this paper .below , we briefly summarize the equations most relevant to the present work .let us first consider the moment integral : ^ldx}. ] . for instance , the 1-d component of the sgs stress tensor reads as : barnsley ( ) proved that for the fractal interpolation ifs ( equation [ eq1 ] ) , the moment integral becomes : where , after some algebraic manipulations , the sgs stress equation at node becomes : we would like to point out that the coefficients are sole functions of the stretching factors . in other words , if one can specify the values of in advance , the sgs stress ( ) could be explicitly written in terms of the coarse - grained ( resolved ) velocity field ( ) weighted according to weights uniquely determined by . in table [ tab : t1 ] , we have listed the values corresponding to eight stretching factor combinations , .it is evident that any two combinations ( ) and ( ) are simply `` mirror '' images of each other in terms of .thus , only four distinct multiaffine sgs models ( m1 , m2 , m3 and m4 ) could be formed from the aforementioned eight combinations and in each case the orderings could be chosen at random with equal probabilities . in this table , we have also included the fractal model of and the similarity model of in expanded form similar to the multiaffine models ( see the appendix for more information on standard sgs models ) .the multiaffine models and the fractal model differ slightly in terms of filtering operation . performed filtering at a scale ( see equation [ scotti ] ) , whereas in the case of similarity model , found that it is more appropriate to filter at .for the multiaffine models , we also chose to employ filtering scale .one noticable feature in table [ tab : t1 ] is that some combinations of result in strongly asymmetric weights . as an example , in the case of m4 with and , and .this means that the sgs stress at any node would have more weight from the resolved velocity at node than node .one would expect that such an asymmetry could have serious implication in terms of sgs model performance . in the following section , we will attempt to address this issue among others by evaluating several sgs models via the a - priori analysis approach . [ cols="<,^,^,^,^,^,^,^,^",options="header " , ] next , in figure 6 , we plot the mean correlation between real and modeled sgs stress and energy dissipation rates for = 1,2,4 and 8 m. as anticipated , for all the models , the correlation decreases with increasing filtering scale . also , the correlation of real vs. 
model sgs energy dissipation rates is usually higher compared to the sgs stress scenario , as noticed by other researchers .it is expected that in the abl the scaling exponent values ( ) would deviate from the values reported in due to near - wall effect .this means that the stretching factors based on the values we used in this work , are possibly in error .nevertheless , the overall performance of the multiaffine model is beyond our expectations .it remains to be seen how the proposed sgs scheme will perform in a - posteriori analysis and such work is currently in progress .our scheme could be easily extended to synthetic passive - scalar ( any diffusive component in a fluid flow that has no dynamical effect on the fluid motion itself , e.g. , a pollutant in air , temperature in a weakly heated flow , a dye mixed in a turbulent jet or moisture mixing in air ) field generation .the statistical and dynamical characteristics ( anisotropy , intermittency , pdfs etc . ) of passive - scalars are surprisingly different from the underlying turbulent velocity field .for example , it is even possible for the passive - scalar field to exhibit intermittency in a purely gaussian velocity field .similar to the k41 , neglecting intermittency , the kolmogorov - obukhov - corrsin ( koc ) hypothesis predicts that at high reynolds and peclet numbers , the -order passive - scalar structure function will behave as : in the inertial range .experimental observations reveal that analogous to turbulent velocity , passive - scalars also exhibit anomalous scaling ( departure from the koc scaling ) .observational data also suggest that passive - scalar fields are much more intermittent than velocity fields and result in stronger anomaly . to generate synthetic passive - scalar fields , we need to determine the stretching parameters and from prescribed scaling exponents , .unlike the velocity scaling exponents , the published values ( based on experimental observations ) of higher - order passive - scalar scaling exponents display significant scatter .thus for our purpose , we used the predictions of a newly proposed passive - scalar model : .this model based on the hierarchical structure theory of shows reasonable agreement with the observed data .moreover , unlike other models , this model manages to predict that the scaling exponent is a nondecreasing function of . theoretically , this is crucial because , otherwise , if as , the passive - scalar field can not be bounded . employing equation ( [ eq6 ] ) and the scaling exponents ( upto -order ) predicted by the above model , we get the following stretching factors : .we again repeated the numerical experiment of section [ sec3 ] and selected the stretching parameter combination : and .like before , we compared the estimated [ using equation ( 4 ) ] scaling exponents from one hundred realizations with the theoretical values [ from equation ( 6 ) ] and the agreement was found to be highly satisfactory . to check whether a generated passive - scalar field ( , ) possesses more non - gaussian characteristics than its velocity counterpart ( , ) , we performed a simple numerical experiment .we generated both the velocity and passive - scalar fields from identical anchor points and also computed the corresponding flatness factors , , as a function of distance ( see figure 7b ) . 
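the flatness diagnostic used here is easy to reproduce; the sketch below defines it for an arbitrary one-dimensional series and checks it on a gaussian random walk, for which the flatness of the increments should stay near the gaussian value of 3 at every lag. the test series and the chosen lags are ours, not the paper's data.

import numpy as np

def increment_flatness(u, lags):
    """flatness factor <du^4> / <du^2>^2 of the increments of a 1-d series u at
    the given integer lags; values well above 3 signal non-gaussian, intermittent
    small-scale statistics."""
    out = []
    for l in lags:
        du = u[l:] - u[:-l]
        out.append(np.mean(du ** 4) / np.mean(du ** 2) ** 2)
    return np.array(out)

# sanity check on a gaussian random walk: flatness should stay near 3
rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(2 ** 16))
print(np.round(increment_flatness(walk, lags=[1, 4, 16, 64, 256]), 2))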
comparing fig 7a with fig 3b and also from fig 7b, one could conclude that the passive - scalar field exhibits stronger non - gaussian behavior than the velocity field , in accord with the literature .in this paper , we propose a simple yet efficient scheme to generate synthetic turbulent velocity and passive - scalar fields .this method is competitive with most of the other synthetic turbulence emulator schemes ( e.g. , ) in terms of capturing small - scale properties of turbulence and scalars ( e.g. , multiaffinity and non - gaussian characteristics of the pdf of velocity and scalar increments ) .moreover , extensive a - priori analyses of field measurements unveil the fact that this scheme could be effectively used as a sgs model in les .potentially , the proposed multiaffine sgs model can address two of the unresolved issues in les : it can systematically account for the near - wall and atmospheric stability effects on the sgs dynamics .of course , this would require some kind of universal dependence of the scaling exponents on both wall - normal distance and stability .quest for this kind of universality has began only recently .we thank alberto scotti , charles meneveau , andreas mazzino , venugopal vuruputur and boyko dodov for useful discussions .the first author is indebted to jacques lvy - vhel for his generous help .this work was partially funded by nsf and nasa grants .one of us ( sb ) was partially supported by the doctoral dissertation fellowship from the university of minnesota .all the computational resources were kindly provided by the minnesota supercomputing institute .all these supports are greatly appreciated .for 1-d surrogate sgs stress : further , by assuming that the smallest scales of the resolved motion are isotropic , the following equality holds : employing this assumption for the instantaneous fields , we can write the 1-d surrogate sgs stress could be simply written as : now , for 2 filtering this equation becomes : \ ] ] which on further simplification leads to the expression in table [ tab : t1 ] : \end{aligned}\ ] ] ^ 2dx - [ \int \limits_{\frac{1}{4}}^{\frac{3}{4 } } u(x)dx]^2 \\ & = & \frac{1}{12}(\delta_i\tilde{u})^2 + \frac { d_i(8 - 3d_i^2)}{48}\delta_i^2\tilde{u}\delta_i\tilde{u } + \nonumber \\ & & \frac{1 + 15d_i^2 - 24d_i^4 + 12d_i^6}{192(1-d_i^2)}(\delta_i^2\tilde{u})^2.\end{aligned}\ ] ] g. parisi and u. frisch , in _ proceedings of the international school on turbulence and predictability in geophysical fluid dynamics and climate dynamics _, m. ghil , r. benzi , and g. parisi ( north - holland , amsterdam , 1985 ) . in most of today s les computations ,the filtering operation is basically implicit .this means that the process of numerical discretization ( of grid size ) in itself creates the filtering effect .although explicit filtering of the grid - resolved velocity field is desirable from a theoretical point of view , limited computational resources make it impractical .however , in a - priori studies of dns , experimental or field observations , where fine resolution velocity fields are readily available , the explicit filtering ( typically with a top hat , gaussian or cutoff filter ) followed by downsampling are usually performed to compute the resolved ( and filtered ) velocity .this issue of filtering of velocity fields should not be confused with the explicit filterings done in the cases of similarity , dynamic , fractal or multiaffine models to compute the sgs stresses or other necessary sgs tensors . 
figure captions :
figure 1 : ( a ) the black dots denote the initial interpolating points . ( b ) structure functions of order 2 , 4 and 6 ( as labeled ) computed from the series in figure 1a ; the slopes corresponding to this particular realization are 0.62 , 1.25 and 1.89 , respectively . ( c ) pdfs of the normalized increments of the series in figure 1a ; the plus signs and circles refer to two different separation distances ; the solid curve designates the gaussian distribution for reference .
figure 3 : ( a ) the continuous , dashed and dotted lines denote the k41 , equation [ eq6 ] and the she - lévêque model predictions respectively ; the circles with error bars ( one standard deviation ) are values estimated over one hundred realizations ; experimental data of anselmet et al . [ 5 ] is also shown for reference ( star signs ) . ( b ) pdfs of the normalized increments of the multiaffine series ; the plus signs and circles refer to two different separation distances ; the solid curve designates the gaussian distribution for reference .
a - priori analysis figures : the results are based on 358 abl turbulent velocity series measured during several field campaigns .
figure 7 : ( a ) pdfs of the normalized increments of the passive - scalar series ; the plus signs and circles refer to two different separation distances ; the solid curve designates the gaussian distribution for reference . ( b ) the flatness factors of the pdfs of the increments of the velocity ( circles ) and passive - scalar field ( stars ) as a function of distance ; note that both fields approach the gaussian value of 3 only at large separation distances ; clearly the passive - scalar field is more non - gaussian than the velocity field .
|
fractal interpolation has been proposed in the literature as an efficient way to construct closure models for the numerical solution of coarse - grained navier - stokes equations . it is based on synthetically generating a scale - invariant subgrid - scale field and analytically evaluating its effects on large resolved scales . in this paper , we propose an extension of previous work by developing a multiaffine fractal interpolation scheme and demonstrate that it preserves not only the fractal dimension but also the higher - order structure functions and the non - gaussian probability density function of the velocity increments . extensive a - priori analyses of atmospheric boundary layer measurements further reveal that this multiaffine closure model has the potential for satisfactory performance in large - eddy simulations . the pertinence of this newly proposed methodology in the case of passive scalars is also discussed .
|
cosmological distance formulae are standard formulae for different world models in use for more than half a century and are available in review articles and text books on cosmology .the two most commonly used formulae for cosmological distant sources are the luminosity distance formula and the angular diameter distance formula .a use of either of these in conjunction with the measured values of flux densities or angular sizes of a suitable sample , for ascertaining the geometry of the universe , depends upon the assumption of the existence of a standard candle / rod , and a successful interpretation is very much dependent on the cosmological evolution of the intrinsic luminosities / physical sizes of the parent population of the sources of interest .not only the large spread in their intrinsic values makes it very uncertain to define a standard candle / rod , in fact in most cases the effects of the evolutionary changes in source properties are overwhelmingly larger than those expected due to differences in geometry between different world models .the other two formulae , which presently are being used only for nearby objects , are the proper motion distance formula and the parallax distance formula . for testing different world models , application of the parallax distance in particular , unlike other distance measures ,does not depend upon any assumption about the intrinsic properties of the observed sources .thus it is also independent of the chosen frequency band since no source property is involved , as long as the source is detectable in that band . at a given redshift, the observed parallax depends only on the chosen baseline of the observer and the world model geometry .weinberg ( 1970 ) showed that even otherwise , the measurement of redshift and luminosity ( or angular diameter ) distance can not in principle determine the sign and magnitude of the spatial curvature unless supplemented with a dynamical model .however , this ambiguity can be resolved by parallax measurements at cosmological distances . with the achieved angular resolutions already in the micro - arcsec domain ( fomalont & kobayashi 2006 ) ,it could with the advancement of technology become important rather sooner than expected .the cosmological parallax distance formulation is available in text - books on cosmology ( weinberg 1972 ; peacock 1999 ) , review articles ( von hoerner 1974 ) or reference books ( lang 1980 ) , with expressions for the different world models given there .here we show that a subtle correction is required in the standard formulation found in the contemporary literature .the correction stems from the fact that the physical dimensions of a gravitationally bound system like that of the solar system ( or for that matter even of larger systems like that of a galaxy ) , do not change with the cosmological expansion of the universe .therefore two ends of a baseline , used by the observer for parallax measurements , do not partake in the free - fall - like motion of the cosmic fluid and thus can not be considered to form a set of co - moving co - ordinates , contrary to what seems to have been implicitly assumed in the standard text - book derivations of the parallax distance formula .the correction can be very large ( a factor of three or even more ) at large redshifts . in any case , irrespective of the amount of corrections involved , it is imperative to have formulae bereft of any shortcomings . 
in the next sectionwe shall briefly review the formulation given in the literature and in the section following that , we spell out the required corrections .in a homogeneous and isotropic universe , the line element can be expressed in the robertson - walker metric form , \;,\end{aligned}\ ] ] where , a function of time , is known as the cosmic scale factor , is the curvature index that can take one of the three possible values or and are the time - independent co - moving co - ordinates . from einstein s field equations, one can relate the curvature index and the present values of the cosmic scale factor to the hubble constant , the matter energy density and the vacuum energy ( dark energy ) density as ( peacock 1999 ) , the space is flat ( ) if . as shown by weinberg ( 1972 ), an observer using a baseline to make measurements of a source at radial co - ordinate will infer a parallax angle , defining a parallax distance , as in euclidean geometry , by we can write , in general it is not possible to express in terms of the cosmological redshift of the source in a close - form analytical expression and one may have to evaluate it numerically .for example , in the world - models , is given by , and since , one can write , for a given , one can evaluate from ( 5 ) by a numerical integration .however for cosmologies , where the deceleration parameter , it is possible to express in an analytical form ( mattig 1959 ) , also equation ( 1 ) now becomes , then it is straight forward to get , ^{1/2}}\;,\end{aligned}\ ] ] which is the expression derived by weinberg ( 1972 ) , quoted in lang ( 1980 ) and also used by von hoerner ( 1974 ) .one can get rid of the square - root in the denominator ( peacock 1999 ) to write equation ( 8) in an alternative form , the derivation of the expression for the parallax angle ( weinberg 1972 ) , while tracing the light path from the source to the observer , the baseline ends were defined by co - moving co - ordinates .but the two ends of any rod or baseline , be it the sun - earth line or some larger baseline in the solar system ( or still larger ones but as long as one is confined within a gravitationally bound system like our galaxy ) , can not be freely falling with the expanding cosmic fluid as the distance between the two ends of the rod is taken to be fixed .one can consider one end of the baseline to be at rest with respect to the underlying cosmic fluid , but then the other end , at a fixed proper distance , will have a velocity with respect to the underlying co - moving sub - stratum .that means the second end of the baseline , because of its motion with respect to the co - moving sub - stratum , will have an aberration , .this will add to the value of the parallax angle as given by ( 2 ) and the actually measured parallax angle will be , \;.\end{aligned}\ ] ] then for the parallax distance , defined by , we can write , in the cosmologies , the modified formula for parallax distance now becomes , } { \left[\left(q_0 - 2\right)\left(q_0 - 1-q_0z\right)+\left(3q_0 - 2\right)\sqrt{1 + 2q_0z}\right]}\;.\end{aligned}\ ] ] actually the relations ( 6 ) , ( 8) , ( 9 ) and ( 12 ) are clumsy when to be used for small and as then one has to evaluate these expressions in the limit .terrell ( 1977 ) provided a much simpler form for as an alternative to mattig s expression , } { \left[1+q_0z+\sqrt{1 + 2q_0z}\right]}\;.\end{aligned}\ ] ] using equation ( 3 ) and ( 13 ) , after some algebraic manipulations we get as , } { \left[(1+z+z^2+q_0z - q_0z^2)+(1+z)\sqrt{1 + 
2q_0z}\right]}\;.\end{aligned}\ ] ] similarly one gets a much simpler form for also , } { \left[(1 + 2z+2z^2+q_0z - q_0z^2)+(1 + 2z)\sqrt{1 + 2q_0z}\right]}\;.\end{aligned}\ ] ] the expressions ( 14 ) and ( 15 ) are much simpler to use , especially when evaluating for small or values as one can avoid going through the process of taking a limit .as both the parallax and the correction for the parallax angle are proportional to the baseline , the relative correction to the parallax distance is independent of the length of the baseline and could become appreciable at large redshifts .figure ( 1 ) shows a plot of the text - book expression as well as the corrected expression , for different world models ( ) in the cosmologies .we see that at large redshifts the corrected values for the parallax distance are at least a factor of two or three lower .also the parallax distance does not seem to increase indefinitely with redshift . from the expressions ( 14 ) for find that as , reduces to ] .thus the corrected parallax distance values are smaller by a factor at large redshifts , which for case ( milne s world model ) is a reduction factor of 2 and in the flat space ( ) it is a factor of 3 , the factor is still larger for positive curvature spaces with .recent observations indicate that and the space may be flat with .in such a case , has to be evaluated numerically from , where figure ( 2 ) shows a plot of parallax distance with redshift for the flat space for different values , including the most likely value as inferred from the wmap observations ( hinshaw et al .we see that at large redshifts , the parallax distances calculated from the modified expressions could be lower than from the uncorrected ones by as much as a factor of three or more in the currently favoured cosmologies ( viz . ) .
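these curves are simple to reproduce numerically. the sketch below integrates the comoving distance in a spatially flat model and applies the correction in the form 1/d_corr = 1/d_unc + h0/c, i.e. an aberration term h0 b / c added to the parallax angle; this is one reading of the modified relations above, and the density parameters and h0 value are illustrative choices, so only the two marked lines change if a different form or cosmology is intended.

import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 2.998e5, 70.0          # km/s and km/s/mpc; the h0 value is illustrative
OM, OL = 0.27, 0.73                # flat matter + lambda model used for illustration

def comoving_distance(z):
    E = lambda zp: np.sqrt(OM * (1.0 + zp) ** 3 + OL)
    return (C_KMS / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]   # in mpc

def parallax_distances(z):
    """uncorrected vs corrected parallax distance in a spatially flat model.
    in flat space the textbook parallax distance equals the comoving distance."""
    d_unc = comoving_distance(z)                  # uncorrected (textbook) value
    d_corr = d_unc / (1.0 + d_unc * H0 / C_KMS)   # assumed aberration correction
    return d_unc, d_corr

for z in (0.1, 1.0, 5.0, 100.0):
    d_unc, d_corr = parallax_distances(z)
    print(z, round(d_unc), round(d_corr), round(d_unc / d_corr, 2))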
|
it is shown that the standard cosmological parallax distance formula , as found in the literature , including text - books on cosmology , requires a correction . this correction arises from the fact that any chosen baseline in a gravitationally bound system does not partake in the cosmological expansion and therefore two ends of the baseline used by the observer for parallax measurements can not form a set of co - moving co - ordinates , contrary to what seems to have been implicitly assumed in the standard text - book derivation of the parallax distance formula . at large redshifts , the correction in parallax distance could be as large as a factor of three or more , in the currently favoured cosmologies ( viz . ) . even otherwise , irrespective of the amount of corrections involved , it is necessary to have formulae bereft of any shortcomings . we further show that the parallax distance does not increase indefinitely with redshift and that even the farthest observable point ( i.e. , at redshift approaching infinity ) will have a finite parallax value , a factor that needs to be carefully taken into account when using distant objects as the background field against which the parallax of a foreground object is to be measured .
|
the use of stochastic processes in interdisciplinary modelling has a long history dating back at least to bachelier s seminal work in finance and encompassing applications to traffic flow , biological processes and opinion dynamics , among others .often such systems are analysed using a markovian ( or memoryless ) approximation which considerably simplifies the theoretical treatment .however , within the statistical mechanics community there is much topical interest in characterizing the properties of non - markovian models .there are many ways to incorporate memory effects including generalized langevin or fokker - planck approaches , and the assumption of internal variables or non - exponential waiting times in many - particle microscopic models . at the random walk level , recent analytical studies in the physics literaturehave included the imaginatively named `` elephant '' random walker who remembers a property of the entire history , the `` alzheimer '' random walker who recalls just the distant past , and `` bold '' and `` timorous '' random walkers who behave differently only when they are at the furthest point ever attained .in fact , the elephant random walk can also be related to the older plya urn problem ; see for a mathematical review of this and other random processes with reinforcement . amongst the more recent rigorous resultsare some for the `` excited '' ( or `` cookie '' ) random walk , random walks with different kinds of self - interaction , and those with internal states . in real - life social and economic scenarios the dependence on memory is , of course ,rather complex .however , one psychological heuristic is the `` peak - end rule '' suggested by kahneman et al .this asserts that the remembered utility ( loosely speaking the pleasure or pain experienced ) of a specific situation / episode is approximately given by the average of the peak experience ( best or worst ) during the event and the final experience of that event .notice in particular that this implies `` duration neglect '' in the sense that the extreme and final snapshots are considerably more important in the memory than the overall length of the experience ( even if it is an unpleasant one ! ) . empirical support for this peak - end approximation comes from situations ranging from the pain of medical procedures to the pleasure of material goods . whilst other work paints a more complicated picture , particularly for extended events ,it is clear that peak experiences play an important role and , to the best of our knowledge , such memory of extreme values is largely unexplored from the perspective of statistical physics . in this spirit ,our contribution is to consider a random walk model where the probability of moving left or right depends on the maximum value of a random variable associated to each time step .as we will show , this can be thought of as a simple discrete choice model with a dependence on the `` peak '' of past experience . in particular, we use this framework to investigate whether increased noise in the model ( corresponding perhaps to the `` churn '' of changing circumstances or some kind of disruption , cf .e.g. , ) , always leads to more switching between decisions . 
using the mathematics of extreme values ,we show that the answer to this question depends on the distribution of the random variable encoding the experience at each step .our work thus helps to shed light on real - world issues as well as contributing to building up general understanding of memory effects in statistical mechanics models .the remainder of the paper is structured as follows . in section[ s : setup ] we describe our random walk formalism and explain its significance as an opinion choice model as well as the manner in which it extends previous work on generalized plya urns . in section [s : extreme ] we employ extreme value theory to develop a heuristic argument for different classes of long - time behaviour depending on the distribution of past experience , and compare our predictions with simulations .finally , in section [ s : dis ] , we conclude with a discussion of implications and open questions .we consider a one - dimensional random walker who steps right or left in discrete time , denoting by the number of steps right up to time and the corresponding number left .note that by construction . for later convenience, we also define the corresponding time averages ( `` velocities '' ) and , suppressing the notational dependence on where no confusion should arise .in addition , at each time step we associate an independent identically distributed ( i.i.d . )random variable from some known distribution with cumulative distribution function ( c.d.f . ) .crucially , the walker `` remembers '' the maximum value of for all rightward steps in its history and , separately , the maximum value of for all leftward steps .we denote these history - dependent random variables by and respectively so that formally we have memory is then built into the dynamics via the setting of left and right hopping probabilities for the next step to depend on the current values of and .it is clear that the system is non - markovian in position space although , of course , still markovian in an enlarged state space including and .the central idea is that this set - up is analogous to a single agent in a discrete decision model where is some kind of `` utility '' and the agent remembers its extreme value ( corresponding to the `` peak '' part of kahneman s peak - end rule ) for each of two choices .specifically , we fix the right and left stepping probabilities as functions of the random variables and to accord with the familiar `` logit '' choices of economic theory where the positive parameter represents the level of noise in the decision .is chosen here to reinforce the analogy with temperature in a physical system and is not to be confused with a time parameter ; the update probabilities loosely resemble the glauber refreshment formulae for an ising model . ] throughout the paper we set so that the two choices ( step directions ) are initially equally likely ; as the system evolves the jump probabilities become asymmetric due to differing values of , and . 
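to make the set - up concrete , the following python sketch implements one reading of the update rule : at each step the walker chooses a direction with logit probabilities built from the two running maxima , draws a fresh utility sample , and updates the maximum associated with the direction actually taken . the function and variable names , the use of an exponential utility of unit mean , and the specific noise values are our own illustrative assumptions rather than notation fixed by the text .

```python
import numpy as np

def simulate_walker(T=2.0, beta=1.0, n_steps=10_000, rng=None):
    """one peak-memory random walker with logit step probabilities.

    T    -- noise ("temperature") parameter of the logit rule (assumed name)
    beta -- mean of the exponential utility distribution (illustrative choice)
    returns the time-averaged velocity (n_right - n_left) / n_steps
    """
    rng = np.random.default_rng() if rng is None else rng
    m_right, m_left = 0.0, 0.0       # remembered maxima for right / left steps
    n_right = 0
    for _ in range(n_steps):
        # logit choice between the two directions given the current maxima
        p_right = 1.0 / (1.0 + np.exp(-(m_right - m_left) / T))
        step_right = rng.random() < p_right
        eta = rng.exponential(beta)  # fresh i.i.d. utility for this time step
        if step_right:
            n_right += 1
            m_right = max(m_right, eta)
        else:
            m_left = max(m_left, eta)
    return 2.0 * n_right / n_steps - 1.0

print(simulate_walker(T=0.5), simulate_walker(T=2.0))
```

for an exponential utility of unit mean the analysis below suggests a transition at a noise level comparable to that mean , so the two calls above should typically return a strongly asymmetric and a nearly symmetric velocity respectively .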
in particular , note that and are monotonically increasing with the number of steps right and left respectively .in passing , we note here that if were deterministic functions of the velocities the model would closely resemble the plya urn problem , familiar in the mathematics literature , where the probability of selecting a ball of a particular colour depends on the fraction of that colour chosen previously ( in a similar manner , the elephant random walker of steps left and right with probabilities depending on the relative number of such steps in his past ) . if , as here , the probability function is nonlinear , the urn model is known as a generalized plya process .the crucial difference in our model is that fluctuate in a correlated way due to the statistics of the extreme values we seek to determine the effect this has on long - time properties such as the average velocity of the random walker ( or , equivalently , the proportion of time the agent makes each decision ) .one might naively expect that , as for the plya models , in the large limit our random walker approaches a fixed - point state where the relative probabilities ( and hence the fraction of steps left and right ) do not change .the symmetry of suggests two specific types of fixed point : ( i ) with and asymptotically equal and hence symmetric behaviour of the random walker , i.e. , both choices equally likely in the long run ; ( ii ) or with one of or negligibly small with respect to the other and hence an asymmetric random walker moving only in one direction , i.e. , the agent frozen in one or other choice .we shall demonstrate the existence of these fixed points more carefully later .for now , we remark that a pertinent question relates to their stability , in particular , whether the symmetric fixed point can be made stable by increasing the noise. this could be important for sharing the load between two different choices ( e.g. , two different routes or transport options ) . 
in the next section, we address this issue for different distributions of before , in section [ s : dis ] , considering the added effect of the `` end '' part of the peak - end rule .the behaviour of the model will obviously depend on the distribution of .our strategy is first to analyse the typical long - time dynamics by approximating in by the so - called `` characteristic largest value '' of extreme value theory and then , where relevant , to consider the added effect of fluctuations about this .the characteristic largest value after trials is defined for a given as the value of at which .it gives a straightforward way to obtain the scaling of the maximum value and is closely related to other properties of the full distribution , as we shall see for various cases in the following subsections .our approach using the characteristic largest value leads to hopping probabilities depending on the number of left / right steps over the whole previous history and is thus in the spirit of the generalized plya urn models mentioned above or continuous - time analogues with current - dependent hopping rates .one subtlety here is that the resulting probabilities in our model depend directly on the number of steps left and right , , not the fractions , .depending on the functional form of the characteristic largest value this may introduce an explicit time dependence in the dynamics for as we shall see in some of the subsequent examples .in fact , since by construction , this procedure enables us for a given utility distribution to write simply in terms of and possibly time .now , if the random variable takes value , the mean distance moved in the next step is given by the corresponding value of which we denote by .hence , on average , we expect a `` typical trajectory '' given by the discrete mapping for cases where has no dependence on it is immediately clear from that fixed points should satisfy and a standard `` cobweb''-type construction predicts that the stability of the fixed points is determined by the slope of the increasing function , see figure [ f : cobweb ]. specifically , if [ figure [ f : cobweb](a ) ] then small fluctuations below are characterized by , so on average the velocity increases back towards the fixed point .similarly , fluctuations above have , so on average the velocity decreases , again back towards the fixed point .an analogous argument shows that fixed points with are unstable [ figure [ f : cobweb](b ) ] .notice that due to the time dependence of the mapping the decay towards fixed points is expected to be power - law rather than exponential in nature physically this is because as the measurement time increases the last step has a smaller and smaller effect on the overall time average . in the following subsections we illustrate this approach for three qualitatively different scenarios corresponding to the three known families of extreme value theory .( in cases where itself depends on time we shall chiefly be interested in its behaviour as . )we then confront the predictions with simulation results and discuss how fluctuations in the extreme values modify the picture of typical behaviour given above .to demonstrate the method , we first look in detail at the case where the utility variable has an exponential distribution with c.d.f. 
here the characteristic largest value after steps is given by so , substituting for in , we approximate in the long - time limit by or equivalently , in terms of the time averages , in this particular case , the probabilities can be written in terms of without explicit time dependence illustrating a direct connection to the class of elephant random walker and ( time - homogeneous ) plya urn problems . to determine the fixed points we further write in terms of the net velocity by substituting to obtain the function specifying the mean displacement of the next step can be compactly written as where is the current value of the velocity .the fixed points satisfying are then seen by inspection to be as predicted from symmetry arguments .recall from the previous subsection that to determine which of these fixed points is stable we need to check the slope ; here it is straightforward to show from that hence if , the mixed solution is stable and the asymmetric frozen solutions and correspondingly unstable .similarly , for , the mixed solution is unstable and we predict that the random walker becomes frozen into ballistic motion in one of the two directions . to check this heuristicargument we appeal to monte carlo simulations in figure [ f : expeg ] we show the empirical distribution of velocities at for an exponential utility distribution ( with mean ) and values of noise predicted to correspond to the two different cases ( and ) . for random walkers with dependence on peak values of exponential utility distribution ( ) and two different noise levels ( and ) .distribution calculated from trajectories each running up to final time .,scaledwidth=80.0% ] we see good qualitative agreement of the simulations with the prediction : in the low - noise case the trajectories are sharply peaked around the asymmetric fixed points , i.e. , ( corresponding to each agent almost always making the same choice ) whilst in the high - noise case the trajectories are clustered around the symmetric fixed point , i.e. , ( corresponding to each agent sampling the two choices approximately equally ) .however , there is a finite width of the distribution about the fixed point(s ) even for significantly greater than unity to investigate this more systematically , and reveal possible finite - time effects , we plot in figure [ f : range ] the standard deviation of the distribution as a function of for increasing measurement times . for random walkers with dependence on peak values of exponential utility distribution ( ) and range of noise values .points show simulation results for increasing times ( trajectories in each case ) ; solid line is numerical solution of ; dashed line is approximation .,scaledwidth=80.0% ] this quantifies how close the trajectories end to symmetric or asymmetric fixed points without making a distinction between the two asymmetric states ( whose selection is expected to depend sensitively on the agent s first few choices ) .according to the analysis of typical behaviour given above , we expect the standard deviation to be unity for and zero for .in fact , although the simulations do show evidence of a transition around , the situation is somewhat more complicated ; in particular , the standard deviation clearly converges to a finite value even for .these observed results for the standard deviation suggest that , even in the long - time limit , the properties of the model are sensitive to the full distribution of maximum values not just the characteristic largest value . 
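a quick numerical illustration of the bimodal versus unimodal behaviour can be obtained by running many independent walkers at a low and a high noise value and inspecting the spread of final velocities . the vectorized sketch below is self - contained ; the trajectory counts , run lengths and noise values are deliberately small illustrative choices and are not the ones used for the figures .

```python
import numpy as np

def final_velocities(T, n_traj=500, n_steps=2000, beta=1.0, seed=0):
    """final time-averaged velocities of n_traj independent peak-memory walkers."""
    rng = np.random.default_rng(seed)
    m_r = np.zeros(n_traj)           # running maxima for right steps
    m_l = np.zeros(n_traj)           # running maxima for left steps
    n_r = np.zeros(n_traj)
    for _ in range(n_steps):
        p_r = 1.0 / (1.0 + np.exp(-(m_r - m_l) / T))
        right = rng.random(n_traj) < p_r
        eta = rng.exponential(beta, n_traj)
        m_r = np.where(right, np.maximum(m_r, eta), m_r)
        m_l = np.where(~right, np.maximum(m_l, eta), m_l)
        n_r += right
    return 2.0 * n_r / n_steps - 1.0

for T in (0.5, 2.0):                 # below / above the expected transition for unit mean
    v = final_velocities(T)
    print(f"T={T}: std(v)={v.std():.2f}, fraction with |v|>0.8: {(np.abs(v) > 0.8).mean():.2f}")
```

the low - noise run should put most of its mass near the asymmetric values while the high - noise run clusters around zero with a finite width , mirroring the standard - deviation behaviour described above .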
as further evidence of this , we remark that if _ were _ given deterministically by the characteristic largest value , the variance for could only be due to decay towards the stable fixed point and fluctuation of individual trajectories about the typical behaviour . in this case, would be expected to obey a large deviation principle with some `` speed '' and the variance would eventually converge to zero , as confirmed in [ a : det ] where , for comparative purposes , we present simulation results from an artificial model with at every time step set equal to .it is clear then that the limiting value of the variance in the full model is determined by fluctuations in the extreme values , leading to fluctuations of the typical trajectories themselves . in the case of an exponential distributionit is , of course , well known that the limiting form of the rescaled maximum has a gumbel distribution ; here the c.d.f .of is asymptotically given by where and ( see , e.g. , and references therein ) .the mode coincides with the characteristic largest value calculated earlier while the mean is ( with the euler - mascheroni constant ) so differs from it only by a constant amount .taking account of the fluctuations , the maximum value random variables thus obey where the distribution of is given by the difference of the two gumbel distributions as a logistic distribution with mean zero and variance .substituting the form of in the expression for and repeating the calculations leading to one finds that for a given , non - zero , value of the position of the `` symmetric '' fixed point is shifted from zero although both its stability and the position of the asymmetric fixed points remain unchanged .a crude estimate of the standard deviation in the position of the symmetric fixed point can be obtained as the value of which solves the transcendental equation as seen in figure [ f : range ] this leads to a loose upper bound on the observed standard deviation .the actual standard deviation is smaller because the value of and the corresponding fixed point changes during the course of each trajectory . in[ a : var ] , we include this effect within a linear expansion to obtain an analytical expression for which is a better approximation for large ( see , again , figure [ f : range ] ) .notwithstanding the finite variance , the claim that one can control the long - time behaviour by increasing / decreasing the noise is well borne out by simulation .for example , in figure [ f : control ] we show the evolution of the standard deviation in a scenario where the noise level ( and hence the stability of the fixed points ) is abruptly changed after the first 500 time steps . against time for random walkers with dependence on peak values of exponential utility distribution ( ) and change in noise level at : to in red ( symbols ) , to in green ( symbols ) .inset shows final ( ) velocity histograms in the two cases .all calculations from trajectories.,scaledwidth=80.0% ] the cornerstone of extreme value theory , the fisher - tippett - gnedenko theorem , asserts that the gumbel distribution is universal for the rescaled maximum of i.i.d .random variables drawn from a distribution with exponential tails . however , the functional form of the scaling parameters depends on the distribution being considered . 
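the characteristic largest value itself is just the ( 1 - 1/t ) quantile of the utility distribution , so its growth with t can be tabulated directly for the families considered in the following subsections . in the sketch below the exponential , gaussian , pareto and uniform parameters are illustrative choices of ours ( the pareto pair is picked to give unit mean ) and are not values quoted in the text .

```python
import numpy as np
from scipy import stats

def characteristic_largest_value(dist, t):
    """x_t solving 1 - F(x_t) = 1/t, i.e. the (1 - 1/t) quantile of dist."""
    return dist.ppf(1.0 - 1.0 / t)

families = {
    "exponential, mean 1": stats.expon(scale=1.0),
    "gaussian, mean 1, sd 1": stats.norm(loc=1.0, scale=1.0),
    "pareto, alpha 3, x_m 2/3": stats.pareto(b=3.0, scale=2.0 / 3.0),
    "uniform on [0, 2]": stats.uniform(loc=0.0, scale=2.0),
}

for t in (1e2, 1e4, 1e6):
    values = {name: round(float(characteristic_largest_value(d, t)), 3)
              for name, d in families.items()}
    print(int(t), values)
```

the table this prints shows the four qualitative behaviours discussed next : logarithmic growth for the exponential , a slower square - root - of - log growth for the gaussian , power - law growth for the pareto , and saturation at the upper bound for the uniform case .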
as a second example, we now make the arguably reasonable hypothesis that agents assign utilities according to a gaussian with some mean and standard deviation .the mode of the limiting distribution , again given by the characteristic largest value , is where is the c.d.f . of the standard normal distribution .we note that , in this case , retains a logarithmic dependence on growing like as . ignoring the fluctuations about this value ,an analogous argument to that given above then yields for large } - \sqrt{2\ln\left[\frac{(1-v)t}{2}\right ] } \right\ } \right).\ ] ] it is clear that is a fixed point for all and its stability is controlled by the slope as the slope tends to zero and hence we predict that the symmetric fixed point is always stable in the long run .however , since the dependence is only logarithmic in , one still expects to see a noise - controlled transition for large but finite times .this is supported by the simulation results for standard deviation shown in figure [ f : rangeg ] .for comparison , we have set there the first two moments equal to those of the exponential distribution in figure [ f : range ] and the picture for the gaussian case is qualitatively similar with the transition between low and high - noise regimes only weakly dependent on .but for gaussian utility distribution ( , ) ., scaledwidth=80.0% ] to complete the story , we can again consider fluctuations of the maximum values . in this case , the width of the gumbel distribution is controlled by which decays to zero as ( again see , e.g. , ) .hence , in contrast to the exponential case , we do not expect a finite limiting velocity variance in the high - noise regime and indeed the relevant simulation data do seem to show a slow convergence towards zero .a similar argument applies to other distributions with exponential tails the characteristic largest value of converges to the mode of the corresponding gumbel distribution and generically grows as where the power determines the long - time stability of the symmetric fixed point via the exponential distribution corresponds to the special case of , while for we expect long - term stability of the symmetric fixed point ( ) and , for we expect long - term instability ( ) . for intermediate timescales, the system can be driven towards either the symmetric mixed state or the asymmetric frozen state by increasing or decreasing the noise , as we demonstrate for the gaussian distribution in figure [ f : controlg ] .but for gaussian utility distribution ( , ) .note that in the low - to - high noise case with these parameters , two peaks are still visible in the velocity histogram at ; they are a residual effect from the first 500 time steps and merge if the final time is further increased ., scaledwidth=80.0% ] the second class of extreme value statistics corresponds to distributions with power - law tails as typified by the pareto distribution with c.d.f. where is a lower bound and . in this case onefinds that the characteristic largest value after steps is given by leading to the approximation and hence , by the same method as previously , ^{1/\alpha } - \left[\frac{1-v}{2}\right]^{1/\alpha } \right\ } \right).\ ] ] again , for all we find a symmetric fixed point at , here with stability determined by the slope which is greater than unity for .in fact , in the limit , approaches the step function with corresponding stable fixed points at . in this pareto case , it is straightforward to show that for large the maximum value has approximately a frchet c.d.f. 
where the scale parameter is given by .the mean of this distribution is only finite for but the mode and the median are both proportional to so , again , the trivially calculated characteristic largest value should give a good indication of the long - time behaviour .this is confirmed in figure [ f : rangep ] where the standard deviation of the velocity against noise strength is plotted for a case where the utility has a pareto distribution with unit mean ( , ) .but for pareto utility distribution ( , ) ., scaledwidth=80.0% ] for all values of , the velocity variance converges towards unity ( corresponding to individual trajectories approaching the asymmetric fixed points at ) .we have also checked that the convergence is faster for smaller values of ( `` longer tails '' ) , noting in particular that the distribution of has infinite mean for . more generally , the frchet distribution is the limiting form for the rescaled maximum of i.i.d .random variables drawn from any distribution with power - law tails . in all such cases we expect that increases as some power of , leading each agent to ultimately become frozen in a pure state corresponding to one or other choice .we remark that this power - law dependence is stronger than the logarithmic form found in section [ ss : exp ] ; even by increasing the noise we only expect to be able to favour the mixed state for short timescales , e.g. , up to the order of for the pareto distribution considered above . finally , we consider distributions of with finite upper bound ( as might be appropriate , for instance , if an agent s memory is based on some predetermined numerical scale with given minimum and maximum ) .the obvious example is a uniform distribution with c.d.f. whose characteristic largest value after steps is given by .notice that , in contrast to the previous examples , this converges to a finite constant as which is an elementary consequence of the upper bound on the underlying distribution and already gives a hint at the long - time behaviour . in this case , following our previous heuristic procedure we have and the slope at the symmetric fixed point is given by which is less than unity for and tends to zero as .hence we argue that the symmetric fixed point is always stable for long enough times ( regardless of noise strength ) .this conclusion is supported by the simulation data in figure [ f : rangeu ] .but for uniform utility distribution ( , ) ., scaledwidth=80.0% ] the observed behaviour of the variance for very small can be roughly explained by noting that , for this version of the model , the walker can become stuck for finite times in a metastable fixed point at . to see this , we plot in figure [ f : uncob ] the function of andexamine its intersections with the line , for fixed and increasing . 
given by for low noise and increasing times .note intersections with diagonal ( solid red line ) and compare sketch graphs in figure [ f : cobweb ] .parameters are , , and .,scaledwidth=80.0% ] notice that , in this case , for both symmetric _ and _ asymmetric fixed points are stable but separated by an unstable point whose position tends to as .the corresponding potential landscape has metastable states at and a trajectory can be trapped in such a state until fluctuations drive it over the barrier ( whose relative height decreases with time ) to the global minimum at .we emphasize that , since the fixed point at is always stable except for very short times , the long - time behaviour of the system can not be effectively controlled by altering the noise ( confirmed by further simulations , not shown ) .it is easy to show that , for large , the maximum of i.i.d .uniform random variables has approximately the reversed ( unit ) weibull distribution with scale parameter and mean coinciding with the characteristic largest value calculated above .however , once again , the argument is more broadly applicable for bounded distributions the limiting distribution of the rescaled maximum is generically reversed weibull ( also known as `` type iii '' extreme value ) with mean and median typically approaching the upper bound as some inverse power of the number of trials . in all such cases , as , meaning each agent is expected to ultimately end up in the mixed state with both choices equally likely .physically , it is clear that in the long - time limit the system approaches a standard memoryless diffusive model with and fixed and equal .in this paper we have performed a detailed analysis of random walkers with peak memory dependence .commensurate with the original motivation of kahneman s peak - end rule , we now make some comments on the effect of also including an explicit dependence on the final value of the utility .to be precise , we consider the mean of peak and final experience so that the right and left hopping probabilities in are replaced by where here ( ) is the value of corresponding to the last step right ( left ) .note that the are in general smaller and much less strongly correlated than the so we might expect their effect to cancel out on average in the long - time limit . at the same time , the dependence on is here weakened in the sense that the values are now divided by rather than .the simulation results in figure [ f : pe ] , for the case of an exponential utility distribution , confirm that this modified model behaves very similarly to an increased noise version of the original model with the replacement of by . for random walkers with dependence on peak _ and _ end values of exponential utility distribution ( ) and range of noise values .red points ( symbols ) show simulation data for the peak - end model while green points ( symbols ) show comparative results for peak dependence only but with noise . 
both cases calculated from trajectories and time steps.,scaledwidth=80.0% ] with the preceding paragraph in mind , we argue that our work on the peak memory model also has implications for the peak - end case .specifically , we have found that the effect of noise / disruption in the model is dependent on the properties of the utility distribution .using the characteristic largest value to cast the problem as an effective plya process provides direct information on the long - time dynamics ( in particular the stability of fixed points in the system ) but , in order to quantify the observed variance , one also needs to consider the distribution of maximum values .the examples we have shown , together with general arguments rooted in extreme value theory , reveal three qualitatively different classes of behaviour : * for utility distributions with * heavy tails * each random walker ( agent ) eventually becomes frozen in a state corresponding to one or other step direction ( choice ) , regardless of the level of noise .* for * bounded * utility distributions each agent samples both choices approximately equally in the long - time limit , again regardless of the level of noise . * for utility distributions with* exponential tails * the situation is more subtle for an decay we find a transition between frozen and mixed states at ; in other cases there is a weak logarithmic dependence on the time .furthermore , for the special case of decay , even in the high - noise regime there is a finite variance around the mixed state which can be attributed to fluctuations in the maximum values .significantly , this implies that only for exponential - tailed utility distributions can one hope to increase the switching between decisions on intermediate / long timescales simply by increasing the noise . from a statistical physics point of view, it would be interesting in the exponential case to characterize the phase transition and scaling exponents at , e.g. , by calculating the correlation function .this latter is also relevant in the opinion dynamics context as it quantifies how sensitive the long - time behaviour is to the first step and thus the extent to which a particular one of the two asymmetric fixed points might be favoured by a small initial perturbation .preliminary simulations suggest that the correlation function in the full model converges to zero for and , for , decreases more strongly with than in the artificial model of [ a : det ] ( presumably due to the added fluctuations reducing the effect of the initial conditions ) .however , a more detailed analysis with finite - time scaling along the lines of is deferred to a future publication . other extensions ofthe work might include considering coupled random walkers ( modelling collective rather than individual memory ) or peak effects in other opinion dynamics models , such as contact processes and voter models .the peak - end rule itself can also be critiqued ( see , e.g. , the discussion in ) and realistic refinements such as the slow fading of peak memories in the distant past could be incorporated into the modelling .however , we believe that our current work represents an important first step beyond simply averaging over the whole past experience or just recalling the most recent history . 
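for completeness , here is a sketch of the peak - end variant just described , in which the score fed into the logit rule for each direction is the mean of the remembered peak and the utility of the most recent step in that direction ; everything else is as in the peak - only walker . the dictionary - based bookkeeping and the parameter values are our own illustrative choices .

```python
import numpy as np

def peak_end_walker(T=1.0, beta=1.0, n_steps=10_000, seed=0):
    """peak-end variant: logit scores are (peak + most recent utility) / 2."""
    rng = np.random.default_rng(seed)
    peak = {"R": 0.0, "L": 0.0}      # remembered maxima per direction
    last = {"R": 0.0, "L": 0.0}      # utility of the most recent step per direction
    n_right = 0
    for _ in range(n_steps):
        score_r = 0.5 * (peak["R"] + last["R"])
        score_l = 0.5 * (peak["L"] + last["L"])
        p_right = 1.0 / (1.0 + np.exp(-(score_r - score_l) / T))
        d = "R" if rng.random() < p_right else "L"
        eta = rng.exponential(beta)
        peak[d] = max(peak[d], eta)
        last[d] = eta
        n_right += d == "R"
    return 2.0 * n_right / n_steps - 1.0

print([round(peak_end_walker(T=t), 2) for t in (0.25, 0.5, 1.0, 2.0)])
```

comparing these velocities with peak - only runs at twice the noise gives a quick check of the approximate equivalence suggested by the simulation results quoted above .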
although our analytical calculations thus far have been carried out in the framework of a specific toy model they highlight more generally the possible role of ( experienced or remembered ) utility distributions in maintaining and controlling behaviour . in particular ,if agents are encouraged to reflect on their own experiences with a view to possibly modifying future choices then the outcome could be subtlely dependent on any numerical scale offered for reflection , e.g. , whether or not it has a fixed upper bound .there is much scope here for future interdisciplinary work linking with current understanding in psychology and economics .this research was carried out as part of the project _ reflect a feasibility study in experienced utility and travel behaviour _ funded by research councils uk ( ep / j004715/1 ) . the culmination of the work also benefited from the kind hospitality of the galileo galilei institute for theoretical physics ( ggi ) florence and the national institute for theoretical physics ( nithep ) stellenbosch . the author wishes to thank jennifer roberts and hugo touchette for many helpful discussions as well as comments on a draft manuscript .raw data underlying the figures shown here is stored in the queen mary research online repository ( http://dx.doi.org/10.17636/01007419 ) .here we present a brief analysis of a simplified model in which at every time step is deterministically given by ( the characteristic largest value of an exponential distribution with trials ) . in this case , the future hopping rates are completely determined by the past velocity so previous work on generalized plya urn models , etc. should be directly applicable .the standard deviation against noise in this case is shown in figure [ f : prange ] which is to be compared with figure [ f : range ] in the main text .but for simplified model ( still with ) . , scaledwidth=80.0% ] for the artificial model , the standard deviation _ does _ appear to converge to unity for and zero for as predicted by the analysis of typical trajectories outlined in section [ ss : method ] . in figure[ f : decay ] we examine on log - log scale the limiting behaviour for selected points in the high - noise regime and clearly see a power - law decay .in fact , the asymptotic behaviour of the variance can be predicted on general grounds to depend on the slope at the stable fixed point ( cf . for the continuous - time case ) .specifically , as found in other models , one anticipates a dynamical phase transition at with diffusive fluctuations ( i.e. , decay of the velocity standard deviation ) for and superdiffusive behaviour ( with decay ) for . setting , the resulting predictions indeed fit the simulation data very well ( with logarithmic corrections expected at the dynamical phase transition itself ) .note that this provides a quantitative explanation for the observed slow convergence close to the transition point ( ) which may also be relevant in the full model . 
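the artificial model of this appendix is easy to reproduce numerically : instead of tracking realized maxima , the score for each direction is set deterministically to the characteristic largest value after the corresponding number of steps , taken here as log(1 + n) for an exponential utility of unit mean ( the ' + 1 ' only avoids log 0 at the start and is our own regularization choice , as are the parameter values ) .

```python
import numpy as np

def simplified_model_std(T, n_steps=5000, n_traj=300, seed=0):
    """appendix-A style model: maxima replaced by their characteristic largest values."""
    rng = np.random.default_rng(seed)
    n_r = np.zeros(n_traj)
    for t in range(n_steps):
        n_l = t - n_r
        m_r = np.log1p(n_r)          # characteristic largest value ~ ln(n) for exp(1)
        m_l = np.log1p(n_l)
        p_r = 1.0 / (1.0 + np.exp(-(m_r - m_l) / T))
        n_r += rng.random(n_traj) < p_r
    v = 2.0 * n_r / n_steps - 1.0
    return float(v.std())

print({T: round(simplified_model_std(T), 2) for T in (0.5, 1.0, 2.0)})
```

in contrast to the full model , the standard deviation here should drift towards zero in the high - noise regime as the run length is increased , which is the behaviour the power - law fits above quantify .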
[ figure f : decay : against time for random walkers in simplified model ( ) at selected noise values ( trajectories in each case ) . points are simulation data for ( top to bottom ) : . black solid lines are fits corresponding to power laws with negative exponent ; logarithmic corrections are expected at . ] another important quantity for generalized plya processes such as this is the correlation function between the direction of the first step and the step . the long - time limit of this quantity plays the role of an order parameter and simulation results in the present case ( not shown ) confirm a continuous phase transition at with converging to zero for all . to understand the fluctuations of the velocity , one needs to take account of the fact that each value of persists for a number of time steps before being replaced by a larger one . in this appendix , we pursue such an approach with an exponential utility distribution to obtain an approximation for the long - time limit of the standard deviation in the high - noise regime . we let be the time at which the last record occurred ( i.e. , the last change in either or up to the current time ) and note that , by definition , the value of is unchanged for . in the exponential case given by and typical trajectories should then obey the _ stochastic _ mapping where and are random variables and the second term in the numerator gives the expected displacement since the last record . in the case where and for all time ( i.e. , or updated at every step with no fluctuations ) we recover the deterministic mapping of with . in a slight abuse of notation , we denote a typical trajectory by even in the present stochastic case and argue that it is the fluctuations of this trajectory which lead to the long - time variance of . for large times we can approximate and ( time - averaged velocity changes slowly ) which allows us to write with the fraction . to proceed further we then make three key assumptions : 1 . [ a : linear ] and are sufficiently small that we can approximate the term by a linear function . 2 . [ a : uniform ] the right steps ( and therefore also the left steps ) are uniformly distributed throughout the trajectory . 3 . [ a : indep ] , and are mutually independent . all three of these assumptions are expected to fail as approaches 1 but they do facilitate analytical progress for . first , we use assumption ( [ a : linear ] ) to make a linear expansion and trivially rearrange to obtain where for convenience we have rescaled to which has a standard logistic distribution with variance . to obtain the distribution of , we first consider separately the fractional times and for the last records corresponding to right and left steps respectively . each of the previous steps is equally likely to have produced the maximum value so , with assumption ( [ a : uniform ] ) , we have in the long - time limit . then , by straightforward calculation , is governed by a triangular distribution with mode 1 . now , in the long - time limit , we expect the distribution of to be the same as that of , both characterized by standard deviation .
hence using the independence assumption ( [ a : indep ] ) , together with the symmetry of trajectories around zero , we obtain where the functions and are given by expectations with respect to the distribution of :

$$\begin{aligned} p(x) &= x\left[-2 + 9x - 6x^2 + 6x(x-1)^2\ln\left(\frac{x}{x-1}\right)\right] , \\ q(x) &= e\left[\left(\frac{1-\rho_\tau}{x-1+\rho_\tau}\right)^2\right] = 1 - 6x + 2x(3x-2)\ln\left(\frac{x}{x-1}\right) . \end{aligned}$$

rearranging and substituting in for yields the final approximation the expression in clearly diverges at but we can estimate its range of applicability by considering assumption ( [ a : linear ] ) . specifically , approximates to within 10% for so , since and are fairly strongly correlated , we require which is satisfied for . indeed is seen to provide a reasonable approximation to the simulation data in this regime ( cf . figure [ f : range ] ) . the small remaining discrepancy is probably mainly due to the failure of assumption ( [ a : indep ] ) ; in particular , is not strictly independent of .
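two distributional facts used in this appendix are easy to check in a stripped - down monte carlo : for i.i.d. draws the index of the maximum is uniform over the sample , so a fractional record time is approximately uniform on ( 0 , 1 ] , and the later of two independent such times has the triangular density 2x with mode 1 . the sample sizes below are arbitrary illustrative choices .

```python
import numpy as np

rng = np.random.default_rng(1)
n, samples = 1000, 20000

# fractional position of the maximum among n i.i.d. exponential draws
idx = rng.exponential(size=(samples, n)).argmax(axis=1)
rho = (idx + 1) / n                                             # approximately uniform on (0, 1]
rho_last = np.maximum(rho[: samples // 2], rho[samples // 2:])  # later of two record times

print("mean of rho (uniform -> 0.5):", round(rho.mean(), 3))
print("mean of later record time (triangular -> 2/3):", round(rho_last.mean(), 3))
print("std of later record time (-> 1/sqrt(18) ~ 0.236):", round(rho_last.std(), 3))
```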
|
motivated by the psychological literature on the `` peak - end rule '' for remembered experience , we perform an analysis within a random walk framework of a discrete choice model where agents future choices depend on the peak memory of their past experiences . in particular , we use this approach to investigate whether increased noise / disruption always leads to more switching between decisions . here extreme value theory illuminates different classes of dynamics indicating that the long - time behaviour is dependent on the scale used for reflection ; this could have implications , for example , in questionnaire design .
|
the astonishing amount of information available on the web and the highly variable quality of its content generate the need for an absolute measure of importance for web pages , that can be used to improve the performance of web search .link analysis algorithms such as the celebrated _ pagerank _ , try to answer this need by using the link structure of the web to assign authoritative weights to the pages .pagerank s approach is based on the assumption that links convey human endorsement .for example , the existence of a link from page to page in fig .[ tiny_web](a ) is seen as a testimonial of the importance of page .furthermore , the amount of importance conferred to page is proportional to the importance of page and inversely proportional to the number of pages links to . in their original paper ,page et al . imagined of a _ random surfer _ who , with probability follows the links of a web page , and with probability jumps to a different page uniformly at random .then , following this metaphor , the overall importance of a page was defined to be equal to the fraction of time this random surfer spends on it , in the long run . formulating pagerank s basic idea with a mathematical model ,involves viewing the web as a directed graph with web pages as vertices and hyperlinks as edges . given this graph, we can construct a _ row - normalized hyperlink matrix _ , whose element {uv} ] .we use to denote the matrix having vector on its diagonal , and zeros elsewhere .we use calligraphic letters to denote sets ( e.g. , ) . ] .each set is referred to as an * ncd block * , and its elements are considered related according to a given criterion , chosen for the particular ranking problem ( e.g. the partition of the set of web pages into websites ) .we define to be the set of _ proximal _ nodes of , i.e the union of the ncd blocks that contain and the nodes it links to .formally , the set is defined by : where is used to denote the unique block that includes node .finally , denotes the number of different blocks in . hyperlink matrix . : : the hyperlink matrix , as in the standard pagerank model , is a row normalized version of the adjacency matrix induced by the graph , and its element is defined as follows : {uv } \triangleq \left\ { \begin{array}{l l } \frac{1}{d_u } & \quad \mbox{if }\\ 0 & \quad \mbox{otherwise}\\ \end{array } \right.\ ] ] + matrix is assumed to be a row - stochastic matrix .the matter of dangling nodes ( i.e. nodes with no outgoing edges ) is considered fixed through some sort of stochasticity adjustment .inter - level proximity matrix .: : the inter - level proximity matrix is created to depict the interlevel connections between the nodes in the graph .in particular , each row of matrix denotes a probability vector , that distributes evenly its mass between the blocks of , and then , uniformly to the included nodes of each block .formally , the element of matrix , that relates the node with node , is defined as {uv}\triangleq \left\ { \begin{array}{l l } \frac{1}{n_u|\mathcal{d}_{(v)}| } & \quad \mbox{if }\\ 0 & \quad \mbox{otherwise}\\ \end{array } \right .\label{def : m}\ ] ] from the definition of the ncd blocks and the proximal sets , it is clear that whenever the number of blocks is smaller than the number of nodes in the graph , i.e. 
, matrix is necessarily low - rank ; in fact , a closer look at the definitions ( [ def : proximal ] ) and ( [ def : m ] ) above , suggests that matrix admits a very useful factorization , which was shown in to ensure the tractability of the resulting model .in particular , matrix can be expressed as a product of 2 extremely sparse matrices , and , defined below .+ matrix is defined as follows : where denotes a row vector in whose elements are all 1 .now , using the diagonal matrix : and a row normalized matrix , whose rows correspond to nodes and columns to blocks and its elements are given by {ij } \triangleq \left\ { \begin{array}{l l } \frac{1}{n_u } & \quad \mbox{if }\\ 0 & \quad \mbox{otherwise}\\ \end{array } \right .\label{matrixgamma}\ ] ] we can define the matrix as follows : + using ( [ matrixa ] ) and ( [ matrixr ] ) , it is straight forward to verify that : as pointed out by the authors , this factorization can lead to significant advantages in realistic scenarios , in terms of both storage and computability ( see , section 3.2.1 ) . teleportation matrix .: : finally , ncdawarerank model also includes a teleportation matrix , where , such that .the introduction of this matrix , can be seen as a remedy to ensure that the underlying markov chain , corresponding to the final matrix , is irreducible and aperiodic and thus has a unique positive stationary probability distribution . the resulting matrixwhich we denote is expressed by : parameter controls the fraction of importance delivered to the outgoing edges and parameter controls the fraction of importance that will be propagated to the proximal nodes . in order to ensure the irreducibility and aperiodicity of the final stochastic matrix in the general case , must be less than .this leaves of importance scattered throughout the graph through matrix .although in the general case the teleportation matrix is required to ensure the final stochastic matrix produces a well - defined ranking vector , in this section we show that ncdawarerank model carries the possibility of discarding matrix altogether .before we proceed to the proof of our main result ( section [ subsec : primitivity ] ) we present here the necessary preliminary definitions and theorems . an non - negative matrix is called _ irreducible _ if for every pair of indices ] . the class of all non - negative irreducible matrices is denoted .the _ period _ of an index ] . for an irreducible matrix, the period of every index is the same and is referred to as the period of the matrix .an irreducible matrix with period , is called _primitive_. the important subclass of all primitive matrices will be denoted .finally , we give here , without proof , the following fundamental result of the theory of non - negative matrices .suppose is an non - negative primitive matrix .then , there exists an eigenvalue such that : 1 . is real and positive , 2 . with be associated strictly positive left and right eigenvectors , 3 . for any eigenvalue 4 .the eigenvectors associated with are unique to constant multiples , 5 . if and is an eigenvalue of , then .moreover , 6 . is a simple root of the characteristic equation of .mathematically , in the standard pagerank model the introduction of the teleportation matrix can be seen as a _primitivity adjustment _ of the final stochastic matrix .indeed , the hyperlink matrix is typically reducible , so if the teleportation matrix had not existed the pagerank vector would not be well - defined . 
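to make the construction concrete , the sketch below assembles the row - normalized hyperlink matrix , the inter - level proximity matrix and a uniform teleportation matrix for a small toy graph with a three - block partition , and obtains the ranking vector by power iteration . the toy adjacency matrix , the partition and the weights 0.7 , 0.2 , 0.1 are arbitrary illustrative choices , and for clarity the proximity matrix is built entry by entry rather than through the sparse factorization .

```python
import numpy as np

# toy directed graph on 6 nodes and a partition into 3 ncd blocks (illustrative)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [1, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
blocks = [{0, 1}, {2, 3}, {4, 5}]
n = A.shape[0]
block_of = {u: i for i, b in enumerate(blocks) for u in b}

H = A / A.sum(axis=1, keepdims=True)      # row-normalized hyperlink matrix

# inter-level proximity: split mass evenly over the proximal blocks of u,
# then uniformly over the nodes inside each such block
M = np.zeros((n, n))
for u in range(n):
    proximal = {block_of[u]} | {block_of[v] for v in np.flatnonzero(A[u])}
    for i in proximal:
        for v in blocks[i]:
            M[u, v] = 1.0 / (len(proximal) * len(blocks[i]))

E = np.full((n, n), 1.0 / n)              # uniform teleportation matrix
eta, mu, gamma = 0.7, 0.2, 0.1            # illustrative weights summing to one
P = eta * H + mu * M + gamma * E

pi = np.full(n, 1.0 / n)                  # power iteration for the stationary vector
for _ in range(200):
    pi = pi @ P
print(pi.round(4), "sum:", round(pi.sum(), 6))
```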
in the general case ,the same holds for ncdawarerank , as well .however , for suitable decompositions of the underlying graph , matrix opens the door for achieving primitivity without resorting to the uninformative teleportation matrix . here, we show that this `` suitability '' of the decompositions can , in fact , be reflected on the properties of a low dimensional * indicator matrix * defined below : for every decomposition , we define an indicator matrix designed to capture the inter - block relations of the underlying graph . concretely , matrix is defined as follows : where are the factors of the inter - level proximity matrix .clearly , whenever {ij} ] for every in ] holds . or equivalently there exists a positive integer such that is a positive matrix ( see ) .this can be seen easily using the factorization of matrix given above . in particular , since , there exists a positive integer such that .now , if we choose , we get : however , matrix is positive and since every row of matrix and every column of matrix are by definition non - zero , the final matrix , , is also positive .thus , , and the proof is complete .now , in order to get the primitivity of the final stochastic matrix , we use the following useful lemma which shows that any convex combination of stochastic matrices that contains at least one primitive matrix , is also primitive .let be a primitive stochastic matrix and stochastic matrices , then matrix where and such that is a primitive stochastic matrix .[ lemma1 ] clearly matrix is stochastic as a convex combination of stochastic matrices ( see ) . for the primitivity part it suffices to show that there exists a natural number , , such that can be seen very easily .in particular , since matrix , there exists a number such that every element in is positive .consider the matrix : now letting , we get that every element of matrix is strictly positive , which completes the proof . as we have seen , when , matrix is primitive .furthermore , and are by definition stochastic .thus , lemma [ lemma1 ] applies and we get that the ncdawarerank matrix , is also primitive . in conclusion , we have shown that : which proves the reverse direction of the theorem . to prove the forward direction( i.e. ) it suffices to show that whenever matrix is reducible , matrix is also reducible ( and thus , not primitive ) .first observe that when matrix is reducible the same holds for matrix .the reducibility of the indicator matrix implies the reducibility of the inter - level proximity matrix .concretely , assume that matrix is reducible .then , there exists a permutation matrix such that has the form where are square matrices .notice that a similar block upper triangular form can be then achieved for matrix .in particular , the existence of the block zero matrix in ( [ rel : blockupperdiagonal ] ) , together with the definition of matrices ensures the existence of a set of blocks , that have the property none of their including nodes to have outgoing edges to the rest of the nodes in the graph .thus , organizing the rows and columns of matrix such that these nodes are assigned the last indices , results in a matrix that has a similarly block upper triangular form .this makes reducible too .thus , we only need to show that the reducibility of matrix implies the reducibility of matrix also. 
this follows from the fact that by definition $[\mathbf{m}]_{ij}=0 \implies [\mathbf{h}]_{ij}=0$ . so , the permutation matrix that brings in the form of ( [ rel : blockupperdiagonal ] ) , has exactly the same effect on matrix . similarly the final stochastic matrix has the same block upper triangular form as a sum of matrices and . this makes matrix reducible and hence non - primitive . therefore , we have shown that , which is equivalent to putting everything together , we see that both directions of our theorem have been established . thus we get , and our proof is complete . now , when the stochastic matrix is primitive , from the perron - frobenius theorem it follows that its largest eigenvalue which is equal to 1 is unique and it can be associated with strictly positive left and right eigenvectors . therefore , under the conditions of theorem [ theorem : primitivityconditions ] , the ranking vector produced by the ncdawarerank model which is defined to be the stationary distribution of the stochastic matrix : ( a ) is uniquely determined as the ( normalized ) left eigenvector of that corresponds to the eigenvalue 1 and , ( b ) its support includes every node in the underlying graph . the following corollary summarizes the result . when the indicator matrix is irreducible , the ranking vector produced by ncdawarerank with , where positive real numbers such that holds , denotes a well - defined distribution that assigns positive ranking to every node in the graph . in our discussion so far , we assumed that the block decomposition defines a partition of the underlying space . however , in many realistic ranking scenarios it would be useful to be able to allow the blocks to overlap . for example , if one wants to produce top n lists of movies for a ranking - based recommender system , using ncdawarerank , a very intuitive criterion for decomposition would be the one depicting the categorization of movies into genres . of course , such a decomposition naturally results in overlapping blocks , since a movie usually belongs to more than one genre . fortunately , the factorization of the inter - level proximity matrix paves the way towards a straightforward generalization that inherits all the useful mathematical properties and computational characteristics of the standard ncdawarerank model . in particular , it suffices to modify the definition of decompositions as indexed families of non - empty sets that collectively cover the underlying space , i.e. and to change slightly the definitions of the : * proximal sets : * inter - level proximity matrix : $$[\hat{\mathbf{m}}]_{uv}\triangleq \sum_{\hat{\mathcal{d}}_k \in \hat{\mathcal{m}}_{u},\, v \in \hat{\mathcal{d}}_k}\frac{1}{n_{u}\lvert \hat{\mathcal{d}}_k\rvert}$$ * factor matrices : we first define a matrix , whose element is 1 , if and zero otherwise , and a matrix , whose element is 1 if and zero otherwise . then , if , denote the row - normalized versions of and respectively , matrix can be expressed as : notice that the inter - level proximity matrix above is a well - defined stochastic matrix , for every possible decomposition . its stochasticity follows immediately from the row normalization of matrices , together with the fact that neither matrix nor matrix have zero rows . indeed , the existence of a zero row in matrix implies which contradicts the definition of ; similarly the existence of a zero row in matrix contradicts the definition of the ncd blocks which are defined to be non - empty .
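the primitivity criterion can be tested directly at the block level . under one natural reading of the indicator matrix , consistent with the factorization of the proximity matrix , its ( i , j ) entry is nonzero exactly when some node of block i has block j among its proximal blocks ; the decomposition then supports teleportation - free ranking whenever this small matrix is irreducible . the sketch below checks this for the toy graph and partition used earlier ; the same idea carries over to overlapping decompositions , although the node - membership bookkeeping would then need to change .

```python
import numpy as np

def indicator_matrix(A, blocks):
    """block-level indicator: entry (i, j) is nonzero if some node of block i
    has block j among its proximal blocks (one reading of the definition)."""
    block_of = {u: i for i, b in enumerate(blocks) for u in b}
    K = len(blocks)
    W = np.zeros((K, K))
    for i, b in enumerate(blocks):
        for u in b:
            proximal = {block_of[u]} | {block_of[v] for v in np.flatnonzero(A[u])}
            for j in proximal:
                W[i, j] = 1.0
    return W

def is_irreducible(W):
    """a nonnegative K x K matrix is irreducible iff (I + W)^(K-1) is positive."""
    K = W.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(K) + W, K - 1) > 0))

A = np.array([[0, 1, 1, 0, 0, 0], [1, 0, 0, 1, 0, 0], [0, 0, 0, 1, 1, 0],
              [1, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1], [0, 0, 0, 1, 1, 0]], float)
blocks = [{0, 1}, {2, 3}, {4, 5}]
W = indicator_matrix(A, blocks)
print(W)
print("block-level indicator irreducible:", is_irreducible(W))
```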
also notice that our primitivity criterion given by theorem [ theorem : primitivityconditions ] , applies in the overlapping case too , since our proof made no assumption for mutual exclusiveness for the ncd - blocks .in fact , it is intuitively evident that overlapping blocks promote the irreducibility of the indicator matrix .in this work , using an approach based on the theory of non - negative matrices , we study ncdawarerank s inter - level proximity model and we derive necessary and sufficient conditions , under which the underlying decomposition alone could result in a well - defined ranking vector eliminating the need for uniform teleportation .our goals here were mainly theoretical . however , our first findings in applying this `` no teleportation '' approach in realistic problems suggest that the conditions for primitivity are not prohibitively restrictive , especially if the criterion behind the definition of the decomposition implies overlapping blocks .a very exciting direction we are currently pursuing involves the spectral implications of the absence of the teleportation matrix . in particular, a very interesting problem would be to determine bounds of the subdominant eigenvalue of the stochastic matrix , when the indicator matrix is irreducible .another important direction would be to proceed to randomized definitions of blocks that satisfy the primitivity criterion and to test the effect on the quality of the ranking vector . in conclusion , we believe that our results , suggest that the ncdawarerank model presents a promising approach towards generalizing and enriching the standard random surfer model , and also carries the potential of providing an intuitive alternative teleportation scheme to the many applications of pagerank in hierarchical or otherwise specially structured graphs .avrachenkov , k. , litvak , n. , pham , k. : distribution of pagerank mass among principle components of the web . in : bonato ,a. , chung , f. ( eds . ) algorithms and models for the web - graph , lecture notes in computer science , vol .4863 , pp .springer berlin heidelberg ( 2007 ) , http://dx.doi.org/10.1007/978-3-540-77004-6_2 baeza - yates , r. , boldi , p. , castillo , c. : generic damping functions for propagating importance in link - based ranking .internet mathematics 3(4 ) , 445478 ( 2006 ) , http://dx.doi.org/10.1080/15427951.2006.10129134 boldi , p. : totalrank : ranking without damping .in : special interest tracks and posters of the 14th international conference on world wide web .. 898899 .www 05 , acm , new york , ny , usa ( 2005 ) , http://doi.acm.org/10.1145/1062745.1062787 boldi , p. , santini , m. , vigna , s. : a deeper investigation of pagerank as a function of the damping factor . in : frommer ,a. , mahoney , m.w . ,szyld , d.b .web information retrieval and linear algebra algorithms , 11.02 . - 16.02.2007 .dagstuhl seminar proceedings , vol . 07071 .internationales begegnungs- und forschungszentrum fr informatik ( ibfi ) , schloss dagstuhl , germany ( 2007 ) , http://drops.dagstuhl.de/opus/volltexte/2007/1072 eiron , n. , mccurley , k.s . , tomlin , j.a .: ranking the web frontier . in : proceedings of the 13th international conference on world wide web .www 04 , acm , new york , ny , usa ( 2004 ) , http://doi.acm.org/10.1145/988672.988714 nikolakopoulos , a.n . ,garofalakis , j.d .: ncdawarerank : a novel ranking method that exploits the decomposable structure of the web . 
in : proceedings of the sixth acm international conference on web search and data mining .wsdm 13 , acm , new york , ny , usa ( 2013 ) , http://doi.acm.org/10.1145/2433396.2433415 nikolakopoulos , a.n . ,garofalakis , j.d .: ncdrec : a decomposability inspired framework for top - n recommendation . in : 2014ieee / wic / acm international joint conferences on web intelligence ( wi ) and intelligent agent technologies ( iat ) , warsaw , poland , august 11 - 14 , 2014 - volume ii .. 183190 .ieee ( 2014 ) , http://dx.doi.org/10.1109/wi-iat.2014.32 nikolakopoulos , a.n . ,kouneli , m.a . ,garofalakis , j.d . : hierarchical itemspace rank : exploiting hierarchy to alleviate sparsity in ranking - based recommendation .neurocomputing 163(0 ) , 126 136 ( 2015 ) , http://www.sciencedirect.com/science/article/pii/s0925231215002180
|
in the standard _ random surfer model _ , the teleportation matrix is necessary to ensure that the final pagerank vector is well - defined . the introduction of this matrix , however , results in serious problems and imposes fundamental limitations to the quality of the ranking vectors . in this work , building on the recently proposed _ ncdawarerank _ framework , we exploit the decomposition of the underlying space into blocks , and we derive easy to check necessary and sufficient conditions for _ random surfing without teleportation_. analysis , ranking , pagerank , teleportation , non - negative matrices , decomposability
|
gene expression , a fundamental activity in the living cell , is a complex sequence of events resulting in protein synthesis . in the case of deterministic time evolution ,the temporal rate of change of protein concentration is given by the difference between the rates of synthesis and decay of proteins .when these rates balance each other the net rate of change is zero and one obtains a steady state described by a fixed protein concentration . in the case of positively regulated gene expression, the dynamics may result in bistability , i.e. , the existence of two stable steady states for the same parameter values .bistability , in general , is an outcome of dynamics involving positive feedback and sufficient nonlinearity .one way of achieving the latter condition is when multiple bindings of regulatory molecules occur at the promoter region of the gene ( cooperativity in regulation ) or when the regulatory proteins form multimers like dimers and tetramers which then bind the specific regions of the dna .the simplest example of bistability in gene expression is that of a gene the protein product of which promotes its own synthesis .positive autoregulation occurs via the binding of protein dimers at the promoter region resulting in the activation of gene expression .experimental evidence of bistability has been obtained in a wide range of biological systems , e. g. , the lysis - lysogeny genetic circuit in bacteriophage , the lactose utilization network in _ , the network of coupled positive feedback loops governing the transition to the mitotic phase of the eukaryotic cell cycle , the development of competence in the soil bacteria _b. subtilis _ and more recently the activation of the stringent response in mycobacteria subjected to nutrient depletion .a number of synthetic circuits have also been constructed which exhibit bistable gene expression under appropriate conditions . recently , tan et al . have proposed a new mechanism by which bistability arises , termed emergent bistability , in which a noncooperative positive feedback circuit combined with circuit induced growth retardation of the embedding cell give rise to two stable expression states .the novel type of bistability was demonstrated in a synthetic gene circuit consisting of a single positive feedback loop in which the protein product x of a gene promotes its own synthesis in a noncooperative manner .if the circuit is considered in isolation , one obtains monostability , i.e. , a single stable steady state . in the actual system ,the production of protein x has a retarding effect on the growth of the host cell so that the circuit function is linked to cellular growth .the protein decay rate has , in general , two components , the natural degradation rate and the dilution rate due to cell growth .since the latter is reduced on increased protein production , a second positive feedback loop is effectively generated , as illustrated in fig .an increased synthesis of protein x leads to a lower dilution rate , i.e. , a greater accumulation of the protein which in turn promotes a higher amount of protein production . tan et al . 
developed a mathematical model to capture the essential dynamics of the system of two positive feedback loops and showed that in a region of parameter space bistable gene expression is possible .the rate equation governing the dynamics of the system is given by where the variables and the parameters are nondimensionalized with being a measure of the protein amount .the parameter is the rate constant associated with basal gene expression , represents the effective rate constant for protein synthesis , denotes the maximum dilution rate due to cell growth and is a parameter denoting the ` metabolic burden ' .when limited resources are available to the cell , the synthesis of proteins imposes a metabolic cost , i.e. , the availability of resources for cellular growth is reduced .the form of the nonlinear decay rate ( the second term in eq .( 1 ) ) is arrived at using the monod model which takes into account the effect of resource or nutrient limitation on the growth of a cell population .an alternative explanation for the origin of the nonlinear protein decay rate is based on the fact that the synthesis of a protein may retard cell growth if it is toxic to the cell .the experimental signature of bistability lies in the coexistence of two subpopulations with low and high protein levels . in the case of deterministic dynamics, the cellular choice between two stable expression states is dictated by the previous history of the system .in this picture , if the cells in a population are in the same initial state , the steady state should be the same in each cell .the experimental observation of population heterogeneity in the form of two distinct subpopulations can be explained once the stochastic nature of gene expression is taken into account .bistable gene expression is characterized by the existence of two stable steady states separated by an unstable steady state .a transition from the low to the high expression state , say , is brought about once the fluctuations associated with the low expression level cross the threshold set by the protein concentration in the unstable steady state .noise - induced transitions between the stable expression states give rise to a bimodal distribution in the protein levels . in a landscape picture, the two stable expression states correspond to the two minima of an expression potential and the unstable steady state is associated with the top of the barrier separating the two valleys .a number of examples is now known which illustrate the operation of stochastic genetic switches between well - defined expression states .stochasticity in gene expression has different possible origins , both intrinsic and extrinsic .the biochemical events ( reactions ) involved in gene expression are probabilistic in nature giving rise to fluctuations ( noise ) around mean mrna and protein levels .the randomness in the timing of a reaction arises from the fact that the reactants have to collide with each other and the energy barrier separating the reactants from the product state has to be crossed for the occurrence of the reaction .the stochastic time evolution of the system can be studied using the master equation ( me ) approach .the me is a differential equation describing the temporal rate of change of the probability that the system is in a specific state at time t. the state at time t is described in terms of the number of biomolecules ( mrnas , proteins etc . )present in the system at t. 
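before turning to the stochastic description, the deterministic part of the problem can be explored numerically. the exact nondimensional form of eq. (1) is not reproduced in this text, so the python sketch below uses an assumed stand-in built from the ingredients described above: basal synthesis a, non-cooperative feedback k x/(1+x), a growth-dilution term d x/(1+eps x) with metabolic-burden parameter eps, and a linear degradation term; all parameter values are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def f(x, a=0.01, k=6.0, d=10.0, eps=17.0):
    """assumed stand-in for the rate function of eq. (1); values are illustrative."""
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def steady_states(xmax=20.0, n=200001, **pars):
    """bracket sign changes of f on a fine grid, refine with brentq, classify stability."""
    xs = np.linspace(1e-9, xmax, n)
    fs = f(xs, **pars)
    out = []
    for i in np.where(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]:
        x_star = brentq(lambda x: f(x, **pars), xs[i], xs[i + 1])
        slope = (f(x_star + 1e-6, **pars) - f(x_star - 1e-6, **pars)) / 2e-6
        out.append((round(x_star, 5), "stable" if slope < 0.0 else "unstable"))
    return out

print(steady_states())
```

for these illustrative numbers the routine returns a low and a high stable state separated by an unstable one, i.e., the bistability discussed above; for this stand-in, setting eps = 0 leaves a single positive fixed point, mirroring the statement that bistability disappears for a vanishing metabolic burden.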
the solution of the me gives a knowledge of the probability distribution the lower two moments of which yield the mean and the variance .one is often interested in the steady state solution of the me , i.e. , when the temporal rate of change of the probability is zero .the me has exact , analytical solutions only in the cases of simple biochemical kinetics . a rigorous simulation technique based on the gillespie algorithm provides a numerical solution of the me .the computational cost in terms of time and computer memory can , however , become prohibitive as the complexity of the system studied increases . two approximate methods for the study of stochastic processesare based on the langevin and fokker - planck ( fp ) equations .these equations are strictly valid in the case of large numbers of molecules so that a continuous approximation is justified and the system state is defined in terms of concentrations of molecules rather than numbers . in fact , both the equations are obtained from the me in the large molecular number limit . in the langevin equation ( le )additive and multiplicative stochastic terms are added to the rate equation governing the deterministic dynamics .the corresponding fp equation is a rate equation for the probability distribution .noise in the form of random fluctuations has a non - trivial effect on the gene expression dynamics involving positive feedback . in this paper, we investigate the effects of additive and multiplicative noise on the dynamics , described by eq .( 1 ) , with special focus on emergent bistability . in sec .ii , we first present the bifurcation analysis of eq .( 1 ) and then describe the general forms of the langevin and fp equations as well as the expression for the steady state probability distribution of the protein levels in an ensemble of cells . sec .iii contains the results of our study when only additive as well as when both additive and multiplicative types of noise are present . in sec .iv , the mean first passage times for escape over the potential barrier are computed . in sec .v , we discuss the significance of the results obtained and also make some concluding remarks .- plane with and 17 in the successive plots .( inset ) portions of bifurcation diagrams amplified to indicate how a transition from the low to the high expression state can be brought about without passing through the region of bistability .the rate constant . ]in eq . ( 1 ) , the steady state condition results in bistability , i.e. , two stable gene expression states in specific parameter regions . fig .2 represents the bifurcation diagrams in the plane for different values of the metabolic cost parameter .the region of bistability is bounded by lines at which bifurcation from bistability to monostability occurs .the steady states in the region of bistability are the three physical solutions ( real and positive ) of the cubic polynomial equation obtained from eq .( 1 ) by putting . two of the solutions correspond to stable steady states and these are separated by an unstable steady state represented by the third solution . 
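the extent of the bistable region can be mapped in the same spirit by counting the stable roots of the cubic steady-state condition over a grid of parameter values; the rate function below is the same assumed stand-in for eq. (1) used above, and the scanned ranges are illustrative rather than those of the original study.

```python
import numpy as np
from scipy.optimize import brentq

def f(x, a, k, d, eps):
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def n_stable(a, k, d, eps, xmax=20.0, n=80001):
    """number of linearly stable fixed points of the assumed rate function."""
    xs = np.linspace(1e-9, xmax, n)
    fs = f(xs, a, k, d, eps)
    count = 0
    for i in np.where(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]:
        x0 = brentq(f, xs[i], xs[i + 1], args=(a, k, d, eps))
        if f(x0 + 1e-6, a, k, d, eps) < f(x0 - 1e-6, a, k, d, eps):
            count += 1
    return count

# crude ascii map of the bistable region ('B') in the (k, d) plane at fixed eps
a, eps = 0.01, 17.0
for d in np.linspace(4.0, 16.0, 7):
    row = "".join("B" if n_stable(a, k, d, eps) == 2 else "." for k in np.linspace(1.0, 12.0, 12))
    print(f"d = {d:5.1f} : {row}")
```

rerunning the scan for different values of eps gives a qualitative analogue of the successive panels of fig. 2.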
at the bifurcation point separating bistability from the monostable low expression state , the higher stable state solution merges with the unstable steady state solution .similarly , at the bifurcation point separating bistability from the monostable high expression state , the lower two solutions merge so that away from this point and towards the right in the - plane only the stable high expression state survives .the bifurcation analysis has been carried out using the software package mathematica .one notes that the extent of the region of bistability increases as the value of the parameter increases . when , i.e. , the net decay rate of the proteins has the form , there is no region of bistability in the parameter space . for , one can pass from a monostable low expression state across a region of bistability to a monostable high expression state by increasing and decreasing .one also observes that one can directly go from the low to the high expression state without passing through the region of bistability .the region of parameter space through which the bypass can occur is dependent on , diminishing in size with increasing values of .we next include appropriate noise terms in eq .( 1 ) to investigate the effects of noise on the deterministic dynamics .( 16 ) ) in the case of purely additive noise as the parameter is tuned from low to high values ( and ) .the other parameter values are , , and with the noise strength kept fixed at . *( b ) * steady state probability distribution , , for successive values of and . for each values of , is computed and displayed for three different noise strengths ( solid line ) , ( dashed line ) , ( dot - dashed line ) . ]the general stochastic formalism based on the langevin and fp equations for the steady state analysis of a bistable system is described in refs .we apply the formalism to investigate the effects of additive and multiplicative noise on emergent bistability , the governing equation of which is described in eq .we consider an one - variable le containing a multiplicative and an additive noise term: where and represent gaussian white noises with mean zero and correlations given by and are the strengths of the two types of noise and respectively and is the degree of correlation between them . the first term , , in eq .( 2 ) represents the deterministic dynamics . with the dynamics governed by eq .( 1 ) , the function is given by , the additive noise represents noise arising from an external perturbative influence or originating from some missing information embodied in the rate equation approximation .gene expression consists of the major steps of transcription and translation which are a complex sequence of biochemical events .regulation of gene expression as well as processes like cell growth have considerable influence on the gene expression dynamics . in many of the models of gene expression , some of the elementary processes ( say , transcription and translation as in the case of the model described by eq .( 1 ) ) are lumped together and an effective rate constant associated with the combined process ( e.g. , , the effective rate constant for protein synthesis in eq .. it is , however , expected that the rate constants fluctuate in time due to a variety of stochastic influences like fluctuations in the number of regulatory molecules and rna polymerases . in the le ,the fluctuations in the rate constants are taken into account through the inclusion of multiplicative noise terms like in eq .( 2 ) . 
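the langevin dynamics of eq. (2) can be integrated directly with an euler-maruyama scheme; a minimal sketch follows. the rate function is the assumed stand-in for eq. (1) introduced earlier and g1 is an assumed multiplicative coupling (here the one attached to the dilution term); note that plain euler-maruyama corresponds to the ito convention, so for a quantitative comparison with the stratonovich fokker-planck results below a drift correction d1*g1*g1' would have to be added.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, a=0.01, k=6.0, d=10.0, eps=17.0):       # assumed stand-in for eq. (1)
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def g1(x, eps=17.0):                              # assumed coupling of the multiplicative noise
    return x / (1.0 + eps * x)

def euler_maruyama(x0=0.002, t_end=2000.0, dt=0.01, D1=0.1, D2=0.05, lam=0.0):
    """integrate dx = f dt + g1 dW1 + dW2 with <dWi^2> = 2 Di dt and correlation lam."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        z1, z2 = rng.standard_normal(2)
        zc = lam * z1 + np.sqrt(1.0 - lam**2) * z2           # builds the requested correlation
        dW1 = np.sqrt(2.0 * D1 * dt) * z1
        dW2 = np.sqrt(2.0 * D2 * dt) * zc
        x[i + 1] = max(x[i] + f(x[i]) * dt + g1(x[i]) * dW1 + dW2, 0.0)   # crude reflection at 0
    return x

traj = euler_maruyama()
print(f"range of the trajectory: {traj.min():.4f} to {traj.max():.4f}")
```

histogramming long trajectories of this kind gives an independent check on the stationary distributions obtained below from the fokker-planck equation.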
in the present study, we consider two types of multiplicative noise associated with the rate constants (effective rate constant for protein synthesis) and (the maximum dilution rate). the rates vary stochastically, i.e., with in one case and with in the other case (see eq. (1)). as mentioned before, represents random fluctuations of the gaussian white-noise type. there are alternative versions of the le in which the noise terms added to the deterministic part have an explicit structure in terms of gene expression parameters. gillespie has shown how to derive the chemical le starting with the master equation describing the stochastic time evolution of a set of elementary reactions, with the noise terms depending on the number of molecules as well as the gene expression parameters. similarly, for an effective kinetic equation of the form, where is the number of molecules and, the synthesis and decay rates respectively, one can write the le as. while these alternative approaches have their own merit, the majority of stochastic gene expression studies based on the langevin formalism start with equations of the type shown in (2). the choice is dictated by the simplicity of the calculational scheme, with specific focus on the separate effects of additive and multiplicative noise (fluctuating rate constants) on the gene expression dynamics. the fp equation can be developed from the le following the usual procedure. the fp equation corresponding to eq. (2) is \[ \frac{\partial p(x,t)}{\partial t}=-\frac{\partial}{\partial x}\left[a(x)p(x,t)\right]+\frac{\partial^{2}}{\partial x^{2}}\left[b(x)p(x,t)\right] \] where \[ a(x)=f(x)+d_{1}g_{1}(x)g_{1}'(x)+\lambda\sqrt{d_{1}d_{2}}\,g_{1}'(x) \] and \[ b(x)=d_{1}[g_{1}(x)]^{2}+2\lambda\sqrt{d_{1}d_{2}}\,g_{1}(x)+d_{2}. \] the steady state probability distribution (sspd), from eq. (4), is given by \[ p_{st}(x)=\frac{n}{\{d_{1}[g_{1}(x)]^{2}+2\lambda\sqrt{d_{1}d_{2}}\,g_{1}(x)+d_{2}\}^{\frac{1}{2}}}\exp\left[\int^{x}\frac{f(x')\,dx'}{d_{1}[g_{1}(x')]^{2}+2\lambda\sqrt{d_{1}d_{2}}\,g_{1}(x')+d_{2}}\right] \] where n is the normalization constant. eq. (12) can be recast in the form \[ p_{st}(x)=n\exp[-\phi(x)] \] with \[ \phi(x)=\frac{1}{2}\ln\left[d_{1}[g_{1}(x)]^{2}+2\lambda\sqrt{d_{1}d_{2}}\,g_{1}(x)+d_{2}\right]-\int^{x}\frac{f(y)\,dy}{d_{1}[g_{1}(y)]^{2}+2\lambda\sqrt{d_{1}d_{2}}\,g_{1}(y)+d_{2}}, \] and \phi(x) defines the `stochastic potential' corresponding to the fp equation. we first consider the case when only the additive noise term is present in eq. (2), i.e., the second term on the r.h.s. is missing. the function is given in eq. (4). as pointed out in, the parameters (effective rate constant for protein synthesis) and (maximum dilution rate) are experimentally tunable. from eqs. (12) and (14), the expression potentials and the associated steady state probability distributions can be computed. with purely additive noise the stochastic potential has the form \[ \phi(x)=\frac{u(x)}{d_{2}}+\mathrm{const}, \] where u(x) is the deterministic potential, i.e., \[ u(x)=-\int^{x}f(y)\,dy. \] in the presence of only additive noise, the stochastic and deterministic potentials thus have similar forms. figure 3 displays the potential and the distribution versus x as the parameter is tuned from low to high values. the other parameter values are kept fixed with , 0.01 and , the values being identical to those reported in. for each value of, the potential is plotted only for the noise strength = 0.05, whereas is plotted for the noise strengths = 0.05 (solid line), 0.1 (dashed line) and 0.4 (dot-dashed line). one finds that by tuning from low to high values, one can pass from a monostable low expression state through a region of bistability, i.e.
, a coexistence of low and high expression states to a monostable high expression state .the region of bistability is distinguished by the appearance of two prominent peaks in the distribution of protein levels .one can also keep fixed and obtain a similar set of plots by varying the maximum dilution rate ., computed using a mixture model .( first panel ) deterministic potentials for , and in the region of bistability .the other parameter values are the same as in the case of fig .3 . ( second panel ) steady state probability distribution versus ( solid line ) for , and as fitted by the distribution ( dashed line ) obtained from a mixture model .the individual probability distributions are lognormal ( subpopulation 1 ) and gaussian ( subpopulation 2 ) .the parameters of the distributions and the coefficients and are also mentioned ( ) .( third panel ) relative weight ( in percentage ) of subpopulation 2 versus .( fourth panel ) variance of gaussian probability distribution ( subpopulation 2 ) as a function of . ]the stochastic potential is indicative of the steady state stability . in fig .3(a ) , for , the potential has a deep ( shallow ) minimum at low ( high ) values of x. for low values of the noise strength , e. g. , =0.05 ( solid line ) , the steady state probability distribution has a single prominent peak .noise can induce transitions from one local minimum to the other of the stochastic potential .the minima represent steady states which are separated by an energy barrier . for and , there are two energy barriers corresponding to the transitions from the low to the high expression states and vice versa . in the case of ,increased magnitudes of the additive noise flip the switch in the unfavorable direction ( energy barrier higher ) so that has a second peak , albeit not prominent , at a higher expression level . when , the energy barriers are of similar magnitude and has two prominent peaks at low and high expression levels when the noise strength is low ( ) . with increased noise strengths ,the two expression levels are more readily destabilized resulting in a smearing of the expression levels . has now a finite value for intermediate values of . in the case of , the stochastic potential has a single minimum at the high expression level so that is unimodal . with higher values of the noise strength ,the probability distribution becomes broader .one also notes that with increased values of , the magnitude of the high expression level also increases .( a ) and the steady state probability distributions ( ( b ) and ( c ) ) are displayed for and ( b ) and and (c ) .the other parameter values are fixed at , and with ( b ) and ( c ) .the noise strength is kept fixed at whereas the noise strength has the values ( solid line ) , ( dashed line ) , ( dot - dashed line ) in ( b ) and ( c ) .the stochastic potentials in ( a ) are plotted for and with the other parameter values as in ( b ) . ] in the region of bistability , the total population is a mixture of two subpopulations . in order to determine the effects of additive noise on the relative weights and variances of the subpopulation probability distributions , we take recourse to a mixture model in which the steady state probability distribution for the total population , . 
and are the steady state probability distributions for subpopulation 1 and 2 respectively and , are the coefficients in the linear combination ( ) .subpopulation 1 ( 2 ) is characterized by predominantly low ( high ) expression levels .the upper panel of fig .4 shows the deterministic potentials , , for , and in the region of bistability .the other parameter values are the same as in the case of fig .3 . the second panel in fig . 4 shows the steady state probability distributions , ( solid line ) , in the three cases as well as the fitting distributions ( dashed line ) using the mixture model .the additive noise strength is kept fixed at the value . in each case , and are given by a lognormal and a gaussian distribution respectively with the forms -\mu_{1})^{2}}{2\sigma_{1}^{2}}}}{\sqrt{2\pi}x\sigma_{1}}$ ] with mean and variance and with mean and variance .the values of the parameters of the individual distributions as well as the coefficients and are listed in fig .the third panel in the figure shows the relative weight ( in percentage ) of the subpopulation 2 as the additive noise is varied for the three different cases , and .the lowest panel of fig .4 shows the variances of the probability distributions associated with the subpopulation 2 versus in the three cases .the plots for subpopulation 1 are similar in nature .increased additive noise strength enhances noise - induced transitions over the potential barrier . for ,the transitions are from the low to the high expression state so that the relative weight of subpopulation 2 increases with increased noise strength . in the other two cases , the relative weight of subpopulation 2 decreases with the increase in the magnitude of .increased additive noise strength further increases the spreading ( measured by variance ) of the protein distributions around the mean levels , i.e. , brings in greater heterogeneity in the cell population .the lowest panel of fig .4 shows that the variance is not affected by but depends only on the additive noise strength . .stochastic potential ( a ) and the steady state probability distribution ( b ) are displayed for and .the other parameter values are fixed at , , and . the noise strength kept fixed at whereas the noise strength has values ( solid line ) , ( dashed line ) , ( dot - dashed line ) in ( b ) . ]we next consider the case when both additive and multiplicative noise terms are present in the le ( eq .we first assume that the multiplicative noise is associated with the maximum dilution rate , i.e. , in eq .( 2 ) is given by eq .we designate this type of noise as type 1 multiplicative noise .also , the two types of noise are taken to be uncorrelated , i.e. , the parameter ( eq . ( 3 ) ) is zero .figure 5 shows the expression potential ( a ) and the steady state probability distributions , (x ) ( ( b ) and ( c ) ) versus for and 19 ( b ) and and ( c ) .the other parameter values are fixed at .01 and with ( b ) and ( c ) . in computing , the additive noise strength kept fixed at whereas the multiplicative noise strength has the values ( solid line ) , 0.4 ( dashed line ) and ( dot - dashed line ) .the expression potentials in ( a ) are plotted for and with the other parameter values as in ( b ) .the form of the stochastic potential is given by eq .( 14 ) , which differs substantially from that of the deterministic potential .as in the case of fig .2 , there is a progression from the monostable low expression state through a region of bistability to the monostable high expression state . 
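the weights and variances discussed here can be estimated directly from the stationary distribution of eqs. (12) and (14) by splitting the probability mass at the unstable fixed point; a full lognormal-plus-gaussian fit (e.g. with scipy.optimize.curve_fit) can replace this crude split. as before, the rate function, the couplings g1 for the two noise types and all numerical values are assumptions made for illustration only.

```python
import numpy as np

def f(x, a=0.01, k=3.2, d=10.0, eps=17.0):        # assumed stand-in for eq. (1); k chosen
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x   # inside an illustrative bistable window

def p_st(D1, D2, g1, lam=0.0, xmax=4.0, n=400001):
    """stationary distribution of eqs. (12)/(14): P ~ B^{-1/2} exp(int f/B)."""
    x = np.linspace(1e-4, xmax, n)
    g = g1(x)
    B = D1 * g**2 + 2.0 * lam * np.sqrt(D1 * D2) * g + D2
    r = f(x) / B
    I = np.concatenate(([0.0], np.cumsum(0.5 * (r[1:] + r[:-1]) * np.diff(x))))
    P = np.exp(I - I.max()) / np.sqrt(B)
    P /= P.sum() * (x[1] - x[0])
    return x, P

def high_state_stats(x, P, x_u):
    """weight and variance of the probability mass above the barrier x_u."""
    dx, m = x[1] - x[0], x >= x_u
    w = P[m].sum() * dx
    mean = (x[m] * P[m]).sum() * dx / w
    var = ((x[m] - mean) ** 2 * P[m]).sum() * dx / w
    return w, var

g_type1 = lambda x: x / (1.0 + 17.0 * x)          # assumed coupling for noise on the dilution rate
g_type2 = lambda x: x / (1.0 + x)                 # assumed coupling for noise on the synthesis rate
x_u = 0.37                                        # unstable fixed point for these illustrative values

for label, g in (("type 1", g_type1), ("type 2", g_type2)):
    for D1 in (0.05, 1.0):
        x, P = p_st(D1, 0.01, g)
        w, var = high_state_stats(x, P, x_u)
        print(f"{label}  D1 = {D1:4.2f} :  weight = {100*w:5.1f} %   variance = {var:.4f}")
```

with the strongly damped type 1 coupling the effective diffusion b(x) barely changes with d1, while the type 2 coupling perturbs the weight and the variance far more strongly, in line with the behaviour described above.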
from fig .5(b ) one finds that the probability distributions versus are almost unaffected by changing the multiplicative noise strength from to .this is because , as given in eq .( 6 ) , has a small value due to the high value of ( ) in the denominator .noticeable differences in the probability distributions appear ( fig .5(c ) ) when has a lower value ( ) .one can conclude that increased fluctuations in the maximum dilution rate parameter have little effect on for moderately high values of .this is certainly not the case when or has lower values ., as computed using a mixture model .the additive noise strength is kept fixed at .the value of with the other parameter values the same as in the case of fig .figure 6 exhibits plots similar to figs . 3 and 5 with the multiplicative noise term associated with the protein synthesis term , i.e., is as given in eq .we designate this type of noise as type 2 multiplicative noise .increased multiplicative noise strengths now have a greater effect on the steady state probability distributions than in the earlier case when the multiplicative noise is associated with the maximum dilution rate ( fig .we again use the mixture model to determine the relative weights ( in percentage ) of the subpopulations as well as their variances as a function of the multiplicative noise strength with the noise being either of type 1 ( fluctuations in the maximum dilution rate ) or type 2 ( fluctuations in the effective protein synthesis rate constant ) . in both the cases ,the additive noise strength is kept fixed at .figure 7 shows the plots of the relative weights of subpopulation 2 and the variances of the associated probability distributions as a function of the multiplicative noise strength for both the type 1 and type 2 noises .the value of the deterministic potential of which is shown in fig .the other parameter values are kept the same as in the case of fig .3 . in the case of the type 1 noise ,the relative weight of a subpopulation is very little affected by increased noise strength whereas the type 2 noise has a substantially greater effect in bringing about phenotypic transitions between the stable expression states .for example , in the case of the type 1 noise , the change in the relative weight is very small , from 18.1% to 18.4% , when is increased from 0.05 to 1.0 . in the same range of values for ,the change in the relative weight in the case of the type 2 noise is much larger , from 22% to 47% . in the first case , the change in variance is also negligible , from to . in the latter case ,the change in variance is more prominent , from to .similar conclusions hold true for the other values of considered in fig .4 , namely , and . comparing figs . 
4 and 7, one finds that the additive noise has the more dominant effect in the spreading of the probability distributions .the emergent bistability model studied by us has the major feature that the protein decay rate is nonlinear in form .the significance of the results obtained by us is better understood if comparisons are made with the results obtained in the cases of unregulated gene expression and gene expression involving positive feedback ( hill coefficient ) and linear protein dilution rate .the dynamics of protein concentration in the case of unregulated gene expression is given by where and denote the protein synthesis rate and the cell growth rate ( rate of increase in cell volume ) respectively .experimental and theoretical results on the simple dynamical model indicate that multiplicative noise in the cell growth rate accounts for a considerable fraction of the total noise whereas the noise associated with the synthesis rate has only a moderate contribution . in the case of cooperative regulation of gene expression involving a positive feedback loop but linear protein decay rate the dynamics of protein concentrationis given by if the protein synthesis term is sufficiently nonlinear ( ) , one can obtain bistability in specific parameter regions .the positive feedback amplifies the small fluctuations associated with the synthesis rate constant so that the contribution of the synthesis term to the total noise is not negligible compared to that of the protein decay term . in the case of emergent bistability, we have shown that for moderately large values of the metabolic cost parameter , the multiplicative noise associated with the maximum dilution rate parameter has little influence on the shape of the steady state probability distribution . in this case , the contribution of the protein synthesis term to the total noise is greater .the specific nonlinear form of the protein dilution rate appears to attenuate the effect of fluctuations in the maximum dilution rate parameter . for low values of the parameter ,the effect of the multiplicative noise associated with the maximum dilution rate on the steady state probability distribution is noticeable but the extent of the region of bistability decreases as the values are lowered . in our study , the finding that the additive noise has the most dominant effect on the steady state protein distributions followed by the multiplicative type 2 and type 1 noise terms respectively is straightforward to explain .while the additive noise has a bare form , the noise in the effective synthesis rate constant ( type 2 noise ) is damped by a factor and the noise in the maximum dilution rate ( type 1 noise ) is damped even further by the factor for . .the two types of noise are now correlated with being the strength of the correlation .stochastic potential ( a ) and the steady state probability distribution ( b ) are displayed for where the system is bistable .the other parameter values are , , and .the noise strengths are kept fixed at the values and has the values ( dashed line ) , 0.0 ( solid line ) and -0.7 ( dot - dashed line ) in the successive plots . ] we lastly consider the case when the additive and multiplicative types of noise are correlated , i.e. , in eq .( 3 ) is .such correlations occur when the two types of noise have a common origin .we consider the multiplicative noise to be associated with the maximum dilution rate , i.e. 
, in eq .( 2 ) has the form shown in eq .figure 8 shows the plots of the stochastic potential ( eq .( 14 ) ) and the sspd ( eq ( 12 ) ) as functions of .the noise strengths and are kept fixed at the values . the stochastic potential and shown for three values of , ( dashed line ) , ( solid line ) and ( dot - dashed line ) . the value is included for the sake of comparison between the correlated and uncorrelated cases .the parameter values are kept fixed at , , and .we note from fig . 8that negative ( positive ) correlation decreases ( increases ) the depth of the right potential well .this is reflected in the steady state probability distribution with negative ( positive ) decreasing ( increasing ) the height of the second peak of from that in the case .the changes in the depth of the left potential well and the height of the first peak of are just the reverse since the probability distribution is normalized .we consider a bistable potential with two stable steady states at and ( ) separated by an unstable steady state at defining the barrier state . in the presence of noise , exits from the potential wells are possible .the exit time is a random variable and is designated as the first passage time . in this section , we study the effects of additive and multiplicative noise on the mean first passage time ( mfpt ) .consider the state of the system to be defined by at time with x lying in the interval ( ) .the first passage time is the time of first exit of the interval .the mfpt is the average time of the first exit and satisfies the equation the mfpt for exit from the basin of attraction of the stable steady state at satisfies eq .( 19 ) with the interval and boundary conditions given by the prime denotes differentiation with respect to , with reflecting boundary condition at and absorbing boundary condition at . in a similar manner , one can compute the mfpt ( from eq .( 19 ) ) for exit from the basin of attraction of the stable state at .the interval is now with and being an absorbing and a reflecting boundary point respectively , i.e. , and . and are respectively the mfpts for exits from the basins of attraction of the stable steady states at and .six different cases are considered : ( a ) log [ mfpt ] versus , with only the additive noise term ( strength ) present , ( b ) log [ mfpt ] versus with both additive and multiplicative ( associated with the effective protein synthesis rate constant ) noise terms are present .the noise strengths are and ; ( c ) and ( d ) are similar respectively to ( a ) and ( b ) except that log [ mfpt ] is plotted as a function of * * ; * * ( e ) and ( f ) are similar respectively to ( b ) and ( d ) except that the additive noise strength is changed to . in the cases ( a ) , ( b ) and ( e ) , . in the cases( c ) , ( d ) and ( f ) , . in all the casesthe parameter and . ] figure 9 displays the results of the computations of the mfpts in different cases : ( a ) the logarithm of the mfpt is plotted versus with only the additive noise ( strength ) present , ( b ) the same as in ( a ) but with the addition of the multiplicative noise associated with the effective protein synthesis rate constant ( the multiplicative noise strength ) , ( c ) and ( d ) correspond respectively to the cases considered in ( a ) and ( b ) but now [mfpt ] is plotted as a function of the parameter ( maximum dilution rate ) , cases ( e ) and ( f ) correspond respectively to ( b ) and ( d ) but with the additive noise strength changed from to . in all the casesthe parameter and . 
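the double quadrature behind eq. (19) is straightforward to carry out numerically once the drift and diffusion of the fp equation are specified. the sketch below treats the purely additive case (drift f, constant diffusion d2) with the same assumed stand-in for eq. (1); the reflecting, starting and absorbing points are the illustrative fixed-point values quoted earlier and would have to be replaced by the actual ones.

```python
import numpy as np

def f(x, a=0.01, k=6.0, d=10.0, eps=17.0):       # assumed stand-in for eq. (1)
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def mfpt(x_reflect, x_start, x_absorb, D2, n=20001):
    """mean first passage time for additive noise only:
    T(x0) = int_x0^s dy / psi(y) int_r^y psi(z) / D2 dz,  psi(y) = exp(int_r^y f/D2),
    with reflecting boundary at r and absorbing boundary at s."""
    x = np.linspace(x_reflect, x_absorb, n)
    dx = x[1] - x[0]
    psi = np.exp(np.cumsum(f(x) / D2) * dx)
    inner = np.cumsum(psi / D2) * dx
    T = np.cumsum((inner / psi)[::-1])[::-1] * dx
    return float(np.interp(x_start, x, T))

# escape from the (illustrative) low state over the barrier at the unstable point
for D2 in (0.05, 0.01, 0.002):
    print(f"D2 = {D2:6.3f}   T(low -> barrier) = {mfpt(0.0, 0.002, 0.07, D2):.3g}")
```

for the reverse transition the roles of the reflecting and absorbing boundaries are interchanged, and for multiplicative noise d2 is replaced by the state-dependent b(x) together with the corresponding drift a(x).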
in the cases ( a ) , ( b ) and ( e ) , . in the cases( c ) , ( d ) and ( f ) , .some features are worth pointing out in the plots of the mfpts versus and .the mfpt decreases and increases with increasing values of ( figs .9(a ) , 9(b ) and 9(e ) ) whereas increases and decreases with increasing values of ( figs .9(c ) , 9(d ) and 9(f ) ) . in the log [ mfpt ] versus plots the rise in is sharper than that of in the log [ mfpt ] versus plots .similarly , in the first set of plots , the fall in is slower than that of in the second set of plots . comparing the figs .9(b ) and 9(e ) , the crossing point of the two mfpts and shifts to the left when the noise strength is changed from ( fig .9(b ) ) to ( fig .9(e ) ) . on the other hand , the crossing point shifts to the right in the log [ mfpt ] versus plots whenthe noise strength is increased ( figs .9(d ) and 9(f ) ) .a feature to emerge out of our study concerns the opposite effect that the two types of multiplicative noise have on the dynamics .this is more clearly seen in the plots of log [ mfpt ] versus and in fig .the mfpt decreases and increases with increasing values of whereas the opposite trend is observed for increasing values of .the shifts in the crossing points of the two mfpts and when the additive noise strength is increased are in opposite directions in the two cases .8 shows the effect of changing the correlation strength when the additive and multiplicative noise terms are correlated .the multiplicative noise appears in the maximum dilution rate parameter .similar plots ( not shown ) are obtained when the multiplicative noise is associated with the effective protein synthesis rate constant . for this case as well as for the type of dynamics described by eq . ( 18 ) ( with the multiplicative noise associated with the protein synthesis rate constant ) , the effect of changing the strength of the correlation between the additive and multiplicative noises is opposite to that seen in fig .8 . as a function of the parameter exhibits hysteresis .the stable steady states are represented by solid lines whereas the dotted line describes the branch of unstable steady states .the bistable region separates the monostable low and high expression states .the points marked and denote the lower and upper bifurcation points . ]bistability is often accompanied by hysteresis an example of which is shown in fig .10 . in the figure ,the steady state protein concentration is plotted as a function of the parameter .the solid branches represent the stable steady states separated by a branch of unstable steady states ( dotted line ) .the bistable region separates the monostable low from the monostable high expression state .the transitions from the low to high and high to low expression states are discontinuous in nature and the special values of ( marked and in the figure ) at which they occur are the bifurcation points of the dynamics .the path from the low to high expression state is not reversible .as increases from low to high values , a discontinuous transition occurs at the upper bifurcation point .as increases beyond this point , the steady state continues to be the high expression state .if one now reverses the direction of change in the value of , i.e. , decreases from high to low values , there is no discontinuous transition at from the high to the low expression state .the reverse transition occurs only at the lower bifurcation point .the irreversibility of paths between the low and high expression states results in hysteresis . 
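the hysteresis loop of fig. 10 can be traced numerically by sweeping the bifurcation parameter up and down while letting the system relax from the previously obtained state at each step; the rate function and the swept range below are again the assumed, illustrative ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, a=0.01, k=6.0, d=10.0, eps=17.0):        # assumed stand-in for eq. (1)
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def relax(x0, k, t_end=500.0):
    """deterministic steady state reached from x0 for a given value of k."""
    sol = solve_ivp(lambda t, x: f(x, k=k), (0.0, t_end), [x0], rtol=1e-8, atol=1e-10)
    return float(sol.y[0, -1])

ks = np.linspace(2.0, 10.0, 17)
x_up, x = [], 1e-3
for k in ks:                      # forward sweep, starting on the low branch
    x = relax(x, k)
    x_up.append(x)
x_dn, x = [], x_up[-1]
for k in ks[::-1]:                # backward sweep, starting on the high branch
    x = relax(x, k)
    x_dn.append(x)
x_dn = x_dn[::-1]

for k, xu, xd in zip(ks, x_up, x_dn):
    print(f"k = {k:5.2f}   up-sweep = {xu:8.4f}   down-sweep = {xd:8.4f}")
```

the two sweeps coincide outside the bistable window and differ inside it, reproducing the irreversibility of the paths described above.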
as pointed out earlier , in the case of the model under study, one can pass continuously from the low to the high expression state bypassing the region of bistability ( fig .an example of this type of behavior is obtained in the study on multistability in the lactose utilization network . in the wild - type _ lac _ system, one can not go from one region of monostability to the other without passing through a region of bistability .appropriate modification of the natural system make it possible to connect the two regions of monostability via a path in which no discontinuities in the steady state expression levels occur .11 exhibits the steady state probability distributions versus for different values of when the path from the low to the high expression state is continuous ( a ) and when the transition path passes through a region of bistability ( b ) . in the latter case ,the probability distribution is bimodal in the intermediate range of values .only additive noise ( strength ) is considered in both the cases and the other parameter values are and with in ( a ) and in ( b ) . versus .by changing the values of , a transition from the low to the high expression state is obtained in a continuous manner ( a ) and by passing through a region of bistability ( b ) .only additive noise ( strength ) is considered in both the cases and the other parameter values are and with in ( a ) and in ( b ) . ]emergent bistability is a recently discovered phenomenon demonstrated in the case of a synthetic circuit .there is now some experimental evidence that a similar mechanism may be at work in microorganisms like mycobacteria subjected to nutrient depletion as a source of stress . in emergent bistability ,the coexistence of two stable gene expression states is an outcome of a nonlinear protein decay rate combined with a positive feedback in which cooperativity in the regulation of gene expression is not essential .the nonlinear protein decay rate is obtained if the synthesized proteins inhibit cell growth .cell growth results in the dilution of protein concentration so that the protein decay rate is a sum of two terms : the dilution rate and the protein degradation rate . in most cases ,the latter rate is sufficiently slow so that the protein decay rate is dominated by the dilution rate . in the case of emergent bistability , the dilution rate has the form where is the parameter representing the metabolic burden . for ,the dilution rate is linear in as is the protein degradation rate . in this case ,bistable gene expression via a positive feedback is possible only if the protein synthesis rate is sufficiently nonlinear .this is achieved when the regulatory proteins form multimers ( dimers , tetramers etc . ) so that is replaced by ( n , the hill coefficient , is ) in eq .this is the scenario that has been mostly studied so far whereas the issue of bistability due to a combination of non - cooperative positive feedback and nonlinear protein degradation rate is not fully explored . in the case of bistable gene expression ,the generation of phenotypic heterogeneity in a population of cells is brought about by fluctuation - driven transitions between the stable expression states . 
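whether a particular path in parameter space connects the low and the high expression state without entering the bistable window can be checked directly by counting the stable states along the path; the straight-line path, the value of eps and the model form below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def f(x, a, k, d, eps):                           # assumed stand-in for eq. (1)
    return a + k * x / (1.0 + x) - d * x / (1.0 + eps * x) - x

def stable_states(a, k, d, eps, xmax=20.0, n=80001):
    xs = np.linspace(1e-9, xmax, n)
    fs = f(xs, a, k, d, eps)
    out = []
    for i in np.where(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]:
        x0 = brentq(f, xs[i], xs[i + 1], args=(a, k, d, eps))
        if f(x0 + 1e-6, a, k, d, eps) < f(x0 - 1e-6, a, k, d, eps):
            out.append(x0)
    return out

# walk a straight line in the (k, d) plane; a single stable state at every point
# means the low-to-high transition along this path is continuous (no hysteresis)
a, eps = 0.01, 4.0
for s in np.linspace(0.0, 1.0, 11):
    k, d = 2.0 + 8.0 * s, 12.0 - 8.0 * s
    ss = ", ".join(f"{v:.3f}" for v in stable_states(a, k, d, eps))
    print(f"k = {k:5.2f}  d = {d:5.2f}   stable states: {ss}")
```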
in this paper , we analyze for the first time the effects of additive and multiplicative noise on the dynamics governing emergent bistability .such studies acquire significance in the light of the fact that the generation of phenotypic heterogeneity enables a subset of a population of microorganisms to survive under stress .examples of such stresses include depletion of nutrients , environmental fluctuations , lack of oxygen , application of antibiotic drugs etc .there is now considerable experimental evidence that positive feedback and gene expression noise provide the basis for the ` advantageous ' heterogeneity observed in microbial populations .the heterogeneity is usually in the form of two distinct subpopulations with low and high expression levels of a key regulatory protein , e.g. , comk in _ b. subtilis _ and rel in _ m. smegmatis _ .high comk levels in a fraction of the _ b. subtilis _ population result in the development of ` competence ' in the subset of cells enabling the subpopulation to adapt to changed circumstances .the role of noise in bringing about phenotypic transitions from the low to high comk expression states has been demonstrated experimentally , the reduction of noise results in a smaller fraction of cells in which competence is developed . in mycobateria ,high rel levels in a subpopulation of cells ( the so - called _ persisters _ ) initiate the stringent response in these cells enabling the subpopulation to survive under stresses like nutrient depletion .the role of positive feedback and gene expression noise in the generation of two distinct subpopulations has been investigated experimentally in _m. smegmatis _ .as mentioned earlier , there is some experimental evidence that the appearance of two distinct subpopulations , in terms of the rel expression levels , is an outcome of emergent bistability .the key elements of the stringent response pathway and the ability to survive over long periods of time under stress are shared between the mycobacterial species _m. smegmatis _ and _ m. tuberculosis _ .the latter , the causative agent of tuberculosis , has remarkable resilience against various types of stress including that induced by drugs .a mechanism similar to that in _ m. smegmatis _ may be responsible for the generation of the subpopulations of persisters ( not killed by drugs ) and non - persisters .studies based on stochastic dynamic approaches provide knowledge of the key parameters controlling the operation of relevant gene circuits and the effects of fluctuations in these parameters towards the generation of phenotypic heterogeneity .the studies provide valuable inputs in the designing of effective strategies for drug treatment .we have further pointed out the possibility of connecting the low and high expression states in a continuous manner . in this case , the region of bistability is bypassed so that no discontinuity in the steady state protein levels as seen in the hysteresis curve of fig .10 occurs .the bypassing is facilitated for low values of the ` metabolic burden ' parameter . since experimental modulation of the parameter and possible , the theoretical prediction could be tested in an actual experiment .the dynamics considered in the present paper correspond to that of an average cell . 
in a microscopic model , the growth rates of the two subpopulations in the region of bistability could be different .the potential impact of the growth rate on model parameters other than the protein dilution rate has been ignored in our study but this simplification is possibly well justified .an issue that has not been explored in the present study is that of stochastic gene expression resulting in a bimodal distribution of protein levels in the steady state but without underlying bistability arising from deterministic dynamics .the issue of bimodality without bistability has been explored both theoretically and experimentally .the present study is based on the approximate formalism involving the langevin and fp equations .the chief advantage of the formalism lies in its simplicity and its ability to identify the separate effects of additive and multiplicative noises of various types .the formalism is valid when the number of molecules involved in the dynamics is large and the noise is small .studies based on more rigorous approaches are desirable for a greater understanding of the effects of noise on emergent bistability .sg acknowledges the support by csir , india , under grant no . 09/015(0361)/2009-emr - i .
|
positive feedback and cooperativity in the regulation of gene expression are generally considered necessary for obtaining bistable expression states. recently, a novel mechanism of bistability, termed emergent bistability, has been proposed which involves only positive feedback and no cooperativity in the regulation. an additional positive feedback loop is effectively generated through the inhibition of cellular growth by the synthesized proteins. the mechanism, demonstrated for a synthetic circuit, may also be prevalent in natural systems, as some recent experimental results appear to suggest. in this paper, we study the effects of additive and multiplicative noise on the dynamics governing emergent bistability. the calculational scheme employed is based on the langevin and fokker-planck formalisms. the steady state probability distributions of protein levels and the mean first passage times are computed for different noise strengths and system parameters. in the region of bistability, the bimodal probability distribution is shown to be a linear combination of a lognormal and a gaussian distribution. the variances of the individual distributions and their relative weights are further calculated for varying noise strengths and system parameters. the experimental relevance of the model results is also pointed out.
|
it is generally conjectured that solar flares represent a dissipative part of the release of the magnetic energy accumulated in active regions at the sun . the standard cshkp model ( see , e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* and references therein ) agrees well with the observed large - scale dynamics of eruptive events . in this modelthe flares are initiated by eruption of a flux - rope ( in many cases observed as a filament ) via , for example , a kink or torus instability ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) evolving later into a coronal mass ejection ( cme ) .the latter is trailed by a large - scale current layer behind the ejecta . in this trailing current layer reconnectionis supposed to give rise to various observed phenomena like hot sxr / euv flare loops rooted in h chromospheric ribbons , hxr sources in loop - top and foot - points , and radio emissions of various types . some authors ( e.g. * ? ? ?* ; * ? ? ?* ) argue that the bright thin ray - like structure observed sometimes behind cmes may represent a manifestation of the density increase connected with this current sheet ( cs ) . however , closer analysis of the classical cshkp model revealed some of its open issues .namely , the time - scales of reconnection in such a thick flare cs appeared to be much longer than typical flare duration .in other words , reconnection rate in such configurations has been found to be insufficient for the rapid energy release observed in flares .later it was found that the dissipation necessary for reconnection in the practically collisionless solar corona is an essentially plasma - kinetic process ( see , e.g. , * ? ? ?* ) which takes place at very small spatial scales .hence , the question arises , how sufficiently thin css can build up within the global - scale , thick cme - trailing current layer : open is the actual physical mechanism that provides the energy transfer from the global scales , at which the energy is accumulated to the much smaller scales , at which the plasma - kinetic dissipation takes place . addressing these questions suggested a concept of cascading ( or _ fractal _ , as they call it ) reconnection . according to their scenarioa cascade of non - linear tearing instabilities occurs in the continuously stretched current layer formed behind a cme .multiple magnetic islands ( helical flux - ropes in 3d ) , also called plasmoids , are formed , interleaved by thin css .due to increasing separation of the plasmoids in the continuously vertically extending trailing part of the cme the interleaving css are subjected to further filamentation until the threshold for secondary tearing instability is reached .this process continues further , third and higher levels of tearing instabilities take place , until the width of the css reaches the kinetic scale .this scenario has recently been supported by the analytical theory of _ plasmoid instability _ by .they show that the high - lundquist - number systems with high enough current - sheet length - to - width ratio are not subjected to the slow sweet - parker reconnection but they are inherently unstable to formation of plasmoids on very short time - scales . , , and confirmed predictions of this analytical theory by numerical simulations with high lundquist numbers . the model by presence of shear flows around current sheet ( cs ) . relate the theory of plasmoid instability further to the concept of fractal reconnection suggested by . 
and study the plasmoid instability numerically at smaller scales and investigate its relation to the hall reconnection .they found various regimes of parameters where different type of reconnection prevails .eventually , however , kinetic scales are reached where dissipation and particle acceleration take place most likely via kinetic coalescence of micro - plasmoids and , possibly , their shrinkage .in addition to the issue of energy transport there are also other questions that remain open in the cshkp model .it is its apparent insufficiency to accelerate such a number of particles in its single diffusion region around the x - line that would correspond to the fluxes inferred from hxr observations in the thick - target model ( * ? ? ?* ; * ? ? ?* and references therein ) . and ,furthermore , the hxr and radio ( e.g. decimetric spikes , see * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) observations indicate that the particle acceleration takes place via multiple concurrent small - scale events distributed chaotically in the flare volume rather than by a single compact acceleration process hosted by a single diffusion region .such observations are usually referred to as signatures of fragmented / chaotic energy release in flares . because of these difficulties an alternative concept based on the so - called `` self - organized criticality '' ( soc ) has been proposed .this class of models is based on the idea of multiple small - scale css embedded in chaotic ( braided ) magnetic fields that are formed as a consequence of random motions at the system boundary ( photosphere ) .multiple css can host multiple reconnection sites what provides natural explanation of observed signatures of fragmented energy release . at the same moment they provide larger total volume of diffusion regions , perhaps sufficient to account for the observed particle fluxes also quantitatively .organised large - scale picture ( i.e. coherent structures like the flare - loop arcades ) should be in the case of soc - based models achieved by the so - called avalanche principle : a small - scale energy release event can trigger similar events in its vicinity provided the system is in marginally stable state ( due to the continuous pumping of energy and entropy through the boundary ) .nevertheless , it is difficult to achieve such coherent large - scale structures as they are usually observed in solar flares in the frame of soc - based models .thus , solar flares appear to be enigmatic phenomena exhibiting duality between the regular , well - organized dynamics of flares observed at large scales and signatures of fragmented / chaotic energy release seen in observations related to flare - accelerated particles .while the coherent global eruption ( flare ) picture seems to be in agreement with the cshkp scenario , the observed fragmented - energy - release signatures favor the soc - based class of models . in the present paper , we suggest that cascading reconnection can address these three pressing questions ( i.e. energy transport across the scales , accelerated - particles fluxes , and the organized / chaotic picture duality ) as closely related to each other . in our viewthe energy is transferred from large to small scales by the cascade of fragmentation of originally large - scale magnetic structures to smaller elements .we identify two elementary processes of this fragmentation ( see section [ sect : results ] ) . 
in the course of this process also the initial current layer fragments into multiple small - scale , short - living current sheets .these current sheets are hierarchically embedded inside the thick current layer in qualitatively self - similar manner . in this sensethe cascading fragmentation reminds soc models , but now the chaotic distribution of small - scale currents results from _ internal _ instabilities of the global current layer .the fragmented current layer represents the modification of the standard cshkp model and thus it keeps coherent large - scale picture of solar flares . at the same time it addresses the observed signatures of fragmented energy release and the question of efficient particle acceleration .we believe that the cascading reconnection in solar coronal current layers can thus address the three main problems mentioned above _ en bloc _ , and it reconciles the two concepts of the standard cshkp and soc - based models seen hitherto as antagonistic .the paper is organized as follows : first , we describe the model used in our investigations . then we present results of our high - resolution mhd simulation of cascading reconnection in an extended , global , eruption - generated current layer .we identify the processes that lead to the fragmentation of magnetic and current structures to smaller elements. then we analyze the resulting scaling law of the energy cascade .we describe the structure , distribution and dynamics of small dissipation regions embedded in an initially thick current layer .finally , we discuss the implications that cascading reconnection have for theory of solar flares .generally speaking , the solar flare involves three kinds of processes that take place in different scale domains see fig [ fig : transfer ] . at the largest scales magnetic - field energy is accumulated . during this stage flux - rope ( filament ) is formed and its magnetic energy increases .eventually it looses its stability and gets ejected .this process already represents ( ideal ) release of the magnetic - energy at large scales .subsequently , a current layer is formed and stretched behind ejected flux - rope .energy transfer from large scales at which the magnetic energy has been accumulated to the small dissipation scale occupies intermediate range of scales .the dissipation itself takes place at smallest , kinetic scales . in this paperwe aim at studying energy transfer from large to small scales by means of numerical simulations . despite the high spatial resolutionour simulation is still within the mhd regime .also , we do not address the very process of energy accumulation i.e. the flux - rope formation and energization , nor its instability and subsequent current - layer formation .instead we assume a relatively thick and extended current layer to be already formed at the initial state of our study . in order to cover a large range of scales we limit ourselves to the 2d geometry allowing for all three components of velocity and magnetic field ( commonly referred to as 2.5d models ) .this is a reasonable assumption since observations show that the typical length of flare arcades along the polarity inversion line ( pil ) is much larger than the dimension across the pil . in the range of scales that we are interested in the evolution of magnetized plasmacan be adequately described by a set of compressible resistive one - fluid mhd equations ( e.g. * ? ? ?* ) : the set of equations ( [ eq : mhd ] ) is solved by means of finite volume method ( fvm ) . 
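the equation set referred to as eqs. ([eq:mhd]) is not reproduced in this text. for orientation, a standard compressible resistive one-fluid mhd system with gravity, of the kind described here, can be sketched as follows; the notation and in particular the exact form of the energy equation are assumptions and need not coincide with the authors' formulation: \[ \begin{aligned} &\partial_{t}\rho+\nabla\cdot(\rho\mathbf{u})=0,\\ &\partial_{t}(\rho\mathbf{u})+\nabla\cdot(\rho\mathbf{u}\mathbf{u})=-\nabla p+\mathbf{j}\times\mathbf{B}+\rho\,\mathbf{g},\\ &\partial_{t}\mathbf{B}=\nabla\times(\mathbf{u}\times\mathbf{B})-\nabla\times(\eta\,\mathbf{j}),\\ &\partial_{t}U+\nabla\cdot\mathbf{S}=\rho\,\mathbf{g}\cdot\mathbf{u},\qquad \mathbf{j}=\nabla\times\mathbf{B}/\mu_{0}, \end{aligned} \] with U the total energy density and \mathbf{S} the corresponding energy flux; the plasma pressure follows from U through an equation of state.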
for the numerical solution it is first rewritten in its conservative form .the ( local ) state of magneto - fluid is then represented by the vector of basic variables , where , , , and are the plasma density , plasma velocity , magnetic field strength and the total energy density , respectively . the energy flux and auxiliary variables plasma pressure and current density are defined by the formulae : and is the gravity acceleration at the photospheric level .microphysical ( kinetic ) effects enter into the large - scale dynamics by means of transport coefficients here via a ( generalized ) resistivity .in general , the role of non - ideal terms in the generalized ohm s law increases as the current density becomes more concentrated via current sheet filamentation .to quantify this intensification we use the current - carrier drift velocity as the threshold for non - ideal effects to take place .such behavior is presumed by theoretical considerations and confirmed by kinetic ( vlasov and pic codes ) numerical experiments .in particular , we assume the following law for ( generalized ) resistivity ( see also * ? ? ? * ) : in order to study the energy - transfer cascade it is appropriate to cover a large range of scales . for structured gridsit means the utilization of very fine meshes . for a given simulation box sizethe number of finite grid cells is limited technically by cpu - time and memory demands .alternatively , one can use a refined mesh only at locations where the small - scale dynamics becomes important this idea forms the base for the adaptive mesh refinement ( amr ) technique ( see , e.g. , * ? ? ?* ; * ? ? ?* ; * ? ? ?we implemented the numerical solver for the mhd system of equations ( [ eq : mhd ] ) in the form of block amr finite - volume method ( fvm ) code : whenever the current sheet width drops below a certain threshold , a refined mesh sub - domain is created and initialized with values one step backward in time .its evolution is then computed using accordingly refined time - step ( this procedure is commonly known as sub - cycling ) .the global dynamics influences the sub - domain evolution by means of time- and spatially varying ( interpolated ) boundary conditions . for the details of our amr algorithm see . as an illustration , in fig .[ fig : zoom_cells ] the regions of enhanced grid resolution at ( for units used see below ) are depicted at the background of current density and magnetic field .the partial differential equations ( [ eq : mhd ] ) are of a mixed hyperbolic - parabolic type .we utilized the time - splitting approach for their solution : first , hyperbolic ( conservative ) part is solved using a second - order fvm leap - frog scheme . in a second stepthe magnetic - diffusivity term is solved by means of a ( semi - implicit ) alternating - direction - implicit ( adi ) scheme .we solve the mhd system of eqs .( [ eq : mhd ] ) in a 2d simulation box initially on a global ( coarse ) cartesian grid .the horizontal and vertical dimensions of calculated box are 800 and 6400 grid cells , respectively . using mirroring boundary condition at in the symmetric cs ( see below )we obtain a doubled box with an effective grid of cells ( see also * ? ? ?* for details ) .we use the following reference frame : the -axis corresponds to the vertical direction , the -axis is the invariant ( i.e. 
) direction along the pil .the -axis is perpendicular to the current layer and centered at the initial current maximum .the simulation is thus performed in the -plane , while the -plane corresponds to the solar photosphere ; the pil is located at , ( see fig . 5 in * ? ? ?the simulation is performed in dimensionless variables .they are obtained by the following normalization : the spatial coordinates , , and are expressed in units of the current sheet half - width at the photospheric level ( ) .time is normalized to the alfvn - wave transit time through the current sheet , where is the asymptotic value at and of the alfvn speed at .( [ eq : eta ] ) for anomalous resistivity in the dimension - less variables then reads for . and are now dimension - less parameters .we used and in our simulation .the choice of the threshold is not arbitrary as it is closely related to the numeric resolution reached by the code see the discussion in section [ sect : discussion ] .the parameter was adjusted to reach peak anomalous resistivities in the order of times higher than the spitzer resistivity in the solar corona similar values for resistivity based on non - linear wave - particle interaction are indicated by vlasov simulations . if not specified otherwise , all quantities in the paperare expressed in this dimension - less system of units . in order to apply our results to actual solar flares appropriate scaling of dimension - less variables , however , has to be performed .the gravity stratification included in our model introduces a natural length scale . assuming an ambient coronal temperature of mk the corresponding scale - height for a fully - ionized hydrogen plasma is mm .the value used in our simulation is , hence km . for thisscaling the flare arcade loop - top is km high , which corresponds well to observed values .the initial cs width km is roughly in line with the fact that the cs was formed by stretching of the magnetic field in the trail of ejected flux - rope / filament which itself has typical transversal dimensions km .it also corresponds ( by order of magnitude ) to estimations made from observations of thin layers trailing behind cmes that are sometimes interpreted as signatures of current sheets . for the ambient magnetic field in the vicinity of the current layerwe assume gauss ( see , e.g. , the discussion in * ? ? ?the initial state has been chosen in the form of a vertical generalized harris - type cs with the magnetic field slightly decreasing with height : in the following we will refer to and as the _ principal components _ and as the _ guide field_. the characteristic width of the initial current sheet varies with as and , , , , and are appropriately chosen constants : , , , , and . the initial state given by eq .( [ eq : init ] ) corresponds to a stratified atmosphere in the presence of gravity ( which is consistent with eqs .( [ eq : mhd ] ) ) .the divergence of the magnetic field - lines towards the upper corona is in agreement with the expansion of the coronal field .it also favors up - ward motion of secondary plasmoids formed in the course of cs tearing .this leads to further filamentation of the current sheets which develop between the plasmoids . 
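the explicit profiles and constants of eq. ([eq:init]) are not reproduced in this text; the python sketch below therefore builds only a qualitatively similar configuration, a vertical harris-type sheet whose width grows and whose principal field weakens with height, plus a guide field, with all profiles and numbers assumed for illustration.

```python
import numpy as np

nx, nz = 400, 1600
x = np.linspace(-20.0, 20.0, nx)          # across the sheet, in units of the half-width at z = 0
z = np.linspace(0.0, 320.0, nz)           # height above the photospheric boundary
X, Z = np.meshgrid(x, z, indexing="ij")

w  = 1.0 + 0.02 * Z                       # sheet half-width growing with height (assumed profile)
B0 = 1.0 / (1.0 + Z / 200.0)              # principal field decreasing with height (assumed profile)
Bz = B0 * np.tanh(X / w)                  # principal (vertical) component
By = B0 / np.cosh(X / w)                  # guide field, chosen here to keep |B| smooth across the sheet

# out-of-plane current density j_y = dBx/dz - dBz/dx in normalized units (Bx = 0 initially)
jy = -np.gradient(Bz, x, axis=0)
print("peak |j_y| at the base:", float(np.abs(jy[:, 0]).max()))
```

the widening of the sheet and the weakening of the principal field with height reproduce, qualitatively, the divergence of the field lines towards the upper corona mentioned above.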
the current density and magnetic field at the initial stateare displayed in fig .[ fig : init ] .a rather thick current layer is visible .an enhanced view of the selected area is presented in the right panel for the sake of direct comparison with the current - density filamentation developed in later stages of the evolution ( see below in fig . [fig : zoom1 ] ) .free boundary conditions are applied to the upper and right part of the actually calculated right half of the box .it means that von neumann prescription has to be fulfilled for all calculated quantities except the normal component of magnetic field and its contribution to the total energy density . and are extended in the second step fulfilling . in order to satisfy mhd boundary conditions symmetric ( ) relationsare used for , , , and anti - symmetric ( ) for at the bottom boundary while velocities are set to zero there ( ) .this ensures that the principal magnetic - field component is vertical at the bottom boundary and that the total magnetic flux passing through that boundary does not change on the rather short time - scales of the eruption , as enforced by the presence of a dense solar photosphere .mirroring boundary conditions ( symmetric in , , , and and antisymmetric in and ) are used for the left part of boundary at ( = the center of the cs ) .we use these symmetries to construct ( mirror ) the left half of the full effective box .the asymptotic plasma beta parameter at and is and the ratio of specific heats is ( adiabatic response ) .the coarse - mesh sizes are in the dimensionless units .thus , with the reference frame established above the entire box corresponds to in the -plane .the simulation was performed over 400 normalized alfvn times . to save disk space only the most interesting interval has been recorded with a step of .note that the initial state described by eq .( [ eq : init ] ) is not an exact mhd equilibrium. nevertheless , the resulting field variations are much weaker than those introduced by reconnection . at the very beginning , in order to trigger reconnection , the system is perturbed by enhanced resistivity localized in a small region surrounding a line , in the invariant direction for a short time ( see also * ? ? ?this short perturbation sets a localized inflow which somewhat compresses the current layer around the selected point .it should mimic the effect of various irregularities that can be expected during the cs stretching in actual solar eruptions , see also .later , the resistivity is switched on only if the threshold according to equation ( [ eq : eta ] ) is exceeded .as the threshold for anomalous resistivity can not be reached for a coarse grid ( the threshold for mesh - refinement is reached earlier than the threshold for anomalous resistivity onset ) , the condition in eq .( [ eq : eta ] ) is actually checked only at the smallest resolved scale , for the large - scale dynamics we take .we used the above described numerical code in order to study , which mechanisms are involved in the transfer of free magnetic energy from large to small scales . thanks to the adaptive mesh we were able to cover scales from to ( the larger size of the simulation box ) , i.e. over almost five orders of magnitude .the early system evolution can be briefly described as follows : after the localized initial resistivity pulse a flow pattern sets - up that leads to cs stretching ( in the - ) and compression ( in the -direction ) . 
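A minimal sketch of the threshold-switched resistivity law invoked here is given below; the exact functional form of eq. ([eq:eta]) and the values of the amplitude and critical drift speed are not recoverable from the text, so both constants and the assumed linear growth above threshold are illustrative only.

```python
import numpy as np

# Illustrative threshold-switched ("anomalous") resistivity. The exact form of
# eq. (eta) and its constants were lost in extraction: the amplitude C_eta,
# the critical drift speed v_cr and the linear growth above the threshold are
# assumptions standing in for the published prescription.
C_eta = 1e-3     # assumed resistivity amplitude (dimensionless)
v_cr = 50.0      # assumed critical current-carrier drift speed (dimensionless)

def drift_speed(j, rho):
    """Current-carrier drift speed ~ |j| / rho in dimensionless units."""
    return np.abs(j) / rho

def anomalous_resistivity(j, rho):
    """Zero below the threshold; grows with the excess drift speed above it."""
    v_d = drift_speed(j, rho)
    return np.where(v_d > v_cr, C_eta * (v_d / v_cr - 1.0), 0.0)
```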
eventually , the condition in eq .( [ eq : eta ] ) for anomalous resistivity is reached at the smallest resolved scale and first tearings occur .dynamics of the plasmoids formed by the tearing process leads to further stretching of css interleaving the mutually separating plasmoids .this leads to further generation of tearing . later , after the smallest magnetic structures yet consistent with the resolution start appearing . herewe present an analysis of this more developed stage of cascade .results are shown in figs .[ fig : zoom1 ] and [ fig : zoom2 ] . fig .[ fig : zoom1 ] shows the state of magnetic field and current density at . for better orientation auxiliary linesare added indicating the locations where and .their intersections show the positions of o - type and x - type `` nulls '' the points , where only the guide field remains finite .areas indicated by blue line are consecutively zoomed ( from left to right panels ) .the left - most panel shows the entire simulation box and the right - most corresponds to a zoom at the limit of the amr - refined resolution .the figure shows , how parts of the current layer are stressed and thinned between separating magnetic islands / plasmoids formed by tearing instabilities .the current - layer filamentation is the most clearly pronounced if one compares the zoomed views of the same selected area at the initial state ( fig .[ fig : init ] , right panel ) and the system state at ( fig .[ fig : zoom1 ] , the third panel ) . during the dynamic evolutionthe even thinner , stretched current layers become , after some time , unstable to the next level of tearing and even smaller plasmoids are formed .the zoomed figures show that cascading reconnection has formed plasmoids at the smallest resolved scales : the -sizes of the largest and smallest resolved plasmoid in fig .[ fig : zoom1 ] range from down to , the -sizes are from to .plasmoids formed by tearing instability are not only subjected to the separation but they can also approach each other . as a result the magnetic flux piles - up and transversal ( i.e. horizontal , perpendicular to the original current layer ) current sheets are formed between pairs of plasmoids approaching each other .earlier simulations with lower effective resolution treated the plasmoid merging as a coalescence process without any internal structure of the small - scale ( sub - grid ) current sheet between the magnetic islands since their thickness was not resolved .if resolved , however , the transversal current sheet does not just dissipate .instead it is subjected to the tearing instability in the direction perpendicular to the primary current layer .this is shown in fig .[ fig : zoom2 ] .the most detailed resolution ( right - most panel ) clearly reveals the formation of the o - point at , and two adjacent x - points at , and , .we call this process `` fragmenting coalescence '' in order to emphasize that even smaller structures are formed during the merging of two plasmoids .thus both the tearing and ( fragmenting ) coalescence processes contribute to the fragmentation of the original thick and smooth current layer . in order to study scaling properties of the continued fragmentation of the magnetic structures associated with the current layer we performed both a 1d fourier and a wavelet analysis of the magnetic field along the vertical axis \}$ ] .we use for this study the component since there due to the boundary condition .the results are shown in fig . 
[fig : scaling ] .the upper panel shows the magnetic field and current density in the sub - set of the entire computation domain ( , note the rotated view ) , where the current layer is fragmented .panel ( b ) shows the profile of along the current - layer axis , and panels ( c ) and ( d ) the fourier and wavelet analyses of this profile .the fourier power spectrum exhibits a power - law scaling with the spectral index in rather wide range of scales 300 km 10000 km .this clearly indicates cascading nature of the continued fragmentation .the energy - transfer cascade ends at km in fig .[ fig : scaling ] .this is closely related to the dissipation threshold that has been chosen as in our simulation . by selecting this value we shifted dissipation - scale domain into the window of resolved scales see the discussion in section [ sect : discussion ] .typical width of dissipative current sheets in our model is thus km . since the plasmoid dimensions along the cs are about one order of magnitude ( typically 6 ) larger than across cs , distribution of magnetic energy in structures along cs , which is depicted in fig .[ fig : scaling ] , reaches its dissipation scale at km . in reality , the ion inertial length is considered as a typical width of dissipative current sheets .its value for parameters and used in this paper is m in the simulation - box center , i.e. at ( see fig . [fig : transfer ] ) .the most pronounced features in the wavelet spectrum are the locations of low signal ( the white islands ) .they correspond to the filamented parts of the current layer between plasmoids .their distribution indicates , that the filamented current sheets are embedded within the global current layer in a hierarchical ( qualitatively self - similar ) manner .the current sheets are filamented down to the resolution limit of our simulations ( in reality to the kinetic scales ) .the smallest current - density structures contain dissipative / acceleration regions . in the followingwe will study the structure , distribution and dynamics of these non - ideal regions embedded in the global current layer .cascading reconnection and consequent fragmentation of the current layer may have significant impact also for particle acceleration in solar flares . instead of a single diffusion region assumed in the classical picture of the solar reconnection ,cascading fragmentation causes the formation of large amount of thin non - ideal channels .the structuring of non - ideal regions in our simulation is depicted in fig .[ fig : dissip ] .the left panel shows two areas of dissipation around , and , . a closer look ( right panel , note the large zoom ) , however , reveals that the bottom dissipation region is structured and it is in fact formed by two regions of finite magnetic diffusivity that are associated with two x - points at , and , interleaved with a ( micro ) plasmoid .the multiple dissipative regions embedded in the global current layer are favorable for efficient ( and possibly multi - step ) particle acceleration . 
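One way to locate and classify the in-plane X- and O-points associated with these non-ideal regions (and used for the null-point tracking of fig. [fig:nulls] below) is through the flux function, whose extrema mark O-points and whose saddles mark X-points. The routine below is an assumed reconstruction for a uniform 2D grid, not the authors' actual implementation.

```python
import numpy as np

def classify_nulls(psi, tol=1e-3):
    """Locate and classify in-plane nulls of the flux function psi(x, z).

    O-points (plasmoid centres) are local extrema of psi, X-points are saddles:
    grid points where the gradient (almost) vanishes and the Hessian determinant
    is positive (O) or negative (X). Centred differences on a uniform grid are
    used; a production version would refine the candidates by locating gradient
    sign changes and interpolating to sub-grid accuracy.
    """
    dpsi_dx, dpsi_dz = np.gradient(psi)
    d2xx = np.gradient(dpsi_dx)[0]
    d2xz = np.gradient(dpsi_dx)[1]
    d2zz = np.gradient(dpsi_dz)[1]

    grad_mag = np.hypot(dpsi_dx, dpsi_dz)
    near_null = grad_mag < tol * grad_mag.max()
    hess_det = d2xx * d2zz - d2xz ** 2

    x_points = np.argwhere(near_null & (hess_det < 0))
    o_points = np.argwhere(near_null & (hess_det > 0))
    return x_points, o_points
```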
at the same time they provide a natural explanation of _ fragmented energy release _ as it has been inferred from hxr and radio observations .since they are embedded in the large - scale current layer the classical well - organized global picture of eruptions is kept simultaneously .[ fig : dissip ] shows that the x - points formed in the thinned current sheets between magnetic islands are connected with the thin channels of magnetic diffusivity .hence it is appropriate to study the distribution and dynamics of these non - ideal regions by means of tracking the x - points associated with them .we present such analysis in fig .[ fig : nulls ] . in order to see the `` skeleton '' of the reconnection dynamics we followed the positions of all magnetic `` null '' points during the entire recorded interval .the results show the kinematics of the o - type ( red circles ) and x - type points .motivated by our endeavour to establish a closer relation of the model to observable quantities in our consecutive study we also paid special attention to the magnetic connectivity of the x - points to the bottom boundary . for this sake , the x - points connected to the model base (= the photosphere ) are painted as green asterisks while the unconnected x - points are displayed as black crosses . since we are interested in the cascade that already developed into the stage reaching the smallest resolved structures ( ) , many x- and o- points connected with the larger - scale ( and therefore longer - living ) structures ( plasmoids ) are already formed and their space - time trajectories enter fig .[ fig : nulls](a ) from the bottom . fig .[ fig : nulls ] thus shows mainly full life - cycles of the x- and o- points related to the smallest resolved plasmoids .it is best visible in the three bottom panels ( b ) ( d ) that show zoomed views ( projected to the -plane ) of typical examples of null - point dynamics the creation of temporary x o pairs ( panels ( b ) and ( d ) ) and plasmoid merging ( c ) . as it can be seen from panels ( b ) and ( d ) the x - points can become magnetically connected ( the right x - point in panel ( d ) ) or disconnected ( panel ( b ) ) to / from the bottom boundary during their lifetime .note also the splitting ( and subsequent merging ) of the x - point at , into an x - o - x configuration between and in panel ( a ) .this process maps the tearing in the transversal ( horizontal ) current sheet formed between interacting plasmoid and the loop - arcade ( see also fig . [fig : zoom2 ] ) . note that fig .[ fig : nulls ] can be compared with fig . 5 in .the main difference is just in the presence of the off - plane x - points formed by the fragmentation of the cs between coalescing plasmoids in our simulation .reconnection in the trailing current layer behind an ejected flux - rope ( filament ) is a key feature of the standard cshkp scenario of solar flares .a large amount of free magnetic energy is accumulated around this rather thick ( relative to plasma kinetic scales ) and very long layer .the thickness of this layer was estimated both from the observed brightening and based on a typical transversal dimensions of a filament .both ways one obtains the order of magnitude of km . on the other hand collisionless reconnection requires dissipation at very small scale , thin current sheets with typical width of the order of m in the solar corona .the fundamental question arises how the accumulated energy is transferred from large to small scales . 
or , in other words , what are the mechanisms of direct energy cascade in magnetic reconnection .we addressed this question using high - resolution amr simulation covering broad range of scales to investigate the mhd dynamics of an expanding current layer in the solar corona .our simulations reveal the importance of a continued fragmentation of the current layer due to the interaction of two basic processes : the tearing instability of stretched current sheets and the fragmenting coalescence of flux - ropes / plasmoids formed by the tearing and subsequently forced to merge by the tension of ambient magnetic field .after ejection of the primary flux - rope ( i.e. the filament / cme ) , a trailing current layer is formed behind it which becomes long and thins down . as it has been pointed out by theoretical analysis by , current layers with high enough length - to - width ratio become unstable for fast plasmoid instability .moreover , any irregularity in the plasma inflow that stretches the sheet facilitates the tearing .plasmoids that are formed are subjected to the tension of ambient magnetic field , which causes them to move .the motion can lead to their increasing separation .a secondary current layer then formed between them becomes , again , stretched and a secondary tearing instability can take place .this simulation result , illustrated by fig .[ fig : zoom1 ] , fully confirms the scenario suggested by , developed further by and into the analytical theory of chain plasmoid instability .the results are also in qualitative agreement with the simulations of plasmoid instability by and , which has been performed , however , with constant resistivity . in addition to that, our simulation has shown that the converging motion of plasmoids leads to a magnetic - flux pile - up between mutually approaching plasmoids .consequently , secondary ( oppositely directed ) current sheets are formed perpendicular to the original current layer . while previous studies found only unstructured current density pile - ups between merging magnetic islands our enhanced - by - amr resolution reveals secondary tearing mode instabilities that take place in the transversal to the primary current sheet direction ( see fig .[ fig : zoom2 ] ) .this process represents a new mechanism of fragmentation and changes our view to coalescence instability , which has been hitherto commonly considered as a simple merging process of two plasmoids contributing to the inverse energy cascade only . note that this behavior is different from that seen for plasmoids at the dissipation scales in pic simulations , where plasmoids merge without subsequent tearing .one can suppose that with even higher spatial resolution one would see more subsequent tearing mode instabilities altering with the fragmenting coalescence of the resulting magnetic islands / flux - ropes . as a result third- and higher order current sheets could form . to sum up , the results of our simulation support the idea that both the tearing and `` fragmenting coalescence '' processes lead to the formation of consecutively smaller magnetic structures ( plasmoids / flux - ropes ) and associated current filaments .subsequent stretching and compression cause a filamentation of the current .this situation is schematically depicted in fig .[ fig : fragmentation ] which can be seen as a generalization of the scheme in fig . 6 . 
in .one can expect that this cascade will continue down to the scales where the magnetic energy is , finally , dissipated .note that the physics and the corresponding scaling laws may change at intermediate ( but still relatively small ) scales when additional contributions to the generalized ohm s law become significant , e.g. a hall term see recent simulations by and .reconnection in the current sheet between merging plasmoids is fast since it is driven by ambient - field magnetic tension which naturally pushes the flux - ropes together .thus even shorter time - scales can be reached by this process than by tearing cascade in the stretched cs . and yet another point makes the overall reconnection process more efficient : many magnetic - flux elements except those ejected out - wards to the escaping cme reconnect several times .first , during the primary tearing and plasmoid formation and then again during plasmoid coalescence . since coalescence leads to a follow - up tearing instability ( in the transversal direction ) the remaining magnetic flux is subjected to another act of magnetic reconnection .this process resembles the _ recurrent separator reconnection _ simulated by . to some extentthe initial situation of global , smooth and relatively thick sheets is similar to the turbulence on - set in a sheared flow in ( incompressible ) fluid dynamics ( fd ) as schematically shown in fig .[ fig : analogy ] .usually the typical length - scale of shear flows the counterpart of the width of current layers is much larger than the dissipative ( molecular ) scale .the mechanism of energy transfer from large to small scales in classical fd is mediated by a cascade of vortex tubes : large - scale vortices formed by shear flows can mutually interact giving rise to increased velocity shear at the smaller scales in the space between them .each small shear flow element formed by this process can be , again , subjected to this fragmentation .based on our simulation results , we suggest a similar scenario for current - layer fragmentation .the role of the vortex - tubes in fd is in mhd taken over by flux - ropes / plasmoids . in analogy with the on - set of turbulence in sheared flows, one could expect that a dynamical balance would arise between fragmentation and coalescence processes in later more developed stage .this should be manifested by a power - law scaling rule .using amr we reached a rather broad ( five orders of magnitude ) range of scales .this allowed us to perform a 1d scaling analysis of the magnetic - field structures formed along the current layer for the first time .the scaling rule found exhibits , indeed , a power - law distribution with the index ( fig .[ fig : scaling ] ) . 
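The 1D spectral estimate quoted above can be sketched as follows; the field samples, the fitting window and the routines actually used by the authors are not available, so this numpy-based version (with a synthetic placeholder signal) only illustrates how such a power-law index would be obtained.

```python
import numpy as np

def spectral_index(b, dz, k_min, k_max):
    """Least-squares power-law index of the 1D Fourier power spectrum of b(z)."""
    power = np.abs(np.fft.rfft(b - b.mean())) ** 2
    k = np.fft.rfftfreq(b.size, d=dz)
    sel = (k >= k_min) & (k <= k_max)
    slope, _ = np.polyfit(np.log(k[sel]), np.log(power[sel]), 1)
    return slope

# Hypothetical usage with a synthetic placeholder standing in for the sampled
# magnetic-field component along the current-layer axis.
z = np.linspace(0.0, 100.0, 4096)
b = np.random.randn(z.size)
print("fitted spectral index:",
      spectral_index(b, dz=z[1] - z[0], k_min=0.1, k_max=5.0))
```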
since our resolution still does not allow to make this scaling analysis only within the small selected subdomain around the cs center where one could expect isotropic turbulence ( we would lack sufficient range of scales for that ) it is difficult to compare the spectral index found over the whole ( clearly anisotropic ) simulation domain with the values expected from the theory for fully developed isotropic turbulence .obtained power - law distribution is also in qualitative agreement with the concept of fractal reconnection by and with hierarchical analytical model of plasmoid instability as described by .they use distribution functions for plasmoid width and contained flux in order to characterize statistical properties of plasmoid hierarchy rather than power spectrum .we plan to perform similar analysis of our results in the future study in order to compare the results also quantitatively . in order to obtain as broad as possible scale range in the plane where reconnection occurs , we performed these simulations using 2.5d approach .the question arises to what extent a full-3d treatment would change the resulting picture . in the fd cascade vorticesare deformed , their cross - sections change along their main axis , even in the topological sense . the object defined as a single vortex tube in one placecan be split into two in another location .one can expect a similar behavior of plasmoids / flux - tubes in mhd .there they could be subjected to the kink and similar instabilities with .such processes would naturally lead to the modulation of reconnection rate along the pil .observations indicating such effect have already been presented . to some extentthe expected behavior can also be obtained for kink instabilities of tiny current channels at the dissipative scale studied by 3d pic simulations .nevertheless , found no such evidence in their 3d amr mhd simulation , which uses , however , a different set - up .sizes of the plasmoids formed under 3d perturbation in the direction that corresponds to the invariant -axis in our 2.5d case have been found very short preventing a kink - like instabilities to develop . on the other hand, the resulting plasmoid lengths might depend on the initial guide field . to sum up, the answer to this question can be found only via full 3d simulations with similar initial set - up as we used here .therefore we plan to extend our current 2.5d simulations with very high in - plane resolution with moderately resolved structuring in the third dimension .cascading fragmentation of the current layer is closely related to another puzzling question of current solar flare research the apparent contradiction between observed regular large - scale dynamics and signatures of fragmented energy release in ( eruptive ) flares .this duality is reflected by two classes of flare models : the classical cshkp scenario based on magnetic reconnection in a single global flare current sheet and the class of `` self - organized criticality '' ( soc ) models based on the avalanche of small - scale reconnection events in multiple current sheets formed as a consequence of either chaotic or regular but still complex boundary motions causing , e.g. , magnetic braiding .the model of cascading reconnection has the potential to provide a unified view on these seemingly very different ( see the discussion in the next paragraph ) approaches . from the global point of view, it coincides with the classical cshkp model keeping the regular picture of the process at large scales . 
at the same time , due to the _ internal _ current - layer fragmentation the tearing / coalescence cascade forms multiple small - scale current sheets and potential diffusive regions . as a consequence , fragmented energy release , e.g. , by particle acceleration , can take place in these tiny regions .to some extent this finding can be seen as a follow - up of the bursty reconnection regime found by .these authors show that intermittent signal ( x - ray , radio ) can be related to chaotic pulses of the ( resistive ) electric field in the dissipation region around x - point .the pulsed regime is a consequence of non - linear interplay between governing mhd equations and the anomalous resistivity model . in our view , however , these pulses result from the subgrid physics unresolved in earlier simulations .essentially , what has been seen as a single dissipative region around a single x - point in the coarse - grid models is in fact a ( `` fractal - like '' ) set of non - ideal areas around multiple x - points ( see fig .[ fig : dissip ] ) that interleave very small - scale mutually interacting plasmoids .this view based on high - resolved simulation is perhaps closer to the term of `` fragmented energy release '' that assumes the energy dissipation to be performed via many concurrent small - scale events appearing in multiple sites distributed in space . in this contextit is interesting to note how surprisingly well the phenomenological resistivity model ( eq . [ eq : eta ] ) used by mimics the sub - grid scale physics as it has been able to reproduce qualitatively temporal behavior of resistive electric field even without resolving actual processes that are responsible for it .note also that a possible role of tearing and coalescence in fragmentation of the energy release in solar flares has been mentioned already by .we would like to emphasize that there is a fundamental difference between the fragmented energy release by cascading reconnection and soc models .it is rooted in the fact that in cascading reconnection the complexity / chaoticity is due to an intrinsic current - layer dynamics , i.e. , due to spontaneous fragmentation , while it is introduced in soc models through ( external ) boundary conditions ( chaotic boundary motions ) .in fact , these two concepts are contrary to some extent : while in soc - based models the global flare picture is built as an avalanche of many small - scale events ( bottom process in the scale hierarchy ) in cascading reconnection small scale structures are formed as a consequence of internal dynamics of large - scale cs ( top process ) .fragmented energy release is closely related to the number problem of particles accelerated in solar flares .a single diffusion region assumed in the cshkp model provides a far too small volume for accelerating strong fluxes of particles as they are inferred from the hxr observations .this argument has been used in favor of soc - based models as they provide energetic - particle spectra and time - profile distributions as observed and explain large energetic particle fluxes .we suggest that , however , the inclusion of cascading reconnection into the cshkp has even more capabilities than soc models .it could explain both the distribution and the number of accelerated particles , based on a physical consideration of many small - scale current sheets which can host tiny diffusive channels that all can act as the acceleration regions ( see fig .[ fig : dissip ] ) . 
hereit is appropriate to make one technical comment : in an mhd simulation with resistivity model described by eq .( [ eq : eta ] ) , respectively its dimension - less version , the size and the number of diffusive regions are controlled mostly by the threshold for the onset of ( anomalous ) diffusivity .the higher is chosen , the thinner the current sheets can become , the smaller and more numerous are the embedded diffusion regions . since one has to resolve these diffusive regions in the simulation, one has to choose the threshold low enough to be able to resolve the dissipation regions appropriately . in ideally resolved simulations , covering all scales down to the real physical dissipation length, the critical velocity could be chosen of the order of physically relevant value the electron thermal speed . in dimensionless unitsthis corresponds to , where and are the proton and electron masses . for a technically limited spatial resolution ,one has to choose a ( much ) smaller value of in order to resolve the smallest possible current sheets , before dissipation sets in , by a reasonable number of grid points .since the resolution in our current amr simulation is higher than in earlier models , we could choose a more reasonable value of .this allowed us to track down more fragmented , smaller reconnection regions .if we extrapolate this trend , we can expect that with even higher resolution one would find even more and tinier diffusive regions .they would be grouped hierarchically ( self - similarly ) , occupying a sub - space of the global current layer .such kind of distribution is indicated in the wavelet spectra ( white islands in fig .[ fig : scaling](d ) ) , and also by the positions and motion of the associated x - points in fig .[ fig : nulls ] .the latter shows a structured grouping of `` null points '' and their various life times .our simulation has shown that cascading reconnection due to the formation and fragmenting coalescence of plasmoids / flux - ropes is a viable physical model of fragmented magnetic energy release in large - scale systems , like solar flares .cascading reconnection addresses at once three key problems of the current solar - flare research : the scale - gap between energy - accumulation and dissipation scales , the duality between regular global - scale dynamics and fragmented energy - release signatures observed simultaneously in solar flares , and the issue of particle acceleration .all these problems arising from observations appear to be tightly related via cascading reconnection . in order to evaluate relevance of the cascading reconnection for actual solar flaresfurther it is desirable , however , to identify and predict model - specific observables and to search for them in observed data .we are going to propose possible specific signatures and compare them with observations in a consecutive paper .this research was performed under the support of the european commission through the solaire network ( mtrn - ct-2006 - 035484 ) and the grant p209/10/1680 of the grant agency of the czech republic , by the grant 300030701 of the grant agency of the czech academy of science and the research project av0z10030501 of astronomical institute of the czech academy of science .the authors thank to dr .antonius otto for inspirational discussions and to unknown referee for valuable comments that helped to improve the quality of the paper . , l. 
2007, in Lecture Notes in Physics, Vol. 725, "Magnetic Complexity, Fragmentation, Particle Acceleration and Radio Emission from the Sun", ed. Klein & A. L. MacKinnon (Berlin: Springer Verlag), 1531
|
magnetic reconnection is commonly considered as a mechanism of solar ( eruptive ) flares . a deeper study of this scenario reveals , however , a number of open issues . among them is the fundamental question , how the magnetic energy is transferred from large , accumulation scales to plasma scales where its actual dissipation takes place . in order to investigate this transfer over a broad range of scales we address this question by means of a high - resolution mhd simulation . the simulation results indicate that the magnetic - energy transfer to small scales is realized via a cascade of consecutive smaller and smaller flux - ropes ( plasmoids ) , in analogy with the vortex - tube cascade in ( incompressible ) fluid dynamics . both tearing and ( driven ) `` fragmenting coalescence '' processes are equally important for the consecutive fragmentation of the magnetic field ( and associated current density ) to smaller elements . at the later stages a dynamic balance between tearing and coalescence processes reveals a steady ( power - law ) scaling typical for cascading processes . it is shown that cascading reconnection also addresses other open issues in solar flare research such as the duality between the regular large - scale picture of ( eruptive ) flares and the observed signatures of fragmented ( chaotic ) energy release , as well as the huge number of accelerated particles . indeed , spontaneous current - layer fragmentation and formation of multiple channelised dissipative / acceleration regions embedded in the current layer appears to be intrinsic to the cascading process . the multiple small - scale current sheets may also facilitate the acceleration of a large number of particles . the structure , distribution and dynamics of the embedded potential acceleration regions in a current layer fragmented by cascading reconnection are studied and discussed .
|
theano was introduced to the machine learning community by as a cpu and gpu mathematical compiler , demonstrating how it can be used to symbolically define mathematical functions , automatically derive gradient expressions , and compile these expressions into executable functions that outperform implementations using other existing tools . then demonstrated how theano could be used to implement deep learning models . in section [ sec :main_features ] , we will briefly expose the main goals and features of theano .section [ sec : new_in_theano ] will present some of the new features available and measures taken to speed up theano s implementations .section [ sec : benchmarks ] compares theano s performance with that of torch7 on neural network benchmarks , and rnnlm on recurrent neural network benchmarks .here we briefly summarize theano s main features and advantages for machine learning tasks . , as well as theano s website have more in - depth descriptions and examples .theano includes powerful tools for manipulating and optimizing graphs representing symbolic mathematical expressions . in particular ,theano s _ optimization _ constructs can eliminate duplicate or unnecessary computations ( e.g. , replacing by , obviating the need to compute in the first place ) , increase numerical stability ( e.g. , by substituting stable implementations of when is tiny , or ) , or increase speed ( e.g. , by using loop fusion to apply a sequence of scalar operations to all elements of an array in a single pass over the data ) .this graph representation also enables symbolic differentiation of mathematical expressions , which allows users to quickly prototype complex machine learning models fit by gradient descent without manually deriving the gradient , decreasing the amount of code necessary and eliminating several sources of practitioner error .theano now supports forward - mode differentiation via the r - operator ( see section [ sec : r - op ] ) as well as regular gradient backpropagation .theano is even able to derive symbolic gradients through loops specified via the scan operator ( see section [ sec : scan ] ) .theano s dependency on numpy and scipy makes it easy to add an implementation for a mathematical operation , leveraging the effort of their developers , and it is always possible to add a more optimized version that will then be transparently substituted where applicable .for instance , theano defines operations on sparse matrices using scipy s sparse matrix types to hold values .some of these operations simply call scipy s functions , other are reimplemented in c++ , using blas routines for speed .theano uses cuda to define a class of -dimensional ( dense ) arrays located in gpu memory with python bindings .theano also includes cuda code generators for fast implementations of mathematical operations .most of these operations are currently limited to dense arrays of single - precision floating - point numbers .theano s development team has increased its commitment to code quality and correctness as theano usage begins to spread across university and industry laboratories : a full test suite runs every night , with a shorter version running for every pull request , and the project makes regular stable releases .there is also a growing community of users who ask and answer questions every day on the project s mailing lists .this section presents features of theano that have been recently developed or improved .some of these are entirely novel and extend the scenarios in which theano can be 
used ( notably , scan and the r operator ) ; others aim at improving performance , notably reducing the time not spent in actual computation ( such as python interpreter overhead ) , and improving parallelism on cpu and gpu .theano offers the ability to define symbolic loops through use of the _ scan op _ , a feature useful for working with recurrent models such as recurrent neural networks , or for implementing more complex optimization algorithms such as linear conjugate gradient .scan surmounts the practical difficulties surrounding other approaches to loop - based computation with theano .using theano s symbolically - defined implementations within a python loop prevents symbolic differentiation through the iterative process , and prevents certain graph optimizations from being applied . completely unrollingthe loop into a symbolic chain often leads to an unmanageably large graph and does not allow for `` while''-style loops with a variable number of iterations .the _ scan _ operator is designed to address all of these issues by abstracting the entire loop into a single node in the graph , a node that communicates with a second symbolic graph representing computations inside the loop . without going into copious detail , we present a list of the advantages of our strategy and refer to section [ sec : benchmark_scan ] where we empirically demonstrate some of these advantages . tutorials available from the theano website offer a detailed description of the required syntax as well as example code . 1 .scan allows for efficient computation of gradients and implicit `` vector - jacobian '' products .the specific algorithm used is _ backpropagation through time _ , which optimizes for speed but not memory consumption .2 . scan allows for efficient evaluation of the r - operator ( see ) , required for computing quantities such as the gauss - newton approximation of hessian - vector products .the number of iterations performed by scan can itself be expressed as a symbolic variable ( for example , the length of some input sequence ) or a symbolically specified condition , in which case scan behaves as a `` do while '' statement . if the number of steps is fixed and equal to 1 , the scan node is `` unrolled '' into the outer graph for better performance .any loop implemented with scan can be transparently transferred to a gpu ( if the computation at each iteration can itself be performed on the gpu ) .the body of scan ( which involves computing indices of where to pick input slices and where to put the output of each iteration ) is implemented with cython to minimize the overhead introduced by necessary bookkeeping between each iteration step . 6 .whenever possible , scan detects the amount of memory necessary to carry out an operation : it examines intermediate results and makes an informed decision as to whether such results are needed in subsequent iterations in order to partially optimize memory reuse .this decision is taken at compilation time .loops represented as different scan instances are merged ( given that certain necessary conditions are respected , e.g. , both loops perform the same number of steps ) .this aids not only in reducing the overhead introduced by each instance of scan , but also helps optimize the computation performed at each iteration of both loops , e.g. 
certain intermediate quantities may be useful to the body of each individual loop , and will be computed only once in the merged instance .finally , whenever a computation inside the loop body could be performed outside the loop , scan moves said computation in the main graph .for example element - wise operations are moved outside , where , given that they are done by a single call to an elementwise operations , one can reduce overhead .another example is dot products between a vector and a matrix , which can be transformed outside of the loop into a single matrix - matrix multiplication .such optimizations can lead to significant speed improvement and in certain cases to the elimination of the scan node completely .all of these features make it easier for a user to implement a variety of recurrent neural networks architectures , and to easily change the equations of the model without having to derive gradients by hand or worry about manually optimizing the implementation .recent results proposed a specific pipeline for efficiently implementing truncated newton - like second - order methods such as hessian - free optimization .the pipeline relies on the `` r - operator '' , introduced by , which is a mathematical operator that given a function , the current parameter configuration and a vector , efficiently computes the `` jacobian - vector '' product , where is the jacobian of the function evaluated at . for the sake of completeness, we would mention that the `` r - operator '' evaluates the directional derivative of , and is known in the automatic differentiation community as the _forward mode_. this operation can be seen analogous to the _ backward mode _ or backpropagation , which computes the `` vector - jacobian '' product , where is some row vector .theano offers efficient computation of both operators by employing the chain rule on the computational graph , where each operational node knows how to compute the product of its jacobian and some vector in an efficient way . because the output of any such operation is a symbolic graph , the computations get further optimized at compilation time .this provides flexibility in writing down the computations that represent a model , without worrying about details that would lead to faster gradients , or faster `` jacobian - vector '' products .for example , let us consider a complicated model , a recurrent network and the task of computing the gauss - newton approximation of the hessian times some vector ( which lies at the heart of the hessian - free algorithm ) .a naive implementation would imply at least three passes through the loop , once for evaluating the function , the second one to backpropagate the gradient ( reverse - mode ) and the third time to compute the `` jacobian - vector '' dot product involved in the equation .a more careful implementation however reveals that two passes should be sufficient ( see ) . by simply calling _tt.lop(f , , tt.rop(f , , ) ) _ , theano is able to figure out the relationship between the different loops , resulting in only two passes .when a compiled theano function is called , a runtime engine orchestrates which operations should be performed on which data , calling the appropriate functions in the right order .this was previously implemented as a python loop , calling either native python functions or c functions made available through a python module interface , in a pre - determined order ( i.e. 
, a forward traversal of the computational graph , from inputs to outputs ) .the main drawback of this approach is that it was impossible to implement lazy evaluation in the computational graph .for instance , the `` if - then - else '' construct would always compute the result of both `` then '' and `` else '' branches , as well as the condition , before updating its output value .a new runtime , dubbed the `` vm '' ( for `` virtual machine '' , because it drives the execution of small code units ) , enables lazy evaluation of such operations , meaning that we evaluate only branches that are actually necessary for correct computation of the output .a c implementation of the vm was also added ( dubbed the `` cvm '' ) . beyond the performance advantage inherent in running the loop itself in c, the cvm also avoids the performance penalty of returning control to the python interpreter after each operation : if a c implementation of a given operation is available , the cvm will execute it directly without the overhead of a python function call .the performance gain is particularly significant for graphs that perform many operations on relatively small operands .in particular , if all operations used in a compiled theano function have c implementations , the entirety of the cvm s execution will be performed at c speed , returning control to the python interpreter only after all outputs have been computed .the cvm is now the default runtime . to derive fuller benefit from the existence of the cvm, we have added new c implementations of existing operations ( even when python implementations were almost as efficient ) in order to avoid context switches .for instance , matrix - vector dot products on cpu had previously resulted in a call to a scipy function that wraps the gemv routine from blas .we have since added a wrapper in c that calls the gemv routine directly .in addition to dense tensors , theano supports sparse matrices based on scipy s implementations of compressed sparse row ( csr ) and compressed sparse column ( csc ) formats .support for efficient sparse operations , in particular operations needed to compute derivatives of sparse operations , has been greatly improved .the online documentation lists currently supported operations .theano supports two kinds of gradient computation through sparse matrices .`` regular '' differentiation does not suppose that the sparsity structure of a matrix at a given time is preserved , and thus a sparse variable may have a dense gradient .`` structured '' differentiation considers the sparsity structure of a matrix as permanent , and the gradient with respect to that matrix will have the same sparsity structure . in the past, not much effort had been put into allowing theano to leverage multi - core cpu architectures for parallel execution ; development effort was instead focused on gpu implementations and new automatic optimizations .multi - core parallelism was therefore only available to operations that called into a parallel blas implementation . showed that using openmp to parallelize the c implementation of cpu operations can bring substantial speed improvements with relatively little development effort .we recently added support for openmp - enabled operations in theano , and used this support to parallelize 2-dimensional convolution . 
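As a concrete example of the lazy evaluation enabled by the new runtime described at the start of this section, the ifelse construct computes only the branch selected at run time, unlike the element-wise switch which evaluates both. A minimal sketch (the matrix sizes are arbitrary):

```python
import numpy as np
import theano
import theano.tensor as T
from theano.ifelse import ifelse

a, b = T.scalars('a', 'b')
x, y = T.matrices('x', 'y')

# T.switch would evaluate both branches element-wise; ifelse is lazy, so only
# the branch selected by the condition is computed when the function runs.
z = ifelse(T.lt(a, b), T.mean(x), T.mean(y))
f = theano.function([a, b, x, y], z, allow_input_downcast=True)

big = np.ones((2000, 2000), dtype=theano.config.floatX)
print(f(1.0, 2.0, big, big))   # only the x-branch is actually evaluated
```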
adding parallel implementations for other operations will proceed more rapidly with this infrastructure in place .when executing cuda kernels on the gpu , the function call that starts it does not wait for the execution of the kernel to complete .instead , it will merely schedule the kernel to be executed at some point in the future , allowing the main program to perform other tasks , including scheduling other gpu kernels .when the result of the gpu computation is needed , the program can wait for the end of the kernel to execute , and return its result .before release 0.6 , theano always waited for the result of the kernel computation as soon as it was launched , effectively preventing the execution of other operations on the cpu during this time .this approach eases profiling and debugging because at any given time , it is clear which gpu kernel is currently being executed , and error messages are retrieved as soon as possible ; however , such an approach prohibits the concurrent use of cpu - based computation , passing up an opportunity for further speed gains .the new default behaviour of theano is not to wait on the result of gpu computation until it is strictly needed .it is also possible to revert to the previous behaviour , which is useful for profiling execution time of the different gpu kernels . showed that theano was faster than many other tools available at the time , including torch5 . the following year, showed that torch7 was faster than theano on the same benchmarks . herewe briefly introduce torch7 and evaluate performance of their latest versions on neural network tasks , using the aforementioned benchmarks . then , section [ sec : benchmark_scan ] will compare the performance of theano against another package , rnnlm , when training recurrent neural networks .torch7 is advertised as a matlab - like environment for machine learning .it aims to ease development of numerical algorithms and to allow for their fast execution , while also being easy to extend .table [ theano - torch7-feature ] provides a summary comparison of the features provided by torch7 ( including the ones inherited from lua ) and theano ( including the ones coming from python and numpy / scipy ) .this section exposes the common features and differences between torch7 and theano .[ theano - torch7-feature ] theano and torch7 are two computing frameworks that were developed for the machine learning community , to make it easier to quickly implement and test new mathematical models and algorithms , without giving up the execution speed that a manually - optimized implementation would provide .both are the foundation of machine learning specific packages or projects , notably for neural networks and unsupervised learning .like theano , torch7 is based on a scripting language ( lua ) , uses heavily - optimized scientific computation libraries ( for instance , blas and lapack for linear algebra computations ) , and internal modules written in c / c++ , for the sections where execution speed is critical .it also has the capability of running parallel computation on multi - core cpus ( via openmp ) , and on gpus via cuda .both have access to a matlab - like environment : torch7 includes modules for tensor manipulations and plotting , while theano benefits from various external python libraries to perform those tasks ( notably scipy , numpy , matplotlib , ipython ) .some of torch7 s strengths stem from lua s advantages over python : lower interpreter overhead , simpler integration with c code , easy embedding 
in a c application .in particular , since the overhead of calling a function and executing c code are lower , higher performance will result in the case of simple functions ( that do not perform large amounts of computation ) and functions that process only a small quantity of data at a time . parallelism on multi - core cpus is another important feature of torch7 , as it was designed to use open multi - processing ( openmp ) parallel directives , notably in the tensor and neural network modules .the potential for cpu parallelization ( outside calls to blas ) in theano has only started to be explored .theano s distinguishing feature is its powerful engine for graph optimization and symbolic differentiation , mentioned in section [ sec : symbolic_math ] .the downside is that users are faced with a more complex workflow : first , define an abstract mathematical graph ( without values ) , then optimize and compile it into a callable function , and finally execute the function .this additional complexity also makes it harder to interpret errors that may be raised during the compilation or execution phase .the experiments reported here were conducted on a machine with an intel core i7 cpu 930 @ 2.80ghz , and a nvidia gtx480 gpu .the commit i d of theano s version was ` 254894fac ` , torch7 was ` 41d3b8b93 ` .as the multi - layer perceptron ( mlp ) examples in the benchmarks rely on function calls to a blas library , we made sure the same blas library was used for both torch7 and theano , in order to ensure a fair comparison .we benchmarked the gemm routine ( matrix - matrix dot product , scaling and accumulation ) , with matrix sizes large enough that any overhead becomes negligible , for a number of openmp threads limited to 1 and 4 , confirming that both tools are linked to the same blas library , and that controlling the number of openmp threads works as expected .+ in figure [ fig : benchmark ] , the left - most blue bar ( lightest shade of blue ) in each of the bar groups shows the performance of a theano function with the default configuration . that default configuration includes the use of the cvm ( section [ sec : cvm ] ) , and asynchronous execution of gpu ops ( section [ sec : async_gpu ] ) .this section shows ways to further speed up the execution , while trading off other features .[ [ disabling - garbage - collection ] ] disabling garbage collection + + + + + + + + + + + + + + + + + + + + + + + + + + + + we can save time on memory allocation by disabling garbage collection of intermediate results .this can be done by using the linker ` cvm_nogc ` . in this case, the results of intermediate computation inside a theano function will not be deallocated , so during the next call to the same function , this memory will be reused , and new memory will not have to be allocated .this increases memory usage , but speeds up execution of the theano function . 
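A sketch of how the garbage-collection trade-off described above might be selected when compiling a function is given below; the exact way of specifying the linker can differ between Theano versions, so the Mode construction should be treated as indicative rather than definitive.

```python
# From the shell one can equivalently set: THEANO_FLAGS='linker=cvm_nogc'
import theano
import theano.tensor as T
from theano.compile.mode import Mode

x = T.vector('x')
y = T.tanh(T.exp(x) + x ** 2).sum()

# Keep intermediate buffers alive between calls: more memory, less allocation time.
f_nogc = theano.function([x], y, mode=Mode(linker='cvm_nogc'))

# Default behaviour: intermediate results are garbage-collected after each call.
f_default = theano.function([x], y)
```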
in figure[ fig : benchmark ] , the second - to - left blue bar shows the impact of disabling garbage collection .it is most important on the gpu , because the garbage - collection mechanism forces a synchronization of the gpu threads , largely negating the benefits of asynchronous kernel execution .[ [ removing - overhead - of - data - conversion ] ] removing overhead of data conversion + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when a theano function is called and the data type of the provided input is different from the expected one , a silent conversion is automatically done ( if no precision would be lost ) .for instance , a list of integers will be converted into a vector of floats , but floats will not be converted into integers ( an error will be raised ) .this is a useful feature , but checking and converting the input data each time the function is called can be detrimental to performance .it is now possible to disable these checks and conversions , which gives better performance when the input data is actually of the correct type .if the input data would actually need to be converted , then some exceptions due to unexpected data types will be raised during the execution . to disable these checks , simply set the ` trust_input ` attribute of a compiled theano function to true .the third blue bar on figure [ fig : benchmark ] shows the speed up gained with this optimization , including the ` cvm_nogc ` optimization .[ [ executing - several - iterations - at - once ] ] executing several iterations at once + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + when a theano function does not have any explicit input ( all the necessary values are stored in shared variables , for instance ) , we can save even more overhead by calling its fn member : ` f.fn ( ) ` .it is also possible to call the same function multiple consecutive times , by calling ` f.fn(n_calls=n ) ` , saving more time .this allows to bypass the python loop , but it will only return the results of the last iteration .this restriction means that it can not be used everywhere , but it is still useful in some cases , for instance training a learning algorithm by iterating over a data set , where the important thing is the updates to the parameters , not the function s output . the performance of this last way of calling a theano function is shown in the right - most , dark blue bar .figure [ fig : benchmark ] shows speed results ( in example per second , higher is better ) on three neural network learning tasks , which consists in 10-class classification of a 784-dimensional input .figure [ fig : mlp0h ] shows simple logistic regression , figure [ fig : mlp1h ] shows a neural network with one layer of 500 hidden units , and figure [ fig : mlp3h ] shows a deep neural network , with 3 layers of 1000 hidden units each .torch7 was tested with the standard lua interpreter ( pale red bars ) , and luajit , a lua just - in - time compiler ( darker red bars ) ; theano was tested with different optimizations ( shades of blue ) , described in section [ sec : speed ] . 
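The call-overhead reductions described in this section can be combined, as in the following sketch; the cost expression, data and learning rate are placeholders chosen only to make the example self-contained.

```python
import numpy as np
import theano
import theano.tensor as T

floatX = theano.config.floatX
w = theano.shared(np.random.randn(784, 10).astype(floatX), name='w')
x = theano.shared(np.random.randn(500, 784).astype(floatX), name='x')

cost = T.sqr(T.dot(x, w)).mean()
lr = np.asarray(0.01, dtype=floatX)

# All data lives in shared variables, so the compiled function takes no inputs
# and the parameter update is applied in place at every call.
train = theano.function([], cost, updates=[(w, w - lr * T.grad(cost, w))])

train.trust_input = True   # skip per-call input checking/conversion
train.fn(n_calls=100)      # run 100 updates without returning to the Python loop
print(train())             # one normal call to read back the current cost
```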
when not using mini - batches , on cpu , theano beats torch7 on the models with at least one hidden layer , and even benefits from blas parallel implementation .on the logistic regression benchmark , torch7 has the advantage , due to the small amount of computation being done in each call ( executing several iterations at once help , but not enough to beat luajit ) .torch7 also has an edge over theano on the gpu , when the batch size is one . when using mini - batches , whether of size 10 or 60 , theano is faster than torch7 on all three architectures , or has an equivalent speed .the difference vanishes for the most computationally - intensive tasks , as the language and framework overhead becomes negligible . in figure[ fig : rnn ] , we present a benchmark of a simple recurrent network , on theano and rnnlm , a c++ implementation of recurrent networks for language modeling .they were done with a batch size of 1 , which is customary with recurrent neural networks .while rnnlm is faster than theano on smaller models , theano quickly catches up for bigger sizes , showing that theano is an interesting option for training recurrent neural networks in a realistic scenario .this is mostly due to overhead in theano , which is a drawback of the flexibility provided for recurrent models .we presented recent additions to theano , and showed how they make it a more powerful tool for machine learning software development , and allow it to be faster than competing software in most cases , on different benchmarks .these benchmarks aim at exposing relative strengths of existing software , so that users can choose what suits their needs best .we also hope such benchmarks will help improving the available tools , which can only have a positive effect for the research community .we would like to thank the community of users and developpers of theano for their support , nserc and canada research chairs for funding , and compute canada and calcul qubec for computing resources .
|
theano is a linear algebra compiler that optimizes a user s symbolically - specified mathematical computations to produce efficient low - level implementations . in this paper , we present new features and efficiency improvements to theano , and benchmarks demonstrating theano s performance relative to torch7 , a recently introduced machine learning library , and to rnnlm , a c++ library targeted at recurrent neural networks .
|
a number of papers have considered in recent years , the feedback control of systems governed by the laws of quantum mechanics rather than systems governed by the laws of classical mechanics ; e.g. , see . in particular , the papers consider a framework of quantum systems defined in terms of a triple where is a scattering matrix of operators , is a vector of coupling operators and is a hamiltonian operator .all operators are on an underlying hilbert space .the paper considers a quantum system defined by a triple such that the quantum system hamiltonian is written as . here is a known nominal hamiltonian and is a perturbation hamiltonian , which is contained in a set of hamiltonians .the paper considers a problem of absolute stability for such uncertain quantum systems for the case in which the nominal hamiltonian is a quadratic function of annihilation and creation operators and the coupling operator vector is a linear function of annihilation and creation operators . such as nominal quantum systemis said to be a linear quantum system ; e.g. , see .however , the perturbation hamiltonian is assumed to be contained in a set of non - quadratic hamiltonians corresponding to a sector bounded nonlinearity .then , the paper obtains a frequency domain robust stability result .extensions of the approach of can be found in the papers in which similar robust stability results are of obtain for uncertain quantum systems with different classes of uncertainty and different applications to specific quantum systems . also , in the paper a problem of robust performance analysis as well as robust stability analysis is considered . in this paper, we extend the results of by considering a problem of robust performance analysis with a non - quadratic cost functional for the class of uncertain quantum systems of the form considered in .the motivation for considering robust performance of a quantum system with a non - quadratic cost function arises from the fact that the presence of nonlinearities in the quantum system allows for the possibility of a non - gaussian system state ; e.g. , see .such non - gaussian system states include important non - classical states such as the schrdinger cat state ( also known as a superposition state , e.g. , see ) .these non - classical quantum states are useful in areas such as quantum information and quantum communications ; e.g. , see . the presence of such non - classical states can be verified by obtaining a suitable bound on a non - quadratic cost function ( such as the wigner function , e.g. , see ) .our approach to obtaining a bound on the non - quadratic cost function is to extend the sector bound method considered in to bound both the nonlinearity and non - quadratic cost function together .it is important that these two quantities are bounded together since the non - gaussian state only arises due to the presence of the nonlinearity in the quantum system dynamics .then , by applying a similar approach to that in we are able to derive a guaranteed upper bound on the non - quadratic cost function in terms of an lmi problem . in order to illustrate this result ,it is applied to an example of a quantum system consisting of a josephson junction in an electromagnetic cavity .the robust stability of a similar system was previously considered in the paper . in this paper , we consider the robust performance of this system with respect to a non - quadratic cost functional. 
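to give a rough sense of what the lmi computation referred to above looks like in practice , the following python sketch solves a toy lyapunov - type lmi with cvxpy ; the matrix data , the objective and the structure of the inequality are placeholders chosen for illustration and are not the inequality derived in this paper .

```python
import numpy as np
import cvxpy as cp

# toy stand-in for the matrices that would be assembled from the system data
# (M, N, E) and the sector/cost constants in the paper -- not the actual LMI
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)   # Lyapunov-type matrix variable

# generic Lyapunov-type LMI: P >= I and A^T P + P A <= -I; the objective here is
# simply the trace of P, whereas the paper minimizes the cost-bound expression
constraints = [P >> np.eye(n),
               A.T @ P + P @ A << -np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()

print("optimal objective:", prob.value)
print("P =", P.value)
```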
a future application of the robust performance analysis approach proposed in this paper would be to use it to develop a method for the design of coherent quantum feedback controllers for quantum systems to achieve a certain closed loop performance bound in terms of a non - quadratic cost functional .in such a coherent quantum feedback control scheme both the plant and controller are quantum systems ; e.g. , see . this would be useful in the generation of non - classical quantum states which are needed in areas of quantum computing and quantum information ; e.g. , see .the parameters will be considered to define an uncertain nonlinear quantum system . here, is the scattering matrix , which is chosen as the identity matrix , l is the coupling operator vector and is the system hamiltonian operator . is assumed to be of the form m \left[\begin{array}{c}a \\ a^\#\end{array}\right]+f(z , z^*).\ ] ] here , is an -dimensional vector of annihilation operators on the underlying hilbert space and is the corresponding vector of creation operators .also , is a hermitian matrix of the form \ ] ] and , . in the case of vectors of operators, the notation refers to the transpose of the vector of adjoint operators and in the case of matrices , this notation refers to the complex conjugate transpose of a matrix . in the case of vectors of operators, the notation refers to the vector of adjoint operators and in the case of complex matrices , this notation refers to the complex conjugate matrix .also , the notation denotes the adjoint of an operator .the matrix is assumed to be known and defines the nominal quadratic part of the system hamiltonian .furthermore , we assume the uncertain non - quadratic part of the system hamiltonian is defined by a formal power series of the form which is assumed to converge in some suitable sense . here , , and is a known scalar operator defined by \left[\begin{array}{c}a \\ a^\#\end{array}\right ] = \tilde e \left[\begin{array}{c}a \\a^\#\end{array}\right];\end{aligned}\ ] ] i.e. 
, the vector is a known complex vector .the term is referred to as the perturbation hamiltonian .it is assumed to be unknown but is contained within a known set which will be defined below .we assume the coupling operator vector is known and is of the form \left[\begin{array}{c}a \\ a^\#\end{array}\right].\ ] ] here , , are known matrices .also , we write & = & n \left[\begin{array}{c}a \\ a^\#\end{array}\right ] \\ & = & \left[\begin{array}{cc}n_{1 } & n_{2}\\ n_{2}^\ # & n_{1}^\#\end{array}\right ] \left[\begin{array}{c}a \\ a^\#\end{array}\right].\end{aligned}\ ] ] the annihilation and creation operators and are assumed to satisfy the canonical commutation relations : ,\left[\begin{array}{l } a\\a^\#\end{array}\right]^\dagger\right ] & \stackrel{\delta}{=}&\left[\begin{array}{l } a\\a^\#\end{array}\right ] \left[\begin{array}{l } a\\a^\#\end{array}\right]^\dagger \nonumber \\ & & - \left(\left[\begin{array}{l } a\\a^\#\end{array}\right]^\ # \left[\begin{array}{l } a\\a^\#\end{array}\right]^t\right)^t\nonumber \\ & = & j\end{aligned}\ ] ] where ] , and the quantity is defined as [ t1 ] consider an uncertain open nonlinear quantum system defined by and a non - quadratic cost function such that is of the form ( [ h1 ] ) , is of the form ( [ l ] ) and .also , assume that defined in ( [ w ] ) is such that ( [ wbound ] ) is satisfied .furthermore , assume that there exists a constant such that the lmi ( [ lmi ] ) has a solution .then the cost satisfies the bound : \right ) + \zeta + \sqrt{\delta_3}|\mu|\ ] ] where and in order to prove this theorem , we require the following lemmas . [ l0 ] consider an open quantum system defined by and suppose there exists a non - negative self - adjoint operator on the underlying hilbert space such that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l + w(z , z^ * ) \leq \lambda\ ] ] where and are real numbers .then for any system state , we have we will consider quadratic `` lyapunov '' operators of the form \left[\begin{array}{c}a \\ a^\#\end{array}\right]\ ] ] where is a positive - definite hermitian matrix of the form .\ ] ] hence , we consider a set of non - negative self - adjoint operators defined as [ l1 ] given any , then \right ] = \left[z^*,[z^*,v]\right]^ * = \mu\ ] ] where the constant is defined as in ( [ mu0 ] ) . 
[ l2 ] given any , then & = & [ v , z ] w_{1}^ * -w_{1}[z^*,v]\nonumber \\ & & + \frac{1}{2}\mu w_{2}^*-\frac{1}{2}w_{2}\mu^*\end{aligned}\ ] ] where and the constant is defined as in ( [ mu0 ] ) .[ l3 ] given and defined as in ( [ l ] ) , then m \left[\begin{array}{c}a \\ a^\#\end{array}\right ] ] = } \nonumber \\ & & \left[\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]p \left[\begin{array}{c}a \\ a^\#\end{array}\right],\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]\right ] \nonumber \\ & = & \left[\begin{array}{c}a \\ a^\#\end{array}\right]^\dagger \left [ pjm - mjp \right ] \left[\begin{array}{c}a \\ a^\#\end{array}\right].\end{aligned}\ ] ] also , +\frac{1}{2}[l^\dagger , v]l = } \nonumber \\ & = & \tr\left(pjn^\dagger\left[\begin{array}{cc}i & 0 \\ 0 & 0 \end{array}\right]nj\right ) \nonumber \\ & & -\frac{1}{2}\left[\begin{array}{c}a \\n jp+pjn^\dagger j n\right ) \left[\begin{array}{c}a \\ a^\#\end{array}\right].\end{aligned}\ ] ] furthermore , ,\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]p \left[\begin{array}{c}a \\ a^\#\end{array}\right]\right ] = 2jp\left[\begin{array}{c}a \\ a^\#\end{array}\right].\ ] ] _ proof of theorem [ t1 ] . _it follows from ( [ z ] ) that we can write \left[\begin{array}{c}a \\ a^\#\end{array}\right]\nonumber \\ & = & \tilde e^\ # \sigma \left[\begin{array}{c}a \\ a^\#\end{array}\right].\end{aligned}\ ] ] also , it follows from lemma [ l3 ] that = 2 \tilde e^\ # \sigma jp\left[\begin{array}{c}a \\ a^\#\end{array}\right].\ ] ] furthermore , = [ z^*,v]^* ] since is self - adjoint .therefore , for - \frac{1}{\tau_1 } \imath w_{1}\right ) \left(\tau_1[v , z]- \frac{1}{\tau_1}\imath w_{1}\right)^*\nonumber \\ & = & \tau_1 ^ 2[v , z][z^*,v]+\imath[v , z]w_{1}^*\nonumber \\ & & -\imath w_{1}[z^*,v]+ \frac{1}{\tau_1 ^ 2}w_{1 } w_{1}^*\end{aligned}\ ] ] and hence {1}^*+\imath w_{1}[z^*,v]}\nonumber \\ &\leq & \tau_1 ^ 2[v , z][z^*,v]+\frac{1}{\tau_1 ^ 2}w_{1 } w_{1}^*.\ ] ] also , for and hence also , it follows from ( [ sector4b ] ) that if we let , it follows from ( [ ineq3b ] ) and ( [ sector2b ] ) that furthermore , it follows from ( [ sector4a ] ) and ( [ sector4c ] ) that and combining these equations with ( [ wbound ] ) , it follows that substituting ( [ ineq3a ] ) , ( [ ineq3c ] ) , and ( [ sector2a ] ) into ( [ ineq1a ] ) , it follows that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+w(z , z^*)}\nonumber \\ & \leq & -\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & & + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l\nonumber \\ & & + \tau_1 ^ 2[v , z][z^*,v ] \nonumber \\ & & + w(z , z^*)+\frac{1}{\tau_1 ^ 2}w_{1 } w_{1}^ * + \sqrt{\delta_3}|\mu|.\end{aligned}\ ] ] hence , if , it follows from ( [ sector2d ] ) that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+w(z , z^ * ) \nonumber \\ & \phantom{-}\leq -\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & \phantom{-= } + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+ \tau_1 ^ 2[v , z][z^*,v ] \nonumber \\ & \phantom{-=}+\left(\frac{1}{\gamma_1 ^ 2 } + \left(\frac{1}{\tau_1 ^ 2}-1\right)\right ) z z^ * \nonumber \\ & \phantom{-=}+ \delta_1 + \left(\frac{1}{\tau_1 ^ 2}-1\right)\delta_2+\sqrt{\delta_3}|\mu|. 
\end{aligned}\ ] ] similarly , if , it follows from ( [ sector2d ] ) that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+w(z , z^ * ) \nonumber \\ & \phantom{-}\leq -\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & \phantom{-= } + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+ \tau_1 ^ 2[v , z][z^*,v ] \nonumber \\ & \phantom{-=}+\left(\frac{1}{\tau_1 ^ 2\gamma_1 ^ 2 } + \frac{1}{\gamma_0 ^ 2}\left(1-\frac{1}{\tau_1 ^ 2}\right)\right ) z z^ * \nonumber \\ & \phantom{-=}+ \frac{1}{\tau_1 ^ 2}\delta_1 + \left(1-\frac{1}{\tau_1 ^ 2}\right)\delta_0+\sqrt{\delta_3}|\mu| .\end{aligned}\ ] ] hence , + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+w(z , z^ * ) \nonumber \\ & \phantom{-}\leq -\imath[v,\frac{1}{2}\left[\begin{array}{cc}a^\dagger & a^t\end{array}\right]m \left[\begin{array}{c}a \\ a^\#\end{array}\right]]\nonumber \\ & \phantom{-= } + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l+ \tau_1 ^ 2[v , z][z^*,v ] \nonumber \\ & \phantom{-=}+\kappa z z^ * \nonumber \\ & \phantom{-=}+\zeta+\sqrt{\delta_3}|\mu|\end{aligned}\ ] ] where is defined in ( [ kappa ] ) and is defined in ( [ zeta ] ) .then it follows from ( [ dissip1a ] ) that + \frac{1}{2}l^\dagger[v , l]+\frac{1}{2}[l^\dagger , v]l + w(z , z^*)\nonumber \\ & \phantom{- } \leq \tilde \lambda + \zeta + \sqrt{\delta_3}|\mu|.\end{aligned}\ ] ] from this , it follows from lemma [ l0 ] with that the bound ( [ cbound ] ) is satisfied . note that the problem of minimizing the bound on the right hand side of ( [ cbound ] ) subject to the constraint ( [ lmi ] ) can be converted into a standard lmi optimization problem which can be solved using standard lmi software ; e.g. , see .to illustrate the main result of this paper , we consider an illustrative example consisting of a josephson junction in an electromagnetic resonant cavity .this system was considered in the paper using a model derived from a model presented in .the system is illustrated in figure [ f1 ] . in the paper , a model for this system of the form considered in section [ sec : systems ] is derived and we consider the same model but with simplified parameter values for the purposes of this illustration .that is , we consider a hamiltonian of the form ( [ h1 ] ) where \ ] ] and where .hence , .\ ] ] also , we consider a coupling operator vector of the form ( [ l ] ) .\ ] ] in addition , we consider a non - quadratic cost function of the form ( [ w ] ) where hence , we can set and in ( [ wbound ] ) .a plot of the function versus for a real scalar is shown in figure [ f1a ] .furthermore , we calculate from this it follows that and hence , ( [ sector4a ] ) is satisfied with and .also , and hence , ( [ sector4c ] ) is satisfied with and .moreover , and hence ( [ sector4b ] ) is satisfied with .we now apply theorem [ t1 ] to find a bound on the cost ( [ w ] ) .this is achieved by solving the corresponding lmi optimization problem . in this casea solution to the lmi problem is found with \ ] ] and .this leads to a cost bound ( [ cbound ] ) of .m. yanagisawa and h. kimura , `` transfer function approach to quantum control - part i : dynamics of quantum feedback systems , '' _ ieee transactions on automatic control _ ,48 , no . 12 , pp . 21072120 , 2003 .a. j. shaiju and i. r. petersen , `` a frequency domain condition for the physical realizability of linear quantum systems , '' _ ieee transactions on automatic control _ , vol .57 , no . 8 , pp . 
2033 2044 , 2012 .m. r. james , i. r. petersen , and v. ugrinovskii , `` a popov stability condition for uncertain linear quantum systems , '' in _ proceedings of the 2013 american control conference _ , washington , dc , june 2013 . , `` robust stability of quantum systems with nonlinear dynamic uncertainties , '' in _ proceedings of the 52nd ieee conference on decision and control _ , florence , italy , december 2013 , to appear , accepted 19 july 2013 .
|
this paper presents a robust performance analysis result for a class of uncertain quantum systems containing sector bounded nonlinearities arising from perturbations to the system hamiltonian . an lmi condition is given for calculating a guaranteed upper bound on a non - quadratic cost function . this result is illustrated with an example involving a josephson junction in an electromagnetic cavity .
|
to help students learn effectively instructors should become familiar with their students level of expertise at the beginning of a course .instruction should be designed to build on what students already know to ensure that they acquire the desired expertise as determined by the goals of a course. physics experts often take for granted that introductory students will be able to distill the underlying physics principles of a problem as readily as experts can . however , beginning physics students are usually much more sensitive to the context and surface features of a physics problem than experts .if an instructor teaches the principle of conservation of angular momentum with the example of a spinning skater and gives an examination problem requiring the use of the same principle in the context of a collapsing neutron star under its own gravitational force , students may wonder what this astrophysics problem involving a neutron star has to do with introductory mechanics . without appropriate guidance, the spinning skater problem may look nothing like a neutron star problem to a beginning student even though both problems can be solved using the same physics principle .the difference between what instructors and students `` see '' in the skater and neutron star problems is due to the fact that physics experts view physical situations at a much more abstract level than beginning students who often are sidetracked by context - dependent features. a crucial difference between the problem solving strategies used by experts in physics and beginning students lies in the interplay between how their knowledge is organized and how it is retrieved to solve problems . categorizing various problems based on similarity of their solutions can be a useful tool for teaching and learning. in a classic study by chi et al. a categorization task was used to assess introductory physics students level of expertise in physics .introductory physics students were asked to group mechanics problems into categories based on the similarity of their solutions .they were also asked to explain the reasons for their groupings . unlike experts who categorize them based on the physical principles involved to solve them , introductory students categorized problems involving inclined planes in one category and pulleys in a separate category. in chi et al.s out - of - classroom study, 24 problems from introductory mechanics were given to eight introductory physics student volunteers ( novices ) and eight physics graduate student volunteers ( experts). there were no differences in the number of categories produced ( approximately 8.5 categories by each group on average ) and four of the largest categories produced by each student from both groups captured the majority of the problems ( 80% for experts and74% for novices ) .immediately after the first categorization , each student was asked to re - categorize the same problems .the second categorization matched the first categorization very closely .it was concluded that both experts and novices were able to categorize problems into groups that were meaningful to them. further analysis of the data in ref. 
showed that experts and novices group their problems in different categories based on their knowledge associated with the categories .physics graduate students ( experts ) were able to distill physics principles applicable in a situation and categorize the problems based on those principles .in contrast , novices based their categorization on the problem s literal features .for example , 75% , 50% , 50% , and 38% of the novices had springs , inclined plane , kinetic energy , and pulleys as one of their categories , respectively ; 25% of the experts used springs as a category but inclined plane , kinetic energy , and pulleys , were not chosen as categories by any of the experts .a categorization task can also be used as a tool to help students learn effective problem - solving strategies and to organize their knowledge hierarchically , because such tasks can guide students to focus on the similarity of problems based on the underlying principles rather than on the specific contexts .for example , introductory physics students with different levels of expertise can be given categorization tasks in small groups , and students can be asked to categorize problems and discuss why different problems should be placed in the same group without asking them to solve the problems explicitly .then there can be a class discussion about why some categorizations are better than others , and students can be given a follow - up categorization task to ensure individual accountability .one advantage of such an activity is that it focuses on conceptual analysis and planning stages of problem solving and discourages the plug and chug approach . without guidance ,students often implement a problem solution without thinking whether a particular principle is applicable. in this paper we report about the results of our study on the nature and level of understanding of physics graduate students about the initial physics knowledge of introductory students .we asked graduate students at the end of a course for teaching assistants to categorize problems based on the similarity of their solutions , both from their own perspective and from the perspective of an introductory physics students .we compared their categorizations with those performed by physics professors and introductory physics students .one surprising finding is the resistance of graduate students to categorizing problems from a typical introductory physics student s perspective with the claim that such a task is useless " , impossible " , and has no bearing " on their teaching assistant ( ta ) duties . based on our finding, we suggest that inclusion of such tasks can improve the effectiveness of ta training courses and faculty development workshops and help tas and instructors focus on issues related to teaching and learning .we will discuss the process and outcome of the categorization of 25 introductory mechanics problems by 21 physics graduate students enrolled in a ta training course at the end of the course .graduate students first performed the categorizations from their own perspective and later from the perspective of a typical introductory student .the goals of the study were to investigate the following issues : * how do graduate students enrolled in a ta training course categorize introductory physics problems from their own perspective ? 
*how do graduate students categorize the same problems from the perspective of a typical introductory physics student ?do they have an understanding of the differences between their physics knowledge structure and those of the introductory physics students ? * how does the categorization by the graduate students from their own perspective compare with the categorization by introductory physics students and physics faculty from their own perspective ?* how do introductory physics students in an in - class study categorize the introductory mechanics problems after instruction compared to the eight introductory student volunteers studied in ref . ?does the ability to categorize introductory mechanics problems by introductory physics students depend strongly on the nature and context of the questions that are asked ?the issues involved in a detailed comparison with ref . will be discussed elsewhere .all those who performed the categorization were provided the following instructions given at the beginning of the questions: your task is to group the 25 problems below based upon similarity of solution into various groups on the sheet of paper provided .problems that you consider to be similar should be placed in the same group .you can create as many groups as you wish .the grouping of problems should not be in terms of easy problems " , medium difficulty problems " and difficult problems " but rather it should be based upon the features and characteristics of the problems that make them similar . a problem can be placed in more than one group created by you . please provide a brief explanation for why you placed a set of questions in a particular group .you need not solve any problems .+ ignore the retarding effects of friction and air resistance unless otherwise stated .+ the sheet on which individuals were asked to perform the categorization of problems had three columns .the first column asked them to use their own category name for each of their categories , the second column asked them for a description of the category that explains why those problems may be grouped together , and the third column asked them to list the problem numbers for the questions that should be placed in a category .apart from these directions , students were not given any other hints about the category names they should choose .we were unable to obtain the questions in chi et al.s study except for a few that have been published .we therefore chose our own mechanics questions on sub - topics similar to those chosen in ref . .the context of the 25 mechanics problems varied and the topics included one- and two - dimensional kinematics , dynamics , work - energy , and impulse - momentum. many questions related to work - energy and impulse - momentum concepts were adapted from an earlier study and many questions on kinematics and dynamics were chosen from other earlier studies because the development of these questions and their wording had gone through rigorous testing by students and faculty members . some questions could be solved using one physics principle for example , conservation of mechanical energy , newton s second law , conservation of momentum. the first two columns of table 1 show the question numbers and examples of primary categories in which each question can be placed ( based upon the physics principle used to solve each question ) .questions 4 , 5 , 8 , 24 and 25 are examples of problems that involve the use of two principles for different parts of the problem. 
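as a concrete illustration of such a two - principle problem , take question 4 quoted below : writing ( notation introduced here for illustration ) $h_1$ for the release height of sphere a , $v$ for its speed just before impact , $v'$ for the common speed just after the perfectly inelastic collision , and $h_2$ for the maximum height reached afterwards , energy conservation before the collision , momentum conservation at the collision , and energy conservation after the collision give
\[
v=\sqrt{2 g h_1}, \qquad m v = 2 m v' \;\Rightarrow\; v'=\tfrac{1}{2}\sqrt{2 g h_1}, \qquad h_2=\frac{(v')^2}{2g}=\frac{h_1}{4} .
\]
neither principle alone fixes the answer , which is why , as discussed below , placing such a question in only one of the two categories was not counted as a good categorization .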
questions 4 , 8 , and 24 below can be grouped together in one category because they require the use of conservation of mechanical energy and momentum : * \(4 ) two small spheres of putty , a and b , of equal mass , hang from the ceiling on massless strings of equal length .sphere a is raised to a height as shown below and released .it collides with sphere b ( which is initially at rest ) ; they stick and swing together to a maximum height .find the height in terms of .+ * \(8 ) your friend dan , who is in a ski resort , competes with his twin brother sam on who can glide higher with the snowboard .sam , whose mass is 60 kg , puts his 15 kg snowboard on a level section of the track , 5 meters from a slope ( inclined plane ) .then , sam takes a running start and jumps onto the stationary snowboard .sam and the snowboard glide together till they come to rest at a height of 1.8 m above the starting level .what is the minimum speed at which dan should run to glide higher than his brother to win the competition ?dan has the same weight as sam and his snowboard weighs the same as sam s snowboard . *\(24 ) you are standing at the top of an incline with your skateboard .after you skate down the incline , you decide to abort " , kicking the skateboard out in front of you such that you remain stationary afterwards .how fast is the skateboard travelling with respect to the ground after you have kicked it ?assume that your mass is 60 kg , the mass of the skateboard is 10 kg , and the height of the incline is 10 cm .questions 5 and 25 below can be grouped together because they can be solved using conservation of mechanical energy and newton s second law : * \(5 ) a family decides to create a tire swing in their backyard for their son ryan .they tie a nylon rope to a branch that is located 16 m above the earth , and adjust it so that the tire swings 1 meter above the ground . to make the ride more exciting , they construct a launch point that is 13 m above the ground , so that they do nt have to push ryan all the time .you are their neighbor , and you are concerned that the ride might not be safe , so you calculate the maximum tension in the rope to see if it will hold .calculate the maximum tension in the rope , assuming that ryan ( mass 30 kg ) starts from rest from his launch pad .is it greater than the maximum rated value of 2500 n ? *\(25 ) a friend told a girl that he had heard that if you sit on a scale while riding a roller coaster , the dial on the scale changes all the time .the girl decides to check the story and takes a bathroom scale to the amusement park .there she receives an illustration ( see below ) , depicting the riding track of a roller coaster car along with information on the track ( the illustration scale is not accurate ) .the operator of the ride informs her that the rail track is smooth , the mass of the car is 120 kg , and that the car sets in motion from a rest position at the height of 15 m. he adds that point b is at 5 m height and that close to point b the track is part of a circle with a radius of 30 m. before leaving the house , the girl stepped on the scale which indicated 55 kg ( the scale is designed to be used on earth and displays the mass of the object placed on it ) . in the rollercoaster car the girl sits on the scale . 
according to your calculation , what will the scale show at point b ?although we had an idea about which categories created by individuals should be considered good or poor , we validated our assumptions with other experts .we randomly selected the categorizations performed by twenty introductory physics students and gave it to three physics faculty who had taught introductory physics recently and asked them to decide whether each of the categories created by individual students should be considered good , moderate , or poor .we asked them to mark each row which had a category name created by a student and a description of why it was the appropriate category for the questions that were placed in that category .if a faculty member rated a category created by an introductory student as good , we asked that he / she cross out the questions that did not belong to that category .the agreement between the ratings of different faculty members was better than 95% .we used their ratings as a guide to rate the categories created by everybody as good , moderate , or poor .a category was considered `` good '' only if it was based on the underlying physics principles .we typically rated both conservation of energy or conservation of mechanical energy as good categories .kinetic energy as a category name was considered a moderate category if students did not explain that the questions placed in that category can be solved using mechanical energy conservation or the work energy theorem .we rated a category such as energy as good if students explained the rationale for placing a problem in that category . if a secondary category such as friction or tension was the only category in which a problem was placed and the description of the category did not explain the primary physics principles involved , it was considered a moderate category .table 1 shows examples of the primary and secondary categories and one commonly occurring poor / moderate category for each question given in the categorization task .more than one principle or concept may be useful for solving a problem .the instruction for the categorizations told students that they could place a problem in more than one category . because a given problem can be solved using more than one approach , categorizations based on different methods of solution that are appropriate was considered good ( see table 1 ) .for some questions , conservation of mechanical energy may be more efficient , but the questions can also be solved using one- or two - dimensional kinematics for constant acceleration . in this paper, we will only discuss categories that were rated good .if a graph shows that 60% of the questions were placed in a good category by a particular group ( introductory students , graduate students , or faculty ) , it means that the other 40% of the questions were placed in moderate or poor categories . for questions that required the use of two major principles , those who categorized them in good categories either made a category which included both principles such as the conservation of mechanical energy and the conservation of momentum or placed such questions in two categories created by them one corresponding to the conservation of mechanical energy and the other corresponding to the conservation of momentum . if such questions were placed only in one of the two categories , it was not considered a good categorization .a histogram of the percentage of questions placed in good categories ( not moderate or poor ) is given in fig . 
1 .this figure compares the average performance of 21 graduate students at the end of a ta training course when they were asked to categorize questions from their own perspective with 7 physics faculty and 180 introductory students who were given the same task .although this categorization by the graduate students is not on par with the categorization by physics faculty , the graduate students displayed a higher level of expertise in introductory mechanics than the introductory students and were more likely to group the questions based on physical principles .we note that in ref .the experts were graduate students and not physics professors. physics professors ( and sometimes graduate students ) pointed out multiple methods for solving a problem and specified multiple categories for a particular problem more often than the introductory students .introductory students mostly placed one question in only one category .professors ( and sometimes graduate students ) created secondary categories in which they placed a problem that were more like the introductory students primary categories .for example , in the questions involving tension in a rope or frictional force , many faculty and some graduate students created these secondary categories called tension or friction , but also placed those questions in a primary category , based on a fundamental principle of physics .introductory physics students were much more likely to place questions in inappropriate categories than the faculty or graduate students , for example , placing a problem that was based on the impulse - momentum theorem or conservation of momentum in the conservation of energy category . for questions involving two major physics principles , for example ,question 4 related to the ballistic pendulum , most faculty and some graduate students categorized it in both the conservation of mechanical energy and conservation of momentum categories in contrast to the introductory students who either categorized it as an energy problem or as a momentum problem .the fact that introductory students only focused on one of the principles involved to solve question 4 is consistent with an earlier study in which students either noted that this problem can be solved using conservation of mechanical energy or conservation of momentum but not both. many of the categories generated by the three groups were the same , but there was a major difference in the fraction of questions that were placed in good categories by each group .what introductory students chose as their primary categories were often secondary categories created by the faculty .rarely were there secondary categories made by the faculty , for example , apparent weight , that were not created by students .there were some categories such as ramps , and pulleys , that were made by introductory physics students but not by physics faculty or graduate students .the percentage of introductory students who selected ramps , pulleys or even springs as categories ( based mainly upon the surface features of the problem rather than based upon the physics principle required to solve the problem ) is significantly less ( less than 15% for each of these categories ) than in the study of ref . 
.this difference could be due to the fact that ours was an in - class study with a large number of students and the categorization task was given a few weeks after instruction in all relevant concepts .in contrast , in ref .there were only eight student volunteers and they might not have taken introductory mechanics recently .another reason for the difference could be due to the difference in questions that were given to students in the two studies . in our studyintroductory students sometimes categorized questions 3 , 6 , 8 , 12 , 15 , 17 , 18 , 22 , 24 , and 25 as ramp problems , questions 6 and 21 as spring problems ( question 21 was categorized as a spring problem by introductory students who associated the bouncing of the rubber ball with a spring - like behavior ) and question 17 as a pulley problem .the lower number of introductory students making spring or pulley as a category in our study could be due to the fact that there are fewer questions than in ref .that involve springs and pulleys . however , ramp was a much less popular category for introductory students in our study than in ref . in which 50% of the students created this category and placed at least one problem in that category ( although may questions can potentially be categorized as ramp problems even in our study ) .some introductory physics students created the categories speed and kinetic energy if the question asked them explicitly to calculate those physical quantities .the explanations provided by the students as to why a particular category name , for example , speed , is most suitable for a particular problem were not adequate ; they wrote that they created this category because the question asked for the speed .graduate students were less likely than introductory students to create such categories and were more likely to classify questions based on physical principles , for example , conservation of mechanical energy ( or conservation of energy which was taken to be a good category with proper explanation ) or kinematics in one dimension .even if a problem did not explicitly ask for the `` work done '' by a force on an object , faculty and graduate students were more likely to create and place such questions which could be solved using work - energy theorem or conservation of mechanical energy in categories related to these principles .this task was much more challenging for the introductory physics students who had learned these concepts recently .for example , it was easy to place question 3 in a category related to work because the question asked students to find the work done on an object . placing question 7 in the work - energy category was more difficult because students were asked to find the speed . figures 25 show histograms for questions 15 , 21 , 23 , and 24 respectively of some common categories created by different percentages of introductory students , graduate students , and physics faculty .( the categorization by graduate students from a typical introductory physics student s point of view will be discussed in sec .v. ) as expected , physics faculty performed most expert - like categorization for each of the problem based upon the physics principles required to solve it followed by graduate students . for question 21( see figure 3 ) all faculty created an impulse category while only 5 out of 7 faculty also categorized it as a question related to momentum . for this question categorization by all facultywas considered good . 
on the other hand, some graduate students and introductory students placed this problem only in energy or force categories that were not considered good .after the graduate students had submitted their own categorizations , they were asked to categorize the same questions from the perspective of a typical introductory physics student .a majority of the graduate students had not only served as tas for recitations , grading , or laboratories , but had also worked during their office hours with students one - on - one and in the physics resource room at the university of pittsburgh .the goal of this task was to assess whether the graduate students were familiar with the level of expertise of the introductory students whom they were teaching and whether they realized that most introductory students do not necessarily see the same underlying principles in the questions that they do .the graduate students were told that they were not expected to remember how they used to think 45 years ago when they were introductory students .we wanted them to think about their experience as tas in introductory physics courses while grouping the questions from an introductory students perspective .they were also asked to specify whether they were recitation tas , graders , or laboratory tas that semester .the categorization of questions from the perspective of an introductory physics student met with widespread resistance .many graduate students noted that the task was useless or meaningless and had no relevance to their ta duties . although we did not tape record the discussion with the graduate students , we took notes immediately following the discussion .the graduate students often asserted that it is not their job to `` get into their students heads . ''other graduate students stated that the task was `` impossible '' and `` can not be accomplished . ''they often noted that they did not see the utility of understanding the perspective of the students .some graduate students explicitly noted that the task was `` silly '' because it required them to be able to read their students minds and had no bearing on their ta duties . 
not a single graduate student stated that they saw merit in the task orsaid anything in favor of why the task may be relevant for a ta training course .the discussions with graduate students also suggest that many of them believed that effective teaching merely involves knowing the content well and delivering it lucidly .many of them had never thought about the importance of knowing what their students think for teaching to be effective .it is surprising that most graduate students enrolled in the ta training course were so reluctant or opposed to attempting the categorization task from a typical introductory student s perspective .this resistance is intriguing especially because the graduate students were given the task at the end of a ta training course and most of them were tas for introductory physics all term .it is true that it is very difficult for the tas ( and instructors in general ) to imagine themselves as novices .however , it is possible for tas ( and instructors ) to familiarize themselves with students level of expertise by giving them pre - tests at the beginning of a course , listening to them carefully , and by reading literature about student difficulties , for example , as part of the ta training course .after 1520 minutes of discussion we made the task more concrete and told graduate students that they could consider categorizing from the perspective of a relative whom they knew well after he / she took only one introductory mechanics course if that was the only exposure to the material they had .we also told them that they had to make a good faith effort even if they felt the task was meaningless or impossible .figure 6 shows the histogram of how the graduate students categorized questions from their own perspective and from the perspective of a typical introductory student / relative who has taken only one physics course .figure 7 shows the histogram of how the graduate students categorized questions from the perspective of a typical introductory student / relative in comparison to the categorization by introductory students .figure 6 shows that the graduate students recognized that the introductory physics students do not understand physics as well as graduate students and hence they re - categorized the questions in worse categories when performing the categorization from the perspective of a typical introductory physics student ( also see figs .however , if we look at questions placed in each category , for example , conservation of momentum , there are sometimes significant differences between the categorization by graduate students from an introductory students perspective and by introductory students from their own perspective .this implies that while graduate students may have realized that a typical introductory student / relative who has taken only one physics course may not perform as well as a physics graduate student on the categorization task , overall they were not able to anticipate the frequency with which introductory students categorized each problem in the common less - expert - like categories .the reluctance of tas to re - categorize the questions from introductory students perspective raises the question of what should the graduate students learn in a ta training class . 
in a typical ta training class, a significant amount of time is devoted to writing clearly on the blackboard , speaking clearly and looking into students eyes , and grading students work fairly .there is a lack of discussion about the fact that teaching requires not only knowing the content but understanding how students think and implementing strategies that are commensurate with students prior knowledge and expertise .after the graduate students had completed both sets of categorization tasks , we discussed the pedagogical aspects of perceiving and evaluating the difficulty of the questions from the introductory students perspective .we discussed that pedagogical content knowledge , which is critical for effective teaching , depends not only on the content knowledge of the instructor , but also on the knowledge of what the students are thinking .the discussions were useful and many students explicitly noted that they had not pondered why accounting for the level of expertise and thinking of their students was important for devising strategies to facilitate learning .some graduate students noted that they will listen to the introductory students and read their written responses more carefully in the future .one graduate student noted that after this discussion he felt that , similar to the difficulty of the introductory students in categorizing the introductory physics questions , he has difficulty in categorizing questions in the advanced courses he has been taking .he added that when he is assigned homework / exam questions , for example , in the graduate level electricity and magnetism course in which they were using the classic book by jackson , he often does not know how the questions relate to the material discussed in the class even when he carefully goes through his class notes .the student noted that if he goes to his graduate course instructor for hints , the instructor seems to have no difficulty making those connections to the homework .the spontaneity of the instructor s connection to the lecture material and the insights into those questions suggested to the student that the instructor can categorize those graduate - level questions and explain the method for solving them without much effort .this facility is due in part because the instructor has already worked out the questions and hence they have become an exercise .other graduate students agreed with his comments saying they too had similar experiences and found it difficult to figure out how the concepts learned in the graduate courses were applicable to homework problems assigned in the courses .these comments are consistent with the fact that a graduate student may be an expert in the introductory physics material related to electricity and magnetism but not necessarily an expert in the material at the jackson level course .such difficulty is not surprising considering that a handful of fundamental physics principles are applied in diverse contexts . 
solving questions with different contextsinvolves transferring relevant knowledge from the context in which it was learned to new contexts .the mathematical tools required to solve the questions in advanced problems may increase the mental load while solving questions and make it more difficult to discern the underlying physics principle involved .we found that graduate students perform better at categorizing introductory mechanics questions than introductory students but not as well as physics faculty .when asked to categorize questions from a typical introductory physics student s perspective , graduate students were very reluctant and many explicitly claimed that the task was useless .this study raises important issues regarding the content of ta training courses and faculty professional development workshops and the extent to which these courses should allocate time to help participants learn about pedagogical content knowledge in addition to the usual discussions of logistical issues related to teaching . asking the graduate students and faculty to categorize questions from the perspective of students may be one way to draw instructor s attention to these important issues in the ta training courses and faculty professional development workshops .we are grateful to jared brascher for his help in data analysis .we thank f. reif , r. p. devaty , p. koehler and j. levy for useful discussions .we thank all the students and faculty who performed the categorization task and an anonymous reviewer for very helpful comments and advice .we thank nsf for award due-0442087 .10 j. d. bransford and d. schwartz , `` rethinking transfer : a simple proposal with multiple implications , '' review res . educ . * 24 * , 61100 ( 1999 ). f. reif , `` millikan lecture 1994 : understanding and teaching important scientific thought processes , '' am .* 63 * , 17 ( 1995 ) .f. reif , `` scientific approaches to science education , '' phys .today * 39 ( 11 ) * , 4854 ( 1986 ) .d. maloney , `` research in problem solving : physics , '' in _ handbook of research on the teaching and learning of science _ , edited by d. gabel ( macmillan , new york , 1994 ) . c. singh , when physical intuition fails , " am .j. phys , * 70 * ( 11 ) , 11031109 ( 2002 ) . d. j. ozimek , p. v. engelhardt , a. g. bennett , and n. s. rebello , `` retention and transfer from trigonometry to physics , '' proceedings of the physics education research conference 2004 , edited by j. marx , p. heron and s. franklin , aip conf. proc . * 790 * 173176 ( 2005 ) .d. schwartz , d. sears , and j. chang , `` reconsidering prior knowledge , '' in _ thinking with data _ , edited by m. lovett and p. shah ( erbaum , mahwah , nj , 2007 ) .g. posner , k. strike , p. hewson , and w. gertzog , `` accommodation of a scientific conception : towards a theory of conceptual change , '' science educ .* 66 * ( 2 ) , 211227 ( 1982 ) .f. reif , `` teaching problem solving a scientific approach , '' phys . teach . *19 * , 310316 ( 1981 ) .j. h. larkin , and f. reif , `` understanding and teaching problem solving in physics , '' eur .* 1(2 ) * , 191203 ( 1979 ) .j. larkin , `` cognition of learning physics , '' am .* 49 * ( 6 ) , 534541 ( 1981 ) .j. larkin , j. mcdermott , d. simon , and h. simon , `` expert and novice performance in solving physics problems , '' science * 208 * , 13351362 ( 1980 ) .a. h. schoenfeld , _ mathematical problem solving _ ( academic press , ny , 1985 ) ; a. h. 
schoenfeld , `` learning to think mathematically : problem solving , metacognition , and sense - making in mathematics , '' in _ handbook for research on mathematics teaching and learning _ , edited by d. grouws ( mcmillan , ny , 1992 ) , chap .15 , pp . 334370; a. h. schoenfeld , `` teaching mathematical thinking and problem solving , '' in _ toward the thinking curriculum : current cognitive research _ , edited by l. b. resnick and b. l. klopfer ( ascd , washington , dc , 1989 ) , pp .83103 ; a. schoenfeld and d. j. herrmann , `` problem perception and knowledge structure in expert novice mathematical problem solvers , '' j. explearning , memory , and cognition * 8 * , 484494 ( 1982 ). m. t. h. chi , p. j. feltovich , and r. glaser , `` categorization and representation of physics knowledge by experts and novices , '' cog .sci . * 5 * , 121152 ( 1981 ) .r. dufresne , j. mestre , t. thaden - koch , w. gerace , and w. leonard , `` knowledge representation and coordination in the transfer process , '' in _ transfer of learning from a modern multidisciplinary perspective _ , edited by j. p. mestre ( information age publishing , greenwich , ct , 2005 ) , pp .155215 .p. t. hardiman , r. dufresne and j. p. mestre , `` the relation between problem categorization and problem solving among novices and experts , '' memory and cognition * 17 * , 627638 ( 1989 ) .the mechanics questions are available on epaps .see epaps document no . xxx . c. singh and d. rosengrant , `` multiple - choice test of energy and momentum concepts* 71 * ( 6 ) , 607617 ( 2003 ) . c. singh , `` interactive video tutorials for enhancing problem solving , reasoning , and meta - cognitive skills of introductory physics students , '' in _ proceedings of the phys .conference , madison , wi _ , edited by s. franklin , k. cummings and j. marx , aip conf .proc . * 720 * 177180 ( 2004 ) . c. singh , e. yerushalmi , and bat sheva eylon , `` physics learning in the context of scaffolded diagnostic tasks ( ii ) : the preliminary results , '' in _ proceedings of the phys .conference , greensboro , nc _ , edited by l. hsu , l. mccullough , and c. henderson , aip conf .proc . * 951 * , 3134 ( 2007 ) .j. d. jackson , _ classical electrodynanics _ , third edition , ( wiley , academic press , new york , 1998 ) ..[table1 ] examples of the primary and secondary categories and one commonly occurring poor / moderate category for each of the 25 questions [ cols="<,<,<,<",options="header " , ]
|
we describe how graduate students categorize introductory mechanics problems based on the similarity of their solutions . graduate students were asked at the end of a teaching assistant training class to categorize problems from their own perspective and from the perspective of typical introductory physics students whom they were teaching . we compare their categorizations with the categorizations by introductory physics students and physics faculty who categorized the same problems . the utility of categorization as a tool for teaching assistant training and faculty development workshops is discussed .
|
since the pioneering paper , the so - called kdv limit of atomic chains with nearest neighbor interactions often called fermi - pasta - ulam or fpu - type chains has attracted a lot of interest in both the physics and the mathematics community , see for a recent overview .the key observation is that in the limiting case of long - wave - length data with small amplitudes the dynamics of the nonlinear lattice system is governed by the korteweg - de vries ( kdv ) equation , which is a completely integrable pde and hence well understood . for rigorous results concerning initial value problems we refer to and to for similar result in chains with periodically varying masses . of particular interestare the existence of kdv - like solitary waves and their stability with respect to the fpu dynamics .both problems have been investigated by friesecke and pego in the seminal four - paper series , see also for simplifications in the stability proof and concerning the existence of periodic kdv - type waves .the more general cases of two or finitely many solitary waves have been studied in and , respectively . in this paperwe generalize the existence result from and prove that chains with interactions between further than nearest - neighbors also admit kdv - type solitary waves .the corresponding stability problem is beyond the scope of this paper and left for future research .we consider an infinite chain of identical particles which interact with up to neighbors on both sides .assuming unit mass , the equations of motion are therefore given by where denotes the position of particle at time .moreover , the potential describes the interactions between nearest - neighbors , between the next - to - nearest - neighbors , and so on .a traveling wave is an exact solution to which satisfies where the parameters and denote the prescribed background strain and background velocity , respectively .moreover , is an additional scaling parameter which will be identified below and becomes small in the kdv limit .a direct computation reveals that the wave speed as well as the rescaled wave profile must solve the rescaled traveling wave equation where the discrete differential operators are defined by note that does not appear in due to the galilean invariance of the problem and that the solution set is invariant under the addition of constants to .it is therefore natural to interpret as an equation for the rescaled velocity profile ; the corresponding distance or strain profile can then be computed by convoluting with the rescaled indicator function of an interval , see formula below . for andfixed there exist depending on the properties of many different types of traveling waves with periodic , homoclinic , heteroclinic , or even more complex shape of the profile , see for instance and references therein . in the limit , however , the most fundamental waves are periodic and solitary waves , for which is either periodic or decays to as . in this paperwe suppose this condition can always be ensured by elementary transformations and split off both the linear and the quadratic terms from the force functions .this reads or , equivalently , with . 
in order to keep the presentation as simple as possible, we restrict our considerations to solitary waves the case of periodic profiles can be studied along the same lines and rely on the following standing assumption .[ mainassumption ] for all , the coefficients and are positive .moreover , is continuously differentiable with and for some constants and all with .note that the usual requirements for are and but the case can be traced back to the case by a simple reflection argument with respect to the strain variable .below we discuss possible generalizations of assumption [ mainassumption ] including cases in which the coefficients come with different signs .the overall strategy for proving the existence of kdv - type solitary waves in the lattice system is similar to the approach in but many aspects are different due to the nonlocal coupling .in particular , we base our analysis on the velocity profile instead of the distance profile , deviate in the justification of the key asymptotic estimates , and solve the final nonlinear corrector problem by the banach fixed - point theorem .a more detailed comparison is given throughout the paper .as for the classical case , we prescribe a wave speed that is slightly larger than the sound speed and construct profile functions that satisfy and decay for .more precisely , we set i.e. , the small parameter quantifies the supersonicity of the wave .note that the subsonic case is also interesting but not related to solitary waves , see discussions at the end of [ sect : prelim ] and the end of [ sect : proof ] .the asymptotic analysis from [ sect : prelim ] reveals that the limiting problem as is the nonlinear ode where the positive constants and depend explicitly on the coefficient and , see formula below . this equation admits a homoclinic solution , which is unique up to shifts ( see [ sect : proof.1 ] ) and provides via a solitary wave to the kdv equation for we start with the ansatz and derive in [ sect : proof ] the fixed point equation }}\end{aligned}\ ] ] for the corrector , where the operator is introduced in .the definition of requires to invert a linear operator , which is defined in and admits a singular limit as .the linear leading order operator stems from the linearization of around the kdv wave and can be inverted on the space but not on due to the shift invariance of the problem .the first technical issue in our perturbative existence proof is to show that this invertibility property persists for small , see theorem [ lem : invertibilityofleps ] .the second one is to guarantee that is contractive on some ball in , see theorem [ thm : fixedpoints ] .our main findings are illustrated in figure [ fig0 ] and can be summarized as follows , see also corollary [ cor : summary ] . for any sufficiently small exists a unique even and nonnegative solution to the rescaled traveling wave equation with such that holds for some constant independent of , where is the unique even solution to . the asymptotic analysis presented below can for the price of more notational and technical effort be applied to a wider class of chains .specifically , we expect that the following generalizations are feasible : 1 . we can allow for provided that the coefficients , and decay sufficiently fast with respect to ( say , exponentially ) .2 . some of the coefficients and might even be negative . 
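The leading-order problem mentioned above admits a completely explicit solution, which the following sketch evaluates. We write the limit ODE in the normalised form w0'' = d1*w0 - d2*w0**2 (the paper expresses the two positive constants through the coefficients alpha_m, beta_m; here d1 = d2 = 1 are placeholders), so the unique even homoclinic orbit is the pulse w0(x) = (3*d1/(2*d2)) / cosh(sqrt(d1)*x/2)**2, which generates the KdV solitary wave. A finite-difference residual check confirms the formula.

```python
import numpy as np

# Homoclinic pulse of the normalised leading-order ODE w0'' = d1 w0 - d2 w0^2
# (d1 = d2 = 1 are placeholder constants) and a finite-difference check that
# it indeed solves the equation up to O(h^2).
d1, d2 = 1.0, 1.0
x = np.linspace(-20.0, 20.0, 4001)
h = x[1] - x[0]
w0 = (1.5 * d1 / d2) / np.cosh(0.5 * np.sqrt(d1) * x) ** 2

w0_xx = (w0[2:] - 2.0 * w0[1:-1] + w0[:-2]) / h ** 2
residual = w0_xx - d1 * w0[1:-1] + d2 * w0[1:-1] ** 2
print("max |ODE residual| =", np.abs(residual).max())   # O(h^2), roughly 1e-5 here
```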
in this case, however , one has to ensure that the contributions from the negative coefficients are compensated by those from the positive ones .a first natural condition is which ensures that uniform states are stable under small amplitude perturbations and that the sound speed from is positive .a further minimal requirement is because otherwise the leading order problem see and below degenerates and does not admit exponentially decaying homoclinic orbits .the non - quadratic contributions to the forces might be less regular in the sense of for some constants and exponents .the paper is organized as follows : in [ sect : prelim ] we introduce a family of convolution operators and reformulate as an eigenvalue problem for .afterwards we provide singular asymptotic expansions for a linear auxiliary operator , which is defined in and plays a prominent role in our method . [ sect : proof ] is devoted to the proof of the existence theorem .we first study the leading order problem in [ sect : proof.1 ] and show afterwards in [ sect : proof.2 ] that the linear operator is invertible . in [ sect : proof.3 ] we finally employ banach s contraction mapping principle to construct solutions to the nonlinear fixed problem and conclude with a brief outlook . a list of all important symbols is given in the appendix .in this section we reformulate the nonlinear advance - delay - differential equation as an integral equation and provide asymptotic estimates for the arising linear integral operators . for any , we define the operator by and regard as an equation for the rescaled velocity profile .notice that can be viewed as the convolution with the rescaled indicator function of the interval } ] we therefore get and follows immediately .the derivation of is similar . __ : now let be arbitrary .by parseval s theorem and employing that holds for some constant and all we find and this implies .the estimate can by proven analogously since we have for all ._ _ : let be arbitrary but fixed . since is self - adjoint , see lemma [ lem : propertiesoperatora ] , and in view of we readily demonstrate and this implies . on the other hand , the estimate ensures that . we therefore have and combining this with the weak convergence we arrive at since is a hilbert space .as already outlined above , we introduce for any given the operator which appears in if we collect all linear terms on the left hand side , insert the wave - speed scaling , and divide the equation by . we further define the operator which can thanks to lemma [ lem : limitoperatora ] be regarded as the formal limit of as . in fourier space , these operators correspond to the symbol functions which are illustrated in figure [ fig2 ] and satisfy for any fixed .this convergence , however , does not hold uniformly in since is a singular perturbation of . using the uniform positivity of these symbol functions ,we easily demonstrate the existence of the inverse operators where maps actually into the sobolev space and is hence smoothing because decays quadratically at infinity .the inverse of , however , is less regularizing because remains bounded as . in order to obtain asymptotic estimates for , we introduce the cut - off operator by defining its symbol function as follows one of our key technical results is the following characterization of , which reveals that admits an almost compact inverse . for , a similar but slightly strongerresult has been given in ( * ? ? ?* corollary 3.5 ) using a careful fourier - pole analysis of the involved integral operators . 
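The convolution operators introduced in this section can be checked numerically. Reading the operator as the normalised average over a centred interval of length eta (our reading of the "rescaled indicator function"), its Fourier symbol is sin(eta*k/2)/(eta*k/2); the sketch below applies that symbol to a Gaussian and compares the result with the exact local average written with error functions.

```python
import numpy as np
from scipy.special import erf

# Check that averaging over [x - eta/2, x + eta/2] acts in Fourier space as
# multiplication by sin(eta k/2)/(eta k/2).  The interval reading of the
# operator is our assumption; the Gaussian test function is arbitrary.
L_dom, n, eta = 40.0, 4096, 0.7
x = np.linspace(-L_dom / 2, L_dom / 2, n, endpoint=False)
u = np.exp(-x ** 2)

# exact average of exp(-x^2) over the interval
exact = np.sqrt(np.pi) / (2.0 * eta) * (erf(x + eta / 2) - erf(x - eta / 2))

# Fourier-side application of the symbol (np.sinc(z) = sin(pi z)/(pi z))
k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
symbol = np.sinc(eta * k / (2.0 * np.pi))
averaged = np.real(np.fft.ifft(symbol * np.fft.fft(u)))

print("max deviation:", np.abs(averaged - exact).max())   # rounding-level agreement
```

As eta tends to zero the symbol tends to 1 pointwise, which is the Fourier-space counterpart of the approximation statements quoted above.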
for , however , the symbol functions possess more poles in the complex plane and hence we argue differently .[ lem : inversofb ] for any , the operator respects the even - odd parity and is both self - adjoint and invertible on .moreover , there exists a constant such that holds for all and all . here, denotes the usual norm in . in view of , and lemma [ lem : propertiesoperatora ] , it remains to show . using the properties of the function ,see figure [ fig1 ] , we readily demonstrate consequently , we get for all , and hence for some positive constant . moreover , noting that and using parseval s theorem we estimate as well as so follows immediately . there exists another useful characterization of , which relies on the non - expansive estimate , see lemma [ lem : propertiesoperatora ] .[ lem : vonneumann ] we have where the series on the right hand converges for any . in the first stepwe regard all operators as defined on and taking values in .we also use the abbreviation and notice that and imply since the operator norm of computed with respect to the -norm satisfies the von neumann formula provides in the sense of an absolutely convergent series of -operators . in the second stepwe generalize this result using the estimates from lemma [ lem : propertiesoperatora ] . in particular , the right - hand side in is well - defined for any since lemma [ lem : propertiesoperatora ] ensures .[ cor : invarianceproperties ] the operator respects for both and the nonnegativity , the evenness , and the unimodality of functions . for ,all assertions follow from the representation formula in lemma [ lem : vonneumann ] and the corresponding properties of the operators , see lemma [ lem : propertiesoperatora ] .for we additionally employ the approximation results from lemma [ lem : limitoperatora ] as well as the estimates from lemma [ lem : inversofb ] .note that all results concerning are intimately related to the supersonicity condition . in a subsonicsetting , one can still establish partial inversion formulas but the analysis is completely different , cf . for an application in a different context .in view of the wave - speed scaling and the fixed point formulation , the rescaled traveling wave problem consists in finding solutions to the operator equation }}+{{\varepsilon}}^2 { \mathcal{p}}_{{\varepsilon}}{{\left[{w_{{\varepsilon}}}\right]}}\,,\end{aligned}\ ] ] where the linear operator has been introduced in .moreover , the nonlinear operators }}:= \sum_{m=1}^m { { \beta}}_mm^3 { \mathcal{a}}_{m{{\varepsilon}}}{{\left({{\mathcal{a}}_{m{{\varepsilon}}}w}\right)}}^2\,,\qquad { \mathcal{p}}_{{\varepsilon}}{{\left[{w}\right]}}:= \frac{1}{{{\varepsilon}}^6}\sum_{m=1}^m m { \mathcal{a}}_{m{{\varepsilon } } } \psi_m^\prime{{\left({m { { \varepsilon}}^2 { \mathcal{a}}_{m{{\varepsilon } } } w}\right)}}\end{aligned}\ ] ] encode the quadratic and cubic nonlinearities , respectively , and are scaled such that the respective formal -expansions involve nontrivial leading order terms . in particular , we have }}\quad\xrightarrow{\;\;{{\varepsilon}}\to0\;\;}\quad { \mathcal{q}}_0{{\left[{w}\right]}}:={{\left({\sum_{m=1}^m { { \beta}}_mm^3}\right ) } } w^2\,,\end{aligned}\ ] ] for any fixed , see .note also that always admits the trivial solution . in what followswe solve the leading order problem to obtain the kdv wave , transform via the ansatz into another fixed point equation , and employ the contraction mapping principle to prove the existence of a corrector for all sufficiently small . 
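The von Neumann series invoked in the lemma is the familiar statement that (I - T)^(-1) equals the absolutely convergent series of powers of T whenever the norm of T is strictly below one. The toy check below illustrates only this abstract principle on a random finite-dimensional contraction; it does not reproduce the concrete operators of the lemma.

```python
import numpy as np

# Toy illustration of the von Neumann series: for ||T|| < 1 one has
# (I - T)^{-1} = sum_n T^n.  A random 50x50 matrix rescaled to spectral norm
# 0.6 stands in for the actual operators of the paper.
rng = np.random.default_rng(0)
T = rng.standard_normal((50, 50))
T *= 0.6 / np.linalg.norm(T, 2)

direct = np.linalg.inv(np.eye(50) - T)
series = np.zeros_like(T)
power = np.eye(50)
for _ in range(120):                 # geometric truncation error ~ 0.6**120
    series += power
    power = power @ T
print(np.linalg.norm(direct - series, 2))   # dominated by rounding error
```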
in , the last step has been solved using a operator - valued variant of the implicit function theorem . passing formally to limit in , we obtain the leading order equation }}\,,\end{aligned}\ ] ] which is the ode with parameters in particular , the leading order problem is a planar hamiltonian ode with conserved quantity and admits precisely one homoclinic orbit as shown in figure [ fig3 ] .[ lem : leadingorder ] there exists a unique solution to , which is moreover smooth , pointwise positive , and exponentially decaying .moreover , the -kernel of the linear operator with is simple and spanned by the odd function .the existence and uniqueness of follow from standard ode arguments and the identity holds by construction .moreover , the simplicity of the -kernel of the differential operator can be proven by the following wronski - type argument : suppose for contradiction that are two linearly independent kernel functions of such that , where the ode combined with implies that and are continuous functions with and we conclude that as . on the other hand , we easily compute and obtain the desired contradiction .since is smooth , it satisfies up to small error terms .in particular , the corresponding linear and the quadratic terms almost cancel due to .[ lem.epsresidual ] there exists a constant such that }}-{\mathcal{b}}_{{\varepsilon}}w_0}{{{\varepsilon}}^2 } \,,\qquad s_{{\varepsilon } } : = { \mathcal{p}}_{{\varepsilon}}{{\left[{w_0}\right]}}\end{aligned}\ ] ] holds for all .we first notice that lemma [ lem : propertiesoperatora ] ensures and in view of assumption [ mainassumption ] we find thanks to the smoothness of , lemma [ lem : limitoperatora ] provides a constant such that holds for , and this implies and hence }}-{\mathcal{q}}_0{{\left[{w_0}\right]}}}\big\|}_2= { \left\|{\sum_{m=1}^m { { \beta}}_m m^3 { \mathcal{a}}_{m{{\varepsilon}}}{{\left({{\mathcal{a}}_{m{{\varepsilon}}}w_0}\right)}}^2-{{\left({\sum_{m=1}^{m}{{\beta}}_m m^3}\right ) } } w_0 ^ 2}\right\|}_2\leq c{{\varepsilon}}^2.\end{aligned}\ ] ] therefore , and since satisfies , we get where the second inequality stems from the definitions of and , see and .lemma [ lem : limitoperatora ] also yields and combining this with and the identity we arrive at the desired estimate for is now a direct consequence of .for completeness we mention that can be verified by direct calculations and that formulas for the spectrum of can , for instance , be found in ; see also ( * ? ? ?* lemma 4.2 ) . for any , we define the linear operator on by where is the unique even kdv wave provided by lemma [ lem : leadingorder ] .this operator appears naturally in the linearization around as } } = - { { \varepsilon}}^2 r_{{\varepsilon}}+ { { \varepsilon}}^2 { \mathcal{l}}_{{\varepsilon}}v - { { \varepsilon}}^4 { \mathcal{q}}_{{\varepsilon}}{{\left({v}\right)}}\end{aligned}\ ] ] holds due to the linearity of and the quadraticity of .[ lem : propertiesofl ] for any , the operator is self - adjoint in and respects the even - odd parity .moreover , we have for any . 
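The kernel statement in the lemma on the leading-order problem can be checked numerically. With the same normalised form w0'' = d1*w0 - d2*w0**2 and the same placeholder constants as in the earlier sketch, differentiating the ODE shows that the odd function w0' lies in the kernel of the linearisation L0 v = -v'' + d1*v - 2*d2*w0*v; a finite-difference evaluation confirms this.

```python
import numpy as np

# Verify that w0' lies in the kernel of L0 v = -v'' + d1 v - 2 d2 w0 v
# (placeholder constants d1 = d2 = 1, as in the earlier sketch).
d1, d2 = 1.0, 1.0
x = np.linspace(-25.0, 25.0, 5001)
h = x[1] - x[0]
A, B = 1.5 * d1 / d2, 0.5 * np.sqrt(d1)
w0 = A / np.cosh(B * x) ** 2
w0p = -2.0 * A * B * np.tanh(B * x) / np.cosh(B * x) ** 2   # analytic derivative

v_xx = (w0p[2:] - 2.0 * w0p[1:-1] + w0p[:-2]) / h ** 2
L0_w0p = -v_xx + d1 * w0p[1:-1] - 2.0 * d2 * w0[1:-1] * w0p[1:-1]
print("||L0 w0'||_inf =", np.abs(L0_w0p).max())   # zero up to the O(h^2) grid error
```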
since is smooth and even , all assertions follow directly from the properties of and , see and lemma [ lem : limitoperatora ] .our perturbative approach requires to invert the operator on the space see the fixed point problem in theorem [ thm : fixedpoints ] below and in view of lemma [ lem : leadingorder ] one easily shows that has this properties .the singularly perturbed case , however , is more involved and addressed in the following theorem , which is actually the key asymptotic result in our approach .notice that the analogue for is not stated explicitly in although it could be derived from the asymptotic estimates therein .[ lem : invertibilityofleps ] there exists such that for any the operator is continuously invertible on .more precisely , there exists a constant which depends on but not on such that holds for all and any . :our strategy is to show the existence of a constant such that holds for all and all sufficiently small , because this implies the desired result .in fact , ensures that the operator has both trivial kernel and closed image .the symmetry of gives and due to the closed image we conclude that is not only injective but also surjective .moreover , the -uniform continuity of the inverse is a further consequence of .now suppose for contradiction that such a constant does not exist .then we can choose a sequence } ] . in what follows we write with and observe that these definitions imply we also set and combine lemma [ lem : propertiesoperatora ] with the smoothness of to obtain moreover , by construction we have so the estimate is provided by lemma [ lem : inversofb ] .: inserting , , and into gives and hence thanks to we also infer from the estimate where denotes the norm in .since is compactly embedded into , we conclude that the sequence is precompact in . on other hand , the weak convergence combined with and implies and in summary we find strongly in by standard arguments .this even implies as vanishes outside the interval . : since the functions are supported in , the functions are supported in .moreover , we have for any given . therefore , and using we estimate so lemma [ lem : propertiesoperatora ] gives and hence due to and . : combining with gives and passing to the limit we get thanks to , , , and .this , however , contradicts the normalization condition . in particular, we have shown the existence of a constant as in and the proof is complete . 
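The role of parity in the invertibility theorem can be made visible on a discretised model. The sketch below (again with the hypothetical constants d1 = d2 = 1) builds a finite-difference matrix for L0 = -d^2/dx^2 + d1 - 2*d2*w0(x) with Dirichlet ends and shows that the eigenvalue closest to zero belongs to an odd eigenvector, the discrete counterpart of w0', while no even eigenvalue comes near zero; this is the finite-dimensional picture behind inverting the operator on the even subspace.

```python
import numpy as np

# Discretised L0 with Dirichlet boundaries; the near-zero mode is odd.
d1, d2, half_width, n = 1.0, 1.0, 25.0, 1200
x = np.linspace(-half_width, half_width, n)
h = x[1] - x[0]
w0 = (1.5 * d1 / d2) / np.cosh(0.5 * np.sqrt(d1) * x) ** 2

diag = 2.0 / h ** 2 + d1 - 2.0 * d2 * w0
off = -np.ones(n - 1) / h ** 2
A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
vals, vecs = np.linalg.eigh(A)

i0 = np.argmin(np.abs(vals))
v0 = vecs[:, i0]
flip_defect = np.linalg.norm(v0 - v0[::-1])   # about 2 for an odd vector, 0 for an even one
print("eigenvalue closest to zero:", vals[i0])
print("parity indicator (about 2 means odd):", flip_defect)
```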
setting , the nonlocal traveling wave equation is equivalent to }}+{{\varepsilon}}^2\,{\mathcal{n}}_{{\varepsilon}}{{\left[{v_{{\varepsilon}}}\right]}}\end{aligned}\ ] ] with }}:= \frac { { \mathcal{p}}_{{\varepsilon}}{{\left[{w_0+{{\varepsilon}}^2 v}\right]}}-{\mathcal{p}}_{{\varepsilon}}{{\left[{w_0}\right]}}}{{{\varepsilon}}^2}\,,\end{aligned}\ ] ] where , and , have been introduced in and , respectively .since can be inverted for all sufficiently small , we finally arrive at the following result .[ existence and uniqueness of the corrector ] [ thm : fixedpoints ] there exist constants and such that the nonlinear operator with } } : = { \mathcal{l}}_{{\varepsilon}}^{-1}{{\big(r_{{\varepsilon}}+ s_{{\varepsilon}}+ { { \varepsilon}}^2\ , { \mathcal{q}}_{{\varepsilon}}{{\left[{v}\right]}}+{{\varepsilon}}^2\ , { \mathcal{n}}_{{\varepsilon}}{{\left[{v}\right]}}\big)}}\end{aligned}\ ] ] admits for any a unique fixed point in the set .our strategy is to demonstrate that the operator maps contractively into itself provided that is sufficiently large and sufficiently small ; the desired result is then a direct consequence of the banach fixed - point theorem . within this proofwe denote by any generic constant that is independent of and .we also observe that holds for any and , and recall that is provided by lemma [ lem.epsresidual ] ._ _ : for we find }}}\big|}\leq{{\varepsilon}}^2 \sum_{m=1}^m { { \beta}}_m m^3 { \left\|{{\mathcal{a}}_{m{{\varepsilon}}}v}\right\|}_\infty { \mathcal{a}}_{m{{\varepsilon}}}^2{\left|{v}\right|}\leq { { \varepsilon}}^{3/2}{{\left({\sum_{m=1}^m \beta_mm^{5/2 } d}\right)}}{\mathcal{a}}_{m{{\varepsilon}}}^2{\big|{v}\big|}\,,\end{aligned}\ ] ] where we used the estimate , and in view of we obtain }}}\big\|}_2 \leq { { \varepsilon}}^{3/2 } c d { \left\|{{\mathcal{a}}_{m{{\varepsilon}}}^2v}\right\|}_2 \leq { { \varepsilon}}^{3/2 } c d { \left\|{v}\right\|}_2\leq{{\varepsilon}}^{3/2 } c d^2.\end{aligned}\ ] ] in the same way we verify the estimate }}-{{\varepsilon}}^2 { \mathcal{q}}_{{\varepsilon}}{{\left[{v_2}\right]}}}\big\|}_2&\leq { \left\|{{{\varepsilon}}^2\sum_{m=1}^m { { \beta}}_m m^3 { { \big({\left\|{{\mathcal{a}}_{m{{\varepsilon}}}v_2}\right\|}_\infty+ { \left\|{{\mathcal{a}}_{m{{\varepsilon}}}v_1}\right\|}_\infty\big)}}{\mathcal{a}}_{m{{\varepsilon}}}^2{\big|{v_2-v_1}\big|}}\right\|}_2 \\&\leq { { \varepsilon}}^{3/2}cd{\left\|{{\mathcal{a}}_{m{{\varepsilon}}}^2{\left|{v_2-v_1}\right|}}\right\|}_2\leq { { \varepsilon}}^{3/2}cd{\left\|{v_2-v_1}\right\|}_2\end{aligned}\ ] ] for arbitrary . 
_ _ : for we set and employ to estimate due to the intermediate value theorem as well as the properties of we get }}-{{\varepsilon}}^2{\mathcal{n}}_{{\varepsilon}}{{\left[{v_1}\right]}}}\big|}&\leq \sum_{m=1}^m m{\left|{\frac{\psi^\prime_m{{\big(z_{m,{{\varepsilon}},2}\big)}}-\psi^\prime_m{{\big(z_{m,{{\varepsilon}},1}\big)}}}{{{\varepsilon}}^6}}\right| } \\ & \leq \sum_{m=1}^m \frac{m{{\gamma}}_m \zeta _ { m,{{\varepsilon}}}^2 { \left|{z_{m,{{\varepsilon}},2}-z_{m,{{\varepsilon}},1}}\right|}}{{{\varepsilon}}^6 } \\ &\leq \sum_{m=1}^m \frac{m^2{{\gamma}}_m \zeta _ { m,{{\varepsilon}}}^2 { \left|{{\mathcal{a}}_{m{{\varepsilon}}}v_2 - { \mathcal{a}}_{m{{\varepsilon}}}v_1}\right|}}{{{\varepsilon}}^2 } \\&\leq { { \varepsilon}}^2{{\left({c+{{\varepsilon}}^{3/2}d}\right)}}^2{{\left({\sum_{m=1}^m { { \gamma}}_m m^{4}}\right ) } } { \mathcal{a}}_{m{{\varepsilon}}}{\big|{v_2-v_1}\big|}\end{aligned}\ ] ] and hence }}-{{\varepsilon}}^2{\mathcal{n}}_{{\varepsilon}}{{\left[{v_1}\right]}}}\big\|}_2\leq { { \varepsilon}}^2c { { \left({c+{{\varepsilon}}^{3/2}d}\right)}}^2{\big\|{v_2-v_1}\big\|}_2\,\end{aligned}\ ] ] after integration .a particular consequence is the estimate }}}\big\|}_2\leq { { \varepsilon}}^2cd { { \left({c+{{\varepsilon}}^{3/2}d}\right)}}^2\end{aligned}\ ] ] for any , where we used that }}=0 ] can be continued for as long as the linearization of the traveling wave equation around provides an operator that can be inverted on the space . since the shift symmetry always implies that is an odd kernel function of , the unique continuation can hence only fail if the eigenvalue of the linearized traveling wave operator is not simple anymore .unfortunately , almost nothing is known about the spectral properties of the operator for moderate values .it remains a challenging task to close this gap , especially since any result in this direction should have implications concerning the orbital stability of . for has also been shown in ( * ? ? ? * propositions 5.5 and 7.1 ) that the distance profile is unimodal ( ` monotonic falloff ' ) and decays exponentially for .for , it should be possible to apply a similar analysis to the velocity profile but the technical details are much more involved .it remains open to identify alternative and more robust proof strategies .for instance , if one could show that the waves from corollary [ cor : summary ] can be constructed by some variant of the abstract iteration scheme }}+{{\varepsilon}}^2{\mathcal{p}}_{{\varepsilon}}{{\left[{w}\right]}}}\right)}}\,,\end{aligned}\ ] ] the unimodality of would be implied by the invariance properties of and , see lemma [ lem : propertiesoperatora ] and corollary [ cor : invarianceproperties ] .a similar argument could be used for the exponential decay because maps a function with decay rate to a function that decays with rate and since the von neumann formula from lemma [ lem : vonneumann ] provides corresponding expressions for ; see for a similar argument to identify the decay rates of front - like traveling waves . 
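A constructive iteration of the kind alluded to above can at least be demonstrated for the leading-order equation. The sketch below runs a Petviashvili-type spectral scheme for (-d^2/dx^2 + d1) w = d2 w^2; this is a standard stand-in, not the abstract scheme the authors have in mind, and d1 = d2 = 1 are again placeholder constants. It converges to the same sech^2 pulse as before.

```python
import numpy as np

# Petviashvili iteration for (-d^2/dx^2 + d1) w = d2 w^2 on a periodic grid.
L_dom, n, d1, d2 = 80.0, 2048, 1.0, 1.0
x = np.linspace(-L_dom / 2, L_dom / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
Lsym = k ** 2 + d1                       # Fourier symbol of -d^2/dx^2 + d1

w = np.exp(-x ** 2)                      # any positive even seed
for _ in range(60):
    Nw_hat = np.fft.fft(d2 * w ** 2)
    w_hat = np.fft.fft(w)
    M = np.sum(Lsym * np.abs(w_hat) ** 2) / np.sum(np.conj(w_hat) * Nw_hat).real
    w = np.real(np.fft.ifft(M ** 2 * Nw_hat / Lsym))   # stabilised fixed-point step

exact = 1.5 * d1 / d2 / np.cosh(0.5 * np.sqrt(d1) * x) ** 2
print("deviation from the sech^2 pulse:", np.abs(w - exact).max())
```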
in this contextwe further emphasize that only supersonic waves can be expected to decay exponentially .for subsonic waves with speed , the linearization of the traveling wave equation predicts tails oscillations and hence non - decaying waves , see for a similar analysis with non - convex interaction potentials .lcllllllll & , & linear and quadratic coefficients in force terms & & + & & bounds for higher order force terms & & + & & speed of the wave & & + + & & position profile & & + & & velocity profile & & + & & corrector to the velocity profile & & + & , & residual terms with respect to & & + + & , & convolution operator and its symbol function & & , + & , & auxiliary operator and its symbol function & & , + & , & cut - off in fourier space & & + & & quadratic terms in & & + & & cubic and higher order terms in & & + & & quadratic combination of and & & + & & linear terms in & & + & & remainder terms in & & + & & fixed point operator for & & + + & , & velocity profile and speed of the kdv wave&&lemma [ lem : leadingorder ] + & , & formal limits of , & & , + & , & formal limits of , & &the authors are grateful for the support by the deutsche forschungsgemeinschaft ( dfg individual grant he 6853/2 - 1 ) and the austrian science fund ( fwf grant j3143 ) .m. chirilus - bruckner , ch .chong , o. prill , and g. schneider .rigorous description of macroscopic wave packets in infinite periodic chains of coupled oscillators by modulation equations ., 5(5):879901 , 2012 . a. hoffman and c. e. wayne . a simple proof of the stability of solitary waves in the fermi - pasta - ulam model near the kdv limit . ininfinite dimensional dynamical systems _ , volume 64 of _ fields inst ._ , pages 185192 .springer , new york , 2013 .g. schneider and c. e. wayne .counter - propagating waves on fluid surfaces and the continuum limit of the fermi - pasta - ulam model . in _ international conference on differential equations , vol . 1 , 2 ( berlin , 1999 ) _ , pages 390404 . world sci .publ . , river edge , nj , 2000 .
|
We consider atomic chains with nonlocal particle interactions and prove the existence of near-sonic solitary waves. Both our result and the general proof strategy are reminiscent of the seminal paper by Friesecke and Pego on the KdV limit of chains with nearest neighbor interactions but differ in the following two aspects: first, we allow for a wider class of atomic systems and must hence replace the distance profile by the velocity profile. Second, in the asymptotic analysis we avoid a detailed Fourier-pole characterization of the nonlocal integral operators and employ the contraction mapping principle to solve the final fixed point problem. Keywords: asymptotic analysis, KdV limit of lattice waves, Hamiltonian lattices with nonlocal coupling. MSC (2010): 37K60, 37K40, 74H10.
|
the study of inhomogeneous cosmological models is a well motivated and justified endeavor ( see for reviews ) .these models provide more freedom in discussing very early or very late evolution of the irregularities in the universe .their study also complements perturbation approaches .it is worth mentioning that there are a few hundreds of inhomogeneous cosmological models that reproduce a metric of the friedmann - lemaitre - robertson - walker ( flrw ) class of solutions when their arbitrary constants or functions take certain limiting values .they become then , in that limit , compatible with the almost homogeneous and almost isotropic observed universe .this shows the richness of these studies .a difficulty that is encountered in this models is that the null geodesic equation is not integrable in general . in this paper , we explore the alternative of using null ( observational ) spherical coordinates in which the radial null geodesic equation of interest is solved by construction .however , when considering null coordinates and a given metric for the spacetime some subtleties arise regarding the frame of reference used . in order to explore this point we will use in this paper the approach described in ishak and lake where the velocity field is calculated from the metric and not put in by hand .conveniently , this approach allows one to explore non - comoving frames of reference , an important point for this paper . surprisingly , littlework has been done in non - comoving coordinates despite some interesting features particular to them .notably , there are models which are separable only in a non - comoving coordinate system .moreover , exact solutions to einstein s equations in a non - comoving frame usually have a rich kinematics with shear , acceleration and expansion .such solutions are relatively rare in the comoving frame , see also a recent discussion in . another point discussed in that comoving coordinates do not cover all the spacetime manifold for a specified energy momentum tensor .finally , it is worth mentioning that it is often difficult to do the mathematical transformation of a given solution from non - comoving coordinates to comoving ones , and even when the passage is made , there is no guarantee that the solution will continue to have a simple or explicit form . in the present paper ,we use the spherical null bondi metric to present models where a non - comoving frame is proven necessary .we explicitly demonstrate how a comoving frame leads to severe limitations .furthermore , we use the dust models in the non - comoving frame to outline a fitting procedure where observational data can be used to integrate explicitly for the metric functions .using observational coordinates is particularly useful when one wants to compare directly an inhomogeneous model to observational data .such an interesting program had been nicely developed in refs . 
where the authors used a general metric that can be written as a flrw metric plus exact perturbations .the spherically symmetric dust solutions were considered in ref .the authors also developed and used a fluid - ray tetrad formalism in order to derive a fitting procedure where observations can be used to solve the einstein s field equations .after some necessary revisions , this program has been relaunched recently .we consider here in our work the spherically symmetric case but using the bondi metric in a non - comoving frame .also , we do nt use the fluid - ray tetrad formalism but the inverse approach to einstein s equations developed in . in the following section, we set the notation and recall some useful results . in section iii , we discuss observational coordinates and explain the cosmological construction around our world - line .we also discuss the physical meaning of the functions that appear in the metric used here .we provide in section iv perfect fluid models in a non - comoving frame . in sectionv , we show how dust models are not possible in a comoving frame .we describe dust models in a non - comoving frame and outline a fitting procedure in section vi and summarize in section vii .we set here the notation and summarize results to be used in this paper . in ref . , warped product spacetimes of class were considered .these can be written in the form where , and .although very special , these spaces include many of interest , for example , _ all _ spherical , plane , and hyperbolic spacetimes . for , we write with and functions of only . consider a congruence of unit timelike vectors ( velocity field ) with an associated unit normal field ( in the tangent space of ) satisfying .it was shown in that is uniquely determined from the zero flux condition where is the einstein tensor of the spacetime .the explicit forms for and were written out for canonical representations of , including the null ( bondi ) type of coordinates that we use in the present paper . with , and ,it was shown in that the condition is a necessary condition for a perfect fluid source , and that in some cases , this condition is also sufficient .for example , in , equation ( [ giso ] ) was used to derive an algorithm which generates all regular static spherically symmetric perfect fluid solutions of einstein s equations . in this paper, we are interested in perfect fluid sources so it is important to recapitulate the following results from . consider a fluid with anisotropic pressure and shear viscosity but zero energy flux ( non - conducting ) .the energy - momentum reads where is the energy density and is the shear associated with ; is the phenomenological shear viscosity ; and are the pressures respectively parallel and perpendicular to .when and the shear term vanishes the fluid is called perfect .it was shown in that in the case where we have and is a freely specified function .the procedure to impose a perfect fluid source in this degenerate case is to impose the condition ( [ giso ] ) and also necessarily set . for other choices of ,the fluid is viscous .we consider in the present paper the null coordinate system .these are called observational ( or cosmological ) coordinates as we can construct them around our galaxy world line c as indicated in figure [ coordinates ] .the trajectory defined by , and constant is a radial null geodesic and each hyper - surface of constant is a past light cone of events on c. 
we choose and to represent the vertex `` here and now '' .the coordinate is then set by construction to be the area distance as explained further and is related to the luminosity distance by ( see e.g. ) .finally , and are the spherical coordinates on the celestial sphere .the geometry of the models is represented by the general spherical bondi metric in advanced coordinates : where , and .the radial ( and constant ) ingoing null geodesic equation =constant is solved by construction ( see appendix [ appnull ] ) .the components of the mixed einstein tensor for ( [ metric ] ) are given in appendix [ appeinstein ] and the structure of the weyl tensor is discussed in appendix [ appweyl ] .regularity of the metric and the weyl invariants requires that and are at ( e.g. see equation ( [ w ] ) ) .it follows that also , we can use the freedom in the null coordinate to normalize it by setting . as we will write further in this paper ( see equation ( [ normalization ] ) )this means that we require that measures the proper time along our galaxy world line c. whereas the meaning of the metric function is very well known , we are not aware of any previous literature where an interpretation for was given .the function represents the effective gravitational ( geometrical ) mass ( e.g. , , , and ) and is given by where is the mixed angular component of the riemann curvature tensor . for the physical meaning of the function , it turns out to be useful to study the kinematics of null rays .these usually include the optical shear , vorticity and rate of expansion , respectively defined by }k^{\alpha;\beta } , \\\theta_{optical } & \equiv & \frac{1}{2}k^{\alpha}_{;\alpha } \label{opticalexpansion}\end{aligned}\ ] ] where \label{nullvector}\ ] ] is the null 4-vector tangent to the congruence of null geodesics .the physical meaning of the optical scalars can be understood in the following way : if an opaque object is displaced an infinitesimal distance from a screen ( perpendicularly to the beam of light ) , it will cast on the screen a shadow that is expanded by , rotated by and sheared by . as expected from the spherical symmetry of the geometry ,the non - vanishing optical scalar for the null congruence is the optical rate of expansion , from which we find we identify from equation ( [ cvalue ] ) that is a measure of the reciprocal of the expansion of null rays .we consider an observer moving with a fluid for which the streamlines are given by the general radial timelike vector $ ] .we assume that such a velocity field exists for which the energy - momentum tensor takes the perfect fluid form where and are respectively the energy - density and isotropic pressure associated with . the velocity fieldis simply determined from the zero flux condition ( [ zeroflux ] ) and is given by {\frac{1}{\big{(}1-\frac{2m(r , v)}{r}\big{)}^2 + \frac{4m^{\bullet}(r , v)}{rc'(r , v ) } } } \label{u1b}\ ] ] {\big{(}1-\frac{2m(r , v)}{r}\big{)}^2+\frac{4m^{\bullet}(r , v)}{rc'(r , v ) } } } \label{u2b}\ ] ] where and .the associated unit normal vector field ( and ) is given by .\ ] ] interestingly , the velocity field has shear , acceleration and expansion rate scalar in general .it follows from the metric ( [ metric ] ) that , as defined in ( [ delta ] ) , is not zero and that for a perfect fluid source we must impose the condition ( [ giso ] ) and set . 
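Three statements made in this section can be verified with a few lines of symbolic algebra. The displayed line element is not reproduced in the text above, so the sketch below assumes the familiar advanced-coordinate form ds^2 = -c(r,v)^2 (1 - 2m(r,v)/r) dv^2 + 2 c(r,v) dv dr + r^2 dOmega^2, which is consistent with the factors (1 - 2m/r) and c appearing in the velocity-field expressions quoted above but remains our assumption; the affine normalisation k = (0, 1/c, 0, 0) of the ingoing null tangent is likewise our choice. Under these assumptions, sympy confirms that m is the effective (quasilocal) gravitational mass, that the curves of constant v, theta, phi are radial null geodesics, and that the optical expansion of the ingoing rays is 1/(c r), i.e. c measures the reciprocal of the null expansion.

```python
import sympy as sp

# Assumed advanced Bondi-type line element (our reconstruction, see lead-in).
v, r, th, ph = sp.symbols('v r theta phi')
c = sp.Function('c')(r, v)
m = sp.Function('m')(r, v)
g = sp.Matrix([[-c**2*(1 - 2*m/r), c, 0, 0],
               [c, 0, 0, 0],
               [0, 0, r**2, 0],
               [0, 0, 0, r**2*sp.sin(th)**2]])
ginv = g.inv()
coords = [v, r, th, ph]

# (i) effective gravitational mass: (r/2)(1 - g^{ab} r_{,a} r_{,b}) -> m(r, v)
print(sp.simplify(sp.Rational(1, 2)*r*(1 - ginv[1, 1])))

# (ii) g_rr = 0, so k^mu = (0, 1, 0, 0) is null; its acceleration Gamma^mu_rr
# is proportional to k^mu, hence the curve v, theta, phi = const is a null
# geodesic (affinely parametrised after rescaling k by 1/c).
Gamma_rr = [sp.simplify(sum(ginv[mu, s]*(2*sp.diff(g[s, 1], r) - sp.diff(g[1, 1], coords[s]))
                            for s in range(4))/2) for mu in range(4)]
print(Gamma_rr)                    # -> [0, Derivative(c(r, v), r)/c(r, v), 0, 0]

# (iii) optical expansion theta = (1/2) k^alpha_{;alpha} with k = (0, 1/c, 0, 0)
sqrt_minus_g = c*r**2*sp.sin(th)   # sqrt(-det g); det g = -c^2 r^4 sin^2(theta)
k = [0, 1/c, 0, 0]
theta_opt = sp.simplify(sum(sp.diff(sqrt_minus_g*k[a], coords[a]) for a in range(4))
                        / (2*sqrt_minus_g))
print(theta_opt)                   # -> 1/(r*c(r, v)), so c = 1/(r * expansion)
```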
with and , condition ( [ giso ] ) reads where and the metric ( [ metric ] ) along with the metric constraint ( [ bondipfcondition ] ) represent a perfect fluid model with and this section , we specialize to models in a comoving frame of reference .we show how this frame fails in the realization of the cosmological construction proposed .with the metric function the requirement of comoving coordinates reads hence , the necessary and sufficient condition for a comoving frame is it follows from ( [ u1b ] ) and ( [ comovdustconst1 ] ) that and . with this velocity fieldthe shear tensor vanishes , therefore , the necessary and sufficient condition for a perfect fluid model is equation ( [ giso ] ) , which can be written as for a perfect fluid source in this frame the pressure is a function of both and while the energy density is only a function of , and the 4-acceleration has the non - vanishing components and a caveat in this frame ( comoving ) is that the expansion scalar vanishes and the model is not suitable for describing an expanding universe . the present matter dominated ( as opposed to radiation dominated ) universe is well approximated by a zero - pressure model , commonly referred to as `` dust '' . in this case ( comoving ) and the zero pressure conditions follow from equations ( [ p1s ] ) and ( [ p2s ] ) as and with ( [ comovdustconst1 ] ) , equations ( [ g2bondicomovdust ] ) and ( [ gg1bondicomovdust ] ) read and integrating ( [ g2bondicomovdust2 ] ) gives which when put into ( [ gg1bondicomovdust2 ] ) gives with ( from ) the zero pressure model reduces to the following two cases : + i ) and the spacetime reduces to the minkowski flat spacetime ( ) , or , + ii) ( is constant ) and the spacetime reduces to schwarzschild vacuum in eddington - finkelstein coordinates ( , . ) + therefore , a dust model is not possible in a comoving frame using the observational coordinates and the bondi metric ( [ metric ] ) .we turn in the following section to a non - comoving frame for dust models .we are interested in building dust models using spherical observational coordinates and a non - comoving frame . in a 1 + 3 decomposition of the spacetime, these models are given by the general lematre - tolman - bondi solution .in this non - comoving case in general , so we must set and impose the zero pressure conditions ( [ g2bondicomovdust ] ) and ( [ gg1bondicomovdust ] ) which can be written as and ( [ gg1bondicomovdust ] ) can be written as the metric ( [ metric ] ) with constraints ( [ dustconst1 ] ) and ( [ dustconst2 ] ) represents a class of inhomogeneous dust models in spherical observational coordinates . using equation ( [ energydensity ] ) ,the energy density is given by this result can also be obtained from the effective gravitational mass equation ( [ effectivemass ] ) .the velocity field follows from equations ( [ u1b ] ) and ( [ u2b ] ) : or equivalently by using ( [ dustconst1 ] ) and we verified that the acceleration 4-vector field vanishes as the dust fluid is moving geodesically .interestingly , the velocity field remains with non - vanishing shear and expansion rate .it is a well known result that a cosmological model which satisfies the einstein equations with a perfect fluid source , a barotropic equation of state i.e. ( including ) , which is conformally flat ( ) and has non - zero expansion is a lemaitre - friedmann - robertson - walker model ( lfrw ) , and . 
for dust models in the non - comoving frame, the condition ( see appendix [ appweyl ] ) reduces to therefore , the metric ( [ metric ] ) along with the constraints ( [ dustconst1 ] ) , ( [ dustconst2 ] ) and ( [ cflatdust ] ) represents the homogeneous and isotropic ( lfrw ) limit of the models .we are interested here in more general inhomogeneous models .the light emitted with a wavelength from a point on the light cone is observed at the vertex `` here and now '' ( see figure [ coordinates ] ) with a wavelength .the redshift is then given by ( see e.g. , ) where is the normalized timelike velocity vector field and is the null vector as given previously by ( [ nullvector ] ) .it follows that where is given by ( [ u1b ] ) .it follows from the regularity condition ( [ regul ] ) and the timelike normalization condition evaluated at ( observer ) that where in the last step we used the freedom in the null coordinate to set .finally , for the dust case , equation ( [ redshift1 ] ) gives or equivalently by using the constraint ( [ dustconst1 ] ) where we have set in equations ( [ dustredshift1 ] ) and ( [ dustredshift2 ] ) .the coordinate in the model is set by construction to be the observer area distance which is defined by where is the cross - sectional area of an emitting object , and is the solid angle subtended by that object at the observer .the area distance is related to the luminosity distance by .the luminosity distance can be determined by comparing the observed luminosity of an object to its known intrinsic luminosity : where is the observed ( measured ) flux of light received and is the object s intrinsic luminosity .another observable of interest is the source number counts as a function of the redshift .an observer at the vertex `` here and now '' will count on the light cone a number of sources between redshifts and in a solid angle .it follows that where is the number density of sources and is a fractional number indicating the efficiency of the counts ( completeness ) .this number corrects for errors in source selection and detection , see e.g. . for simplicity, we can assume that the necessary adjustments for the dark matter can be incorporated via .the energy - density follows where is the average rest mass for the counted sources .as discussed earlier , the approach used allows us to integrate explicitly the model given observational data . 
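The observational quantities used in this subsection can be collected into two small helper functions. The displayed relations are stripped in the text above, so the standard forms are used here as stand-ins: d_L = sqrt(L/(4*pi*F)) from a measured flux, and the reciprocity relation d_L = (1+z)^2 r linking the luminosity distance to the area distance r that serves as the radial coordinate; the numbers in the example call are purely illustrative.

```python
import numpy as np

# Standard-candle helpers (standard relations used as stand-ins, see lead-in).
def luminosity_distance(intrinsic_luminosity, measured_flux):
    return np.sqrt(intrinsic_luminosity / (4.0 * np.pi * measured_flux))

def area_distance_from_dl(d_lum, z):
    return d_lum / (1.0 + z) ** 2

d_l = luminosity_distance(3.0e28, 1.0e-17)       # illustrative numbers, SI units
print(area_distance_from_dl(d_l, 0.3))           # area distance of the source at z = 0.3
```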
as a first step , we rearrange the model equations .we combine equations ( [ dustrhomass ] ) evaluated at with equation ( [ dustredshift1 ] ) and use to obtain which integrates to integrating ( [ dustrhomass ] ) for gives }\label{integratedm1}\ ] ] where we also used .now , the observations provided as polynomial functions and fitted to the data can be used to integrate explicitly for and .the steps for the fitting algorithm are as follows : * express cosmological data as polynomial functions for : + i ) the energy - density from galaxy number counts .many projects are accumulating very large amounts of data .see , for example , for the `` sloan digital sky survey '' .+ ii ) the observer area distance from `` standard candles '' projects in which it is possible to measure the redshift and the distance independently .the accumulating data from the supernovae cosmology projects are very promising .see , for example , for the `` high - z sn search '' project , for the `` supernova cosmology project '' and for `` the supernova acceleration probe ( snap ) '' project .* invert the function to obtain .* this can in turn be used to write the energy - density polynomial function as .* now , with and expressed as functions of ( and not ) , integrate equation ( [ integratedc1 ] ) over to obtain on the light cone .* with determined , integrate equation ( [ integratedm1 ] ) over to obtain on the light cone .* finally , with and determined , use equations ( [ dustconst1 ] ) and ( [ dustconst2 ] ) to integrate over v. the level of difficulty of this last step can be monitored using the analytical forms used for and and remains a tractable problem while integrating the null geodesic equation in the standard 1 + 3 form of the ltb models is not tractable ( see e.g. ) , and one has to recourse to numerical integrations .moreover , the fitting procedure has the interesting feature of incorporating the observations in the process of integrating explicitly for the metric functions .it is worth mentioning that in principle the information on our light cone can not determine its future evolution uniquely .we need to make the reasonable assumption that there will be no future events in the cosmic evolution that will invalidate the entire data obtained from our light cone , see e.g. .furthermore , one must keep in mind the usual limitation of the underlying models used here as they are spherically symmetric around our worldline and more general inhomogeneous models should be considered in future studies of fitting procedures .we expressed inhomogeneous cosmological models in null spherical non - comoving coordinates using the bondi spherical metric .a known difficulty in using inhomogeneous models is that the null geodesic equation is not integrable in general .our choice of null coordinates solves the radial null geodesic by construction .we identified the general meaning of the metric function to be the reciprocal of the optical expansion .we used an approach where the velocity field is uniquely calculated from the metric rather than put in by hand .conveniently , this allowed us to explore models in a non - comoving frame of reference . in this frame, we find that the velocity field has shear , acceleration and expansion rate in general . 
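The data-handling half of the fitting algorithm above (steps 1-3) can be sketched in a few lines. The mock arrays below stand in for real number-count and standard-candle data, and the degree of the polynomial fits is an illustrative choice; the model-specific light-cone quadratures for c and m described in steps 4-6, whose integrands are given by the equations of this section, would then consume the resulting functions of r and are not carried out here.

```python
import numpy as np

# Steps 1-3 of the fitting procedure on mock observations.
z_obs = np.linspace(0.0, 1.5, 40)
r_obs = 3000.0 * z_obs * (1 - 0.4 * z_obs + 0.1 * z_obs ** 2)   # mock area distances
rho_obs = 3e-27 * (1 + z_obs) ** 3                               # mock densities

r_fit = np.polynomial.Polynomial.fit(z_obs, r_obs, deg=4)        # step 1: polynomials in z
rho_fit = np.polynomial.Polynomial.fit(z_obs, rho_obs, deg=4)

z_grid = np.linspace(0.0, 1.5, 400)
r_grid = r_fit(z_grid)

def z_of_r(rr):                      # step 2: invert r(z) (valid while r(z) is monotone)
    return np.interp(rr, r_grid, z_grid)

def rho_of_r(rr):                    # step 3: density as a function of the area distance
    return rho_fit(z_of_r(rr))

print(rho_of_r(r_fit(0.5)), rho_fit(0.5))   # consistency check of the inversion
```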
in this set of coordinates, we showed that a comoving frame is not compatible with expanding perfect fluid models and dust models are simply not possible in this frame .we described then perfect fluid and dust models in a non - comoving frame .the framework developed allows one to outline a fitting procedure where observational data can be used directly to integrate explicitly for the models .the author thanks kayll lake and roberto sussman for useful discussions .this work was supported by the natural sciences and engineering research council of canada ( nserc ) .portions of this work were made possible by use of _ grtensorii _ .the paths of light rays are described by null geodesic trajectories under the eikonal assumption .the geodesic trajectories are determined by solving the null geodesic equation where is a null vector ( ) tangent to the null geodesics , where is an affine parameter . for the bondi metric ( [ metric ] ), the 4 equations ( [ nullgeo ] ) are all satisfied for .the expressions for the components of the mixed einstein tensor are as follows where , , and .we note that these components are related by structure of the weyl tensor is usually explored to derive the conformally flat case of a cosmological solution ( i.e. ) .this can reveal the limits of the model s parameters for which it reduces to a lemaitre - friedmann - robertson - walker ( lfrw ) model .the non - vanishing components of the mixed weyl tensor for the metric ( [ metric ] ) are related and given by where } } & \nonumber \end{aligned}\ ] ] the condition for conformal flatness of the models is therefore .a. molina and j.m.m .senovilla ( editors ) , _ inhomogeneous cosmological models _ ( world scientific , singapore , 1995 ) .d. kramer , h. stephani , e. herlt and m. maccallum , _ exact solutions of einstein s field equations _( cambridge university press , cambridge , 1980 ) , see also the recent second edition 2003 .w. c. hernandez and c. w. misner , astrophys .j. * 143 * , 452 ( 1966 ) .m. e. cahill and g. c. mcvittie , j. math .phys , 11 , * 1360 * ( 1970 ) .e. poisson and w. israel , phys .rev d * 41 * , 1796 ( 1990 ) .t. zannias , phys .rev . d*41 * , 3252 ( 1990 ) s. hayward , phys .rev . d*53 * , 1938 ( 1996 ) ( gr - qc/9408002 ) .this is a package which runs within maple .it is entirely distinct from packages distributed with maple and must be obtained independently .the grtensorii software and documentation is distributed freely on the world - wide - web from the address http://grtensor.org
|
we use null spherical ( observational ) coordinates to describe a class of inhomogeneous cosmological models . the proposed cosmological construction is based on the observer past null cone . a known difficulty in using inhomogeneous models is that the null geodesic equation is not integrable in general . our choice of null coordinates solves the radial ingoing null geodesic by construction . furthermore , we use an approach where the velocity field is uniquely calculated from the metric rather than put in by hand . conveniently , this allows us to explore models in a non - comoving frame of reference . in this frame , we find that the velocity field has shear , acceleration and expansion rate in general . we show that a comoving frame is not compatible with expanding perfect fluid models in the coordinates proposed and dust models are simply not possible . we describe the models in a non - comoving frame . we use the dust models in a non - comoving frame to outline a fitting procedure .
|
a multiple sequence alignment ( msa ) of a family of proteins provides us with valuable information to characterize the protein family in terms of patterns of amino acid residues at alignment sites .the usefulness of analyzing the residue compositions in the msa has led to the development of a class of sequence profile methods such as psi - blast and profile hidden markov models ( hmm ) , which can be used to detect distantly related proteins , to obtain high - quality alignments , and to improve structure prediction as well as to characterize functional and structural roles of the conservation pattern . in the sequence profile methods, it is assumed that the residue composition of each site is independent of other sites . with this crude assumption ,the conservation of residues are explained in terms of their functional and structural roles .however , to further understand the mechanism of these roles in the context of protein sequences , one needs to drop the assumption of site independence .in fact , there seems to be no way for a residue to `` know '' that it is in a particular position in the sequence to play a particular functional or structural role other than by its interactions with other residues in the sequence ( or with other molecules in the biological system ) .therefore , to understand what makes particular residues important at each site , one needs to study the correlations between different sites .correlations between distant sites in a msa can be quantified by identifying correlated substitutions .they have been exploited to gain further insights of structures and functions of proteins . however , the apparent correlations observed in a msa are only a result of intricate interactions between residues as in the underlying native structures of proteins .recently , there have been a number of successful attempts to extract direct correlations which are in fact found to be in excellent agreement with the residue - residue contacts in native structures to the extent that the three - dimensional structures can be actually ( re)constructed .one drawback of the direct - coupling analysis ( as well as other direct correlation methods ) is that it takes into account only those alignment sites that are well aligned ( the `` core '' sites ) , and ignores insertions .the primary difficulty in the treatment of insertion is that they are of variable lengths , which makes the system size variable and hence greatly complicates the problem .when one is interested in some universal properties of a protein family such as their approximate three - dimensional fold , insertions may be irrelevant. however , when one is interested in a particular member of the family , the existence of some insertions may be important .in fact , insertions , which may be regarded as `` embellishments '' to a conserved structural core , are deemed to be an effective strategy for proteins to diversify and specialize their functions .some insertions are also known to play critical roles in protein oligomerization . 
of more fundamental concernis that ignoring insertions in a msa means ignoring the polypeptide chain structure , which implies theoretical as well as practical consequences .theoretically , it is questionable to ignore such a strong interaction as the peptide bond in order to accurately describe the sequence and structure of proteins .practically , in order to identify new members of a family by aligning their sequences to some msa - derived model incorporating direct correlations , a consistent treatment of polypeptide sequences is necessary . in this paper ,i present a new statistical model of the msa that incorporates both direct correlations and insertions .the main objective of this model is incorporation of long - range correlations into multiple - sequence alignment , rather than improving contact prediction by incorporating insertions .as will be apparent from the formulation , this model is a generalization of the direct - coupling analysis that is based on the principle of maximum entropy .this model can be regarded as a finite , quasi - one - dimensional , multicomponent , and heterogeneous lattice gas model where the `` particles '' are amino acid residues . in the following , the `` lattice gas model '' refers to this model .the lattice system consists of two kinds of lattice sites , corresponding to the core ( matching or deletion ) or the insert , that are connected in a similar , but distinctively different , manner as in the profile hmm model .while long - range interactions are treated by using a mean - field approximation , short - range interactions are treated rigorously so that the partition function is obtained analytically by a transfer matrix method .one notable feature of this model is that its partition function literally accounts for all the possible alignments with all the possible protein sequences , including infinitely long ones .based on this model , various virtual experiments can be performed by changing the `` temperature '' of the system or by manipulating the `` chemical potentials '' associated with the particles ( residues ) at each site .the paper is organized as follows . in section [ sec : theory ] , some basic quantities are defined and the lattice gas model of the multiple sequence alignment is formulated . section [ sec : materials ] provides the details of numerical methods and data preparation .section [ sec : results ] gives the results of virtual experiments by increasing the temperature or by introducing alanine point mutants .in section [ sec : discussion ] , limitations , implications as well as possible extensions of the present model are discussed .the lattice structure of the model .the squares marked with correspond to core ( matching / deletion ) sites , the diamonds marked with correspond to insert sites .the edges between sites indicates bonded interactions .see figure [ fig : alignment ] for concrete examples.,width=302 ] a msa may be regarded as a matrix of symbols in which each row is a protein sequence possibly with gaps and each column is an alignment site .some columns may contain few gaps so the residues in such positions may be relatively important for the protein family . here, i informally define a `` core '' ( matching / deletion ) site as an alignment site which are relatively well aligned .the remaining sites are defined to be insert sites .core sites are ordered from the n - terminal to the c - terminal , and denoted as with being the number of core sites . 
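The informal core/insert classification described above can be sketched as follows: a column is treated as a core (matching/deletion) site when its gap fraction stays below a threshold and is assigned to an insert site otherwise. The 0.5 threshold and the toy alignment are illustrative choices only; in the study itself the classification is taken from the Pfam profile HMM, as stated later.

```python
import numpy as np

# Classify MSA columns as core ('o') or insert ('i') by gap fraction.
msa = ["VLSPAD-K",
       "VLS-ADWK",
       "VLSPAD-K",
       "V-SPADNK"]
columns = np.array([list(row) for row in msa])
gap_fraction = (columns == '-').mean(axis=0)
column_type = np.where(gap_fraction < 0.5, 'o', 'i')
print(''.join(column_type))          # -> 'ooooooio' for this toy alignment
```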
for convenience ,the terminal core sites and are appended to indicate the start and end of the alignment , as in the profile hmm . to each core site , either one of 20 amino acid residues or a gap ( deletion ) may be assigned , and the latter is treated as the 21-st type of residue .an insert site between two core sites and is denoted as .all the gap symbols ignored at an insert site . in the following , the ( ordered ) sets of core and insert sites are denoted as and , respectively , and their union as .in addition , let us define a set of amino acid residues allowed for an insert site as ( 20 amino acid residue types ) , and that for a core site as ( 20 amino acid residues and deletion ) for and ( deletion only ) for the terminal sites . for one protein sequence in the msa , at most one residue may correspond to each core site whereas any number of residues may correspond to an insert site . in this sense , residues behave like fermions on core sites and like bosons on insert sites .the set of core and insert sites comprise a quasi - one - dimensional lattice structure as shown in figure [ fig : model ] . in this lattice structure ,two sites are connected if two consecutive residues in a protein sequence ( possibly including gap symbols ) can be assigned . if two sites are directly connected , they are defined to be a bonded or short - range pair .the self - connecting loop in each insert site indicates that it makes a bonded pair with itself .thus , an insertion may be indefinitely long , manifesting its boson - like character . based on this lattice system , an alignment of a particular protein sequence in the msa may be represented as a sequence of length consisting of ordered pairs of a lattice site and a residue of : ( `` matchings '' to the terminal sites are also included ) . here , each with and .a whole msa consisting of sequences is a set of such aligned sequences : .figure [ fig : alignment ] shows some concrete examples of this representation of alignment .example of a multiple sequence alignment ( based on ) .each row corresponds to a protein sequence ( s1, , s7 ) and each column to an alignment site . below the horizontal line ,each alignment site is annotated as to whether it corresponds to a core ( matching or deletion ) site ( `` o '' ) or an insert site ( `` i '' ) . indicated below these `` o''/ `` i '' symbols are the position of lattice sites .figure [ fig : model ] . )the size of the lattice model based on this msa is .insert sites other than , and are not explicit in this msa .for example , the alignment of the sequence s2 in this figure is represented as where the first and last pairs represent the start and end of the alignment , respectively . as another example , the alignment of sequence s7 is ., width=302 ] using the above representation , let us define some quantities that characterize an alignment in a given msa .for a given lattice model and its alignment with a protein sequence , the number of the residue type at the lattice site is defined as this quantity is referred to as the single - site count .similarly , the number of a pair of residue types and on a bonded pair of lattice sites and occupied by two consecutive alignment sites is defined as which is referred to as the bonded pair count .the single - site counts and bonded pair counts are the two fundamental stochastic variables in the present theory . 
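The alignment representation and the two fundamental count variables just defined can be illustrated concretely. In the sketch below (hypothetical sequences), one aligned row together with the 'o'/'i' column annotation of the previous sketch is turned into the ordered list of (lattice site, residue) pairs, and the single-site counts and bonded pair counts are read off directly; consecutive entries of the list are the bonded pairs.

```python
from collections import Counter

# One aligned row -> ordered (site, residue) pairs -> single-site and bonded
# pair counts.  'O' marks core sites, 'I' the insert site after a core site.
def align_to_pairs(row, column_types):
    pairs = [('O0', '-')]                              # start terminal (deletion only)
    core = 0
    for residue, ctype in zip(row, column_types):
        if ctype == 'o':
            core += 1
            pairs.append(('O%d' % core, residue))      # '-' at a core site = deletion
        elif residue != '-':
            pairs.append(('I%d' % core, residue))      # gap symbols at inserts are ignored
    pairs.append(('O%d' % (core + 1), '-'))            # end terminal
    return pairs

pairs = align_to_pairs('VLS-ADWK', 'ooooooio')
n_single = Counter(pairs)                              # single-site counts n_s(a)
n_bonded = Counter(zip(pairs[:-1], pairs[1:]))         # bonded pair counts n^b_{ss'}(a, b)
print(pairs)
print(n_single[('I6', 'W')], n_bonded[(('O6', 'D'), ('I6', 'W'))])   # -> 1 1
```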
for later convenience ,let us define the non - bonded pair counts as for .note that the non - bonded pair counts may be defined for residues residing on neighboring lattice sites as well as on the same ( ) site .the terms `` bonded '' and `` non - bonded '' here are meant to describe the connectivity along the polypeptide sequence rather than that along the lattice system ( a pair of residues in neighboring lattice sites may be either bonded or non - bonded depending on the given alignment ) . from these definitions , several relations follow .first , by the fermion - like character of the core site , we have for each between bonded pair counts and single - site count , we have where and .lastly , between non - bonded pair counts and single - site count , we have where .i would like to statistically characterize the given msa in terms of the above quantities . to do so , suppose that the probability of an alignment is known for the lattice model .then , the expectation values of these numbers are defined as follows : which are referred to as single - site ( number ) densities , bonded pair ( number ) densities , and non - bonded pair ( number ) densities , respectively .these number densities naturally satisfy the relations analogous to eqs .( [ eq : rel1])-([eq : relnb2 ] ) . to determine the form of , the principle of maximum entropyis employed with the constraints that the densities are equal to those observed in the given msa .the entropy is given as let us denote the densities estimated from the given msa as , , and ( see section [ sec : materials ] for the method to obtain these quantities ) .the following lagrangian , consisting of the entropy ( eq . [ eq : entropy ] ) and the constraints for the densities , is maximized : \nonumber\\ & & + \frac{1}{2}\sum_{s , s'}\sum_{a , b}k_{ss'}(a , b ) [ { n}_{ss'}^{nb}(a , b ) - \bar{n}_{ss'}^{nb}(a , b)]\nonumber\\ & & + \sum_{s , a}\mu_{s}(a)[{n}_{s}(a ) - \bar{n}_{s}(a)]\end{aligned}\ ] ] where , and are undetermined multipliers , and the summation is over bonded pairs .we have also introduced the `` temperature '' parameter .solving leads to the boltzmann distribution : }{\xi},\ ] ] where is the normalization constant or the partition function defined by ,\label{eq : partition - function}\ ] ] and is the `` energy '' of the system given as from this expression of the energy function , we can interpret as the chemical potential imposed on the particle ( amino acid residue ) at the site , and and as bonded and non - bonded coupling parameters , respectively .the problem of obtaining the probability distribution is thus reduced to computing the partition function . in the following, the non - bonded interactions are considered only between core sites ( i.e. , core - insert and insert - insert pairs are discarded ) for a technical reason ( see the subsection `` determining the matrix '' below ) . in this subsection, i assume that the parameters and are fixed . to treat the long - range interactions ,a mean - field approximation is applied .then , the partition function can be computed by a transfer matrix method .let us define the mean field acting on the residue type on the site : \ ] ] where is subtracted for convenience , but this does not essentially change the system s behavior ( it simply shifts the chemical potential which can be compensated by ; see eq . 
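The transfer-matrix evaluation of the partition function, including the geometric series (I - T_II)^(-1) that sums over insertions of every length, can be checked numerically on a toy model. The sketch below uses q = 2 residue types, two interior core sites with one insert site after each, start/end sites carrying a deletion only, and random positive Boltzmann weights on the bonded pairs, scaled so the insert self-loop matrices have spectral radius below one; the connectivity loosely follows figure 1 and all numbers are illustrative. The closed form is compared with a brute-force enumeration whose insertions are truncated at a maximum length.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
q, L = 2, 2
nc = q + 1                                  # core letters: residues 0..q-1, deletion = q
DEL = q

T_co = [rng.uniform(0.5, 1.5, (nc, nc)) for _ in range(L + 1)]      # core i -> core i+1
T_ci = [rng.uniform(0.5, 1.5, (nc, q)) for _ in range(L + 1)]       # core i -> insert i
T_ii = [0.15 * rng.uniform(0.5, 1.5, (q, q)) for _ in range(L + 1)] # insert i -> insert i
T_ic = [rng.uniform(0.5, 1.5, (q, nc)) for _ in range(L + 1)]       # insert i -> core i+1

def transfer_Z():
    z = np.zeros(nc); z[DEL] = 1.0                       # alignments start at O_0 = '-'
    for i in range(L + 1):
        geom = np.linalg.inv(np.eye(q) - T_ii[i])        # sums insertions of every length
        z = z @ (T_co[i] + T_ci[i] @ geom @ T_ic[i])
    return z[DEL]                                        # alignments end at O_{L+1} = '-'

def brute_force_Z(max_insert):
    total = 0.0
    for cores in product(range(nc), repeat=L):           # residues/deletions on O_1..O_L
        path = [DEL] + list(cores) + [DEL]
        for ins_lens in product(range(max_insert + 1), repeat=L + 1):
            for ins in product(*[list(product(range(q), repeat=n)) for n in ins_lens]):
                w = 1.0
                for i in range(L + 1):
                    if len(ins[i]) == 0:
                        w *= T_co[i][path[i], path[i + 1]]
                    else:
                        w *= T_ci[i][path[i], ins[i][0]]
                        for a, b in zip(ins[i][:-1], ins[i][1:]):
                            w *= T_ii[i][a, b]
                        w *= T_ic[i][ins[i][-1], path[i + 1]]
                total += w
    return total

print(transfer_Z())          # exact sum over alignments of every insertion length
print(brute_force_Z(4))      # approaches the exact value as the insertion cutoff grows
```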
[eq : tmat ] and section [ sec : gauge ] ) .next , let us define the transfer matrices between a bonded pair of sites and as .\nonumber\\ & & \label{eq : tmat}\end{aligned}\ ] ] to alleviate the expressions for the partial partition functions , a bracket notation is introduced .first , define a set of standard basis vectors : and corresponding to each residue type on each site .these vectors satisfy the following orthonormal properties : where is the -dimensional identity matrix .for each site , i define the partial partition functions and that count the statistical weight of all possible alignments starting from the start site and terminating at and , respectively .similarly , partial partition functions and account for all possible alignments `` starting '' from the end site and `` terminating '' at and .any ( complete ) alignment starts at the start site and ends at the end site , and these sites are formally treated as `` deletion ( ` - ` ) . ''therefore , the boundary conditions are given as based on this setting , the recursion formulae for partial partition functions are given as in the forward ( n- to c - terminal ) direction , and in the backward ( c- to n - terminal ) direction . here , each transfer matrix is viewed as a matrix with . by expanding eq .( [ eq : forwardi ] ) , we have where ( the 20-dimensional identity matrix ) .similarly , we have thus , and indeed include contributions from infinitely long insertions . the inverse matrix exists if the spectral radius of is less than 1 .using eqs .( [ eq : itii ] ) and ( [ eq : itii2 ] ) , the recursions can be explicitly solved as where finally , the total partition function is obtained as let us now compute the expected densities . from the definition of the partition function ( eq . [ eq : partition - function ] ) , the following equalities hold for single - site and bonded pair densities : by explicitly calculating the left - hand sides of these equations using eq .( [ eq : ztot ] ) , we have , for and , it is readily proved that these expressions satisfy the relations between bonded pair and single - site densities ( eqs .[ eq : relb1 ] [ eq : relb2 ] ) .it is also possible to derive an analytical expression for the expected non - bonded pair densities from that is , where however , eq . ( [ eq : marginalnb ] )is not used in practice for the reason described below ( section [ sec : kmat ] ) .this expression should be considered as an artifact of the present approximation on the one - dimensional lattice system .in fact , under the mean - field approximation , one should have , but this does not hold for eq .( [ eq : marginalnb ] ) .several `` thermodynamic functions '' are defined for quantifying the stability of the system under perturbations .first , the free energy function should be regarded as a grand potential because alignments of varying lengths are considered in the ensemble .this free energy is a measure of the likelihood of alignments expressed in terms of the number densities . 
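before turning to the thermodynamic decomposition , a minimal numerical sketch of the transfer - matrix recursion above may help . the transfer - matrix blocks below are random positive placeholders standing in for the exponentiated couplings , mean fields and chemical potentials ( the exact parameterization is not reproduced here ) ; only the computational structure follows the text , namely the geometric series over arbitrarily long insertions and the forward / backward products that give the partition function and the expected single - site densities .

```python
import numpy as np

rng = np.random.default_rng(0)
q, qi, L = 21, 20, 5          # core alphabet (20 residues + deletion), insert alphabet, core sites
DEL = 20                      # index of the deletion symbol on core sites

# Random positive placeholders for the transfer-matrix blocks between consecutive core sites
# k and k+1: core->core, core->insert, insert->insert (self-loop) and insert->core.
T_cc = rng.uniform(0.0, 1.0, (L + 1, q, q))
T_ci = rng.uniform(0.0, 1.0, (L + 1, q, qi))
T_ii = rng.uniform(0.0, 0.02, (L + 1, qi, qi))   # kept small so the spectral radius is < 1
T_ic = rng.uniform(0.0, 1.0, (L + 1, qi, q))

def effective_transfer(k):
    """Sum the geometric series over arbitrarily long insertions between core sites k and k+1."""
    assert np.max(np.abs(np.linalg.eigvals(T_ii[k]))) < 1.0
    inner = np.linalg.inv(np.eye(qi) - T_ii[k])      # I + T_ii + T_ii^2 + ...
    return T_cc[k] + T_ci[k] @ inner @ T_ic[k]

M = [effective_transfer(k) for k in range(L + 1)]

# Forward/backward partial partition functions over core sites 0..L+1; sites 0 and L+1 are
# the terminal sites and carry only the deletion symbol.
f = np.zeros((L + 2, q)); f[0, DEL] = 1.0
for k in range(L + 1):
    f[k + 1] = M[k].T @ f[k]
b = np.zeros((L + 2, q)); b[L + 1, DEL] = 1.0
for k in range(L, -1, -1):
    b[k] = M[k] @ b[k + 1]

Z = f[L + 1, DEL]             # total partition function
rho = f * b / Z               # expected single-site densities on core sites
print(Z, np.allclose(rho[1:L + 1].sum(axis=1), 1.0))   # internal core densities sum to one
```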
by rearranging eq .( [ eq : boltzmann ] ) and averaging over all alignments , the free energy can be decomposed as where , and are the internal energy , entropy and gibbs energy of the system .the internal energy of the system is given as where and are bonded and non - bonded energies , respectively , defined ( under the mean - field approximation ) by .\end{aligned}\ ] ] these correspond to the first two terms on the right - hand side of eq .( [ eq : energy ] ) .the internal energy represents the mean `` direct '' interactions ( bonded and non - bonded ) between sites .the gibbs energy is defined as and this quantity represents the work exerted by the chemical potential to maintain the single - site densities .finally , the entropy is given as which is equivalent to the entropy in eq .( [ eq : entropy ] ) and thus is a measure of randomness of the alignments .the temperature is set to 1 and the chemical potentials are set to 0 for all when the parameters ( and ) are determined .this state is referred to as the reference state in the following . the relations among the densities ( eqs .[ eq : rel1][eq : relnb2 ] ) indicate that not all the parameters , , , and , are independent .when determining or changing the model parameters , we may therefore fix some of them to arbitrary values without losing generality . from the normalization condition ( eq . [ eq : rel1 ] ) of core sites , it is always possible to set for all the sites ( `` '' stands for the deletion ) . from this and the relations eqs .( [ eq : relb1 ] ) and ( [ eq : relb2 ] ) , it is always possible to set although there are other degrees of freedom that can be also fixed , they are not relevant to the present study so i will not fix them .furthermore , at the reference state , i set all to zero .this is possible because any values of may be absorbed into when determining the parameters ( c.f . , eq .[ eq : tmat ] ) . following the convention of morcos et al . , i also set for all and .i have downloaded the msa s and profile hmm s for the globin ( pf00042 ) and ( immunoglobulin ) v - set ( pf07686 ) domains from the pfam database ( version 28 ) . for the globin domain ,the full alignment of 17,947 amino acid sequences were used . for the v - set domain ,the full alignment of of 23,976 sequences was used .in addition , i have downloaded 17 families from the top 20 largest pfam families with the model length of less than 300 sites .for these 17 families , the representative set of alignments ( with 75% sequence identity cutoff ) were used due to the large size of the alignments . in the present study ,the lattice structure of a msa was derived from the corresponding pfam model .that is , each core site corresponds to a profile hmm match state , and each insert site to a profile hmm insert state .the simplest way to estimate the single - site , bonded and non - bonded pair densities from a msa of sequences is to approximate for all the sequences . in practice , i used pseudo - counts as well as sequence weights as in morcos et al. 
to improve the robustness of the estimates .let there be aligned sequences , , in a given msa and suppose the structure of the lattice system has been set .the observed densities are defined as follows : ,\\ \bar{n}_{ss'}^s(a , b ) & = & c\left[\frac{\gamma}{2q_sq_{s'}}+\sum_{t=1}^m\frac{n_{ss'}^s(a , b|\mathbf{x}^t)}{m_t}\right],\\ \bar{n}_{ss'}^l(a , b ) & = & c\left[\frac{\gamma}{q_sq_{s'}}+\sum_{t=1}^m\frac{n_{ss'}^l(a , b|\mathbf{x}^t)}{m_t}\right]\end{aligned}\ ] ] where , , is the pseudo - count , is the number of sequences in the msa that are highly homologous ( % sequence identity ) to the sequence , and with .note that these estimated densities satisfy the relations analogous to eqs .( [ eq : rel1])([eq : relnb2 ] ) .flow chart for determining the matrix parameters ., scaledwidth=50.0% ] as mentioned above , the temperature is set to unity ( ) in the process of parameter determination . to determine , eq .( [ eq : marginal2c ] ) is rearranged to have \ ] ] where it is assumed and for all and ( see section [ sec : gauge ] ) .setting is possible because the expected number densities are set to the observed values ( see eq .[ eq : mean - field ] ) . by replacing with the observed value , one can iteratively update the values of and compute the partition function until this equation actually holds . in practice ,a relaxation parameter is introduced to improve the stability of convergence .thus , from the -th step of iteration , the next updated value is obtained by the following scheme .,\\ j_{ss'}^{(\nu + 1)}(a , b ) & = & ( 1 - \alpha ) j_{ss'}^{(\nu)}(a , b ) + \alpha j_{ss'}'(a , b).\end{aligned}\ ] ] i found the values were effective . determining necessitates a special treatment due to the requirement that the spectral radius of the transfer matrix must be less than 1 ( see eq .[ eq : itii ] ) . in order to force to be invertible , a parameter introduced such that .( [ eq : marginal2c ] ) for becomes let us define the `` loop length '' as and denote its observed counterpart by . by imposing have which is a self - consistent equation for .thus , first is set to a sufficiently large value and compute the partition function and expected densities .then , is updated by eq .( [ eq : lambda ] ) , and by using the updated value of , we again compute the partition function and expected densities .this process is repeated until the value of converges .after the convergence of for all , is updated as in eq .( [ eq : marginal2c ] ) without including . in this way, the contribution of is incorporated into the updated value of , and will eventually converge to 1 , and hence may be omitted in later calculations .the overall procedure for determining the matrix is shown in fig .[ fig : detj ] . in this procedure , the given data are the observed densities and initial values for and .after the partition function and expected densities are computed , is iteratively updated .after has converged , is updated .convergence is checked based on the difference of the expected bonded pair densities from their observed values : when the root mean squeare between the two densities is less than , the iteration is stopped . in this study , only those between core sites are taken into account for non - bonded interactions . including non - bonded interactions with insert sites is numerically unstable because the spectral radius of may easily exceed 1 . noting the gauge fixing ( eq . 
[ eq : kozero ] ) , we first determine viewed as a matrix ( consisting of blocks of submatrices ) by discarding the rows and columns including deletion .then , by fixing the values of , we determine the matrices .let the observed covariance matrix of single - site counts be : in a similar manner as in morcos et al. , one could apply the plefka expansion to the grand potential ( eq . [ eq : free - energy ] ) with as the reference state .however , i found that thus obtained made the system unstable under very weak perturbations .this behavior is perhaps due to the incompatibility of the mean - field approximation with the one - dimensional system ( see the remark at the end of section [ sec : expdensity ] ) . in order to cope with this problem ,i employ the following gaussian ( harmonic ) approximation . by assuming the single - site densitiesare gaussian random variables yielding the observed covariance , the non - bonded coupling is given as which is identical to that derived by morcos et al . using the plefka expansion , except for the diagonal blocks ( i.e. , ) .unlike their case ( where the diagonal blocks are defined to be zero ) , i use the expression for as in eq .( [ eq : k ] ) including the diagonal blocks .the system was again found to be unstable when the diagonal blocks ( and those for bonded pairs ) of were set to zero .this approximation makes the matrix negative semi - definite so that the observed single - site densities are the most stable ones and there are no other optima as far as non - bonded pairs are concerned . to obtain a self - consistent solution for the recursion equation ( eqs .[ eq : forwardo][eq : backwardi ] ) with a given set of parameters , and , we first set the mean - field for all and .then compute the partition function and the expected densities and update by eq .( [ eq : mean - field ] ) .this process is repeated until convergence . in practice , however , i do not use this self - consistent solution ( see below ) .note that our partition function is that of a grand canonical ensemble so the total number of particles ( residues ) can vary .in practice , however , it is preferable to fix the sequence length for comparing different conditions to be meaningful. this can be achieved by adjusting the chemical potentials .first , let us define the sequence length as the number of particles in the system : note that the deletion is not included here ( i.e. , when ) .let denote the target sequence length ( a constant ) . at every step of self - consistent calculation ,update the chemical potential of each residue ( except for deletion ) by where is a small positive constant ( ) .flow chart for obtaining the self - consistent solution with fixed sequence length ( and fixed single - site densities ) . , scaledwidth=50.0% ] in virtual alanine scanning experiments , the single - site densities of particular sites is specified .given densities for all for a particular site can be specified by adjusting the chemical potentials at every iteration of the self - consistent calculations : \ ] ] where is a positive constant ( ) . 
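the two chemical - potential controls described above , fixing the expected sequence length and clamping the composition of one site for the virtual alanine scanning , can be sketched as simple proportional updates . the exact update rules and constants are elided in the text , so the forms below are assumptions ; likewise , expected_densities ( ) is a mock stand - in for the mean - field / transfer - matrix evaluation of the single - site densities , not the actual model .

```python
import numpy as np

q, L, DEL = 21, 5, 20                    # 20 residues + deletion; 5 core sites; deletion index
rng = np.random.default_rng(1)
field = rng.normal(size=(L, q))          # fixed random "field" so the mock has some structure

def expected_densities(mu):
    """Mock stand-in for the model: per-site softmax of chemical potentials plus a field."""
    logits = mu + field
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

mu = np.zeros((L, q))
L_target = 4.2                           # target number of non-deletion residues
site = 2                                 # core site subjected to the virtual alanine mutation
rho_target = np.full(q, 0.0025)
rho_target[0] = 0.95                     # index 0 plays the role of alanine here
kappa1, kappa2 = 0.5, 1.0                # assumed proportional gains

for _ in range(2000):
    rho = expected_densities(mu)
    seq_len = rho[:, :DEL].sum()                      # deletions do not count toward the length
    mu[:, :DEL] -= kappa1 * (seq_len - L_target)      # adjust all non-deletion potentials
    mu[site] += kappa2 * (rho_target - rho[site])     # clamp the mutated site's composition

rho = expected_densities(mu)
print(round(float(rho[:, :DEL].sum()), 2), round(float(rho[site, 0]), 2))   # should approach 4.2 and 0.95
```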
for the case of core sites ,it is always possible to set by subtracting this value from those of other residue types of the same site .when the sequence length is to be fixed as well , both eqs .( [ eq : lenupdate ] ) and ( [ eq : mu - update ] ) are applied .a measure of site conservation is the site entropy defined by for the reference state .the more well - conserved a site , the lower the value of the site entropy .the difference between the reference state and a perturbed state is measured by the kullback - leibler divergence : and the total divergence is defined by now study the behavior of the lattice gas model of multiple sequence alignment by varying temperature or by `` mutating '' a site .i mostly focus on the effect of non - bonded interactions in the following . for this purpose ,i compare the system including both the bonded and non - bonded interactions ( referred to as the `` '' system in the following ) with that including only the bonded interactions ( the `` -only '' system ) .the calculations for the -only system were performed by simply discarding the mean - field , which is justified due to the present definition of the mean - field ( eq . [ eq : mean - field ] ) .all the calculations in the following are based on the `` fixed - length '' solution , and the sequence length ( eq . [ eq : slen ] ) was constrained to that of the reference state .note that the present model does not exhibit phase transition due to the gaussian approximation of the non - bonded pair interactions .that is , the matrix is negative semi - definite so that there exists one and only one minimum for the non - bonded interactions ( i.e. , at the observed single - site densities ) .nevertheless , solving the self - consistent equation with varying temperatures helps to understand the behaviors of interactions . at high temperatures ,all the interactions are effectively weakened .this can be regarded as an idealization of uniform random mutations along the protein sequences of the given family . by observing the residue compositions perturbed by increased temperature, we can see which sites are more robust under the perturbations .the globin domains are found in a wide variety of organisms ranging from bacteria to higher eukaryotes .two of the most famous family members are myoglobins and hemoglobins both of which bind the heme prosthetic group .structurally , globins belong to the class of all- proteins , the lattice gas model of the globin domain consisted of 110 core sites ( excluding the termini ) and 111 insert sites .the self - consistent equation was solved for temperature ranging from to . above the latter temperature , the solutioncould not be obtained stably because the spectral radius of some exceeded 1 . as the temperature increases , the free energy ( grand potential , eq .[ eq : free - energy ] ) increases up to around and then it starts to decrease ( figure [ fig : t - globin]a ) . decomposing the free energy ( eq . [ eq: f - decomp ] ) shows that both the internal energy ( figure [ fig : t - globin]b ) and entropy ( figure [ fig : t - globin]c ) increase with temperature . on the other hand , the gibbs energy ( eq . 
[ eq : gibbs - energy ] ) monotonically decreases with increasing temperature ( figure [ fig : t - globin]d ) , indicating that the sequence length tends to be longer for higher temperaturethis can be understood from the definition of the transfer matrix .since is required , holds for all ( ) so the increased temperature potentially allows a larger number of residues to reside at insert sites . in order to fix the sequence length ,the chemical potential must be negative , and hence the negative gibbs energy .the behaviors of the and -only systems appear similar regarding the free energy , internal energy , entropy and gibbs energy . to see the effect of non - bonded interactions more closely ,the internal energy was decomposed into bonded interactions and non - bonded interactions for the system ( figure [ fig : t - globin]e ) .it appears that the increase in non - bonded energy is more than an order of magnitude smaller ( figure [ fig : t - globin]e , blue line ) compared to that of bonded energy ( figure [ fig : t - globin]e , magenta line ) .furthermore , the divergence ( difference of residue distributions from the reference state ) shows a relatively large difference between the and -only systems ( figure [ fig : t - globin]f ) .thus , the non - bonded interactions are very stable under increased temperatures , and they greatly stabilize the residue composition .a closer examination of each site ( at ) shows that the magnitude of the divergence of the -only system is about three times as large as that of the system ( figure [ fig : t - globin - div ] ) .the broad peaks of the divergence roughly correspond to regions of -helices .furthermore , with non - bonded interactions , finer peaks match the periodicity of the helices ( 3 to 4 residues ) whereas such periodicity is not observed with the -only system .thus , non - bonded interactions seem not only to stabilize the residue composition , but to make the composition more specific to the structure of the domain .the v - set domains are found in many proteins the representative members of which are immunoglobulin variable domains .the lattice gas model of this domain consists of 114 core sites ( excluding the termini ) and 115 insert sites .structurally , they belong to the all- class having a -sandwich structure .the same procedures were applied to the v - set domain as the globin domain . in this case , however , self - consistent solutions could be obtained only for temperatures .this may be due to a long insertion allowed at the insert site ( average length of 23.5 residues ) .other than this limitation , the results were found to be qualitatively similar to the case of globins ( figure [ fig : t - vset]a - d ). however , the free energy decrease is more pronounced for the system , compared to the case of the globin .again , while the increase in temperature hardly changes the non - bonded energy ( figure [ fig : t - vset]e ) , the difference of the total divergence between the and -only systems is significant .a close examination of individual sites at also indicates that inclusion of non - bonded interactions greatly suppresses the divergence , and broad peaks roughly correspond to secondary structure elements ( in this case , -strands ) . 
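the site entropy , the per - site kullback - leibler divergence and the total divergence used in these comparisons are direct functions of the single - site densities . a minimal sketch is given below ; the density arrays are random placeholders , and the direction of the divergence ( perturbed state relative to the reference ) is an assumption .

```python
import numpy as np

rng = np.random.default_rng(2)
def normalize(x):
    return x / x.sum(axis=1, keepdims=True)

rho_ref  = normalize(rng.uniform(0.05, 1.0, (110, 21)))   # reference single-site densities
rho_pert = normalize(rng.uniform(0.05, 1.0, (110, 21)))   # densities after heating / mutating

site_entropy     = -(rho_ref * np.log(rho_ref)).sum(axis=1)            # low = well conserved
kl_per_site      = (rho_pert * np.log(rho_pert / rho_ref)).sum(axis=1)
total_divergence = kl_per_site.sum()

print(site_entropy[:3], kl_per_site[:3], round(float(total_divergence), 2))
```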
with the non - bonded interactions , finer peaks appear to match with the periodicity of -strands ( 2 residues )therefore , the conclusion drawn for the globin domain applies also to the v - set domain .that is , the non - bonded interactions act to stabilize the residue composition as well as to make composition more specific to the structure of the domain .as opposed to global perturbations such as increased temperature , local perturbations helps us to examine the contribution of individual sites .local perturbations can be imposed by biasing the residue composition at a site of interest . in this subsection, the composition of a particular core site was biased in such a way that single - site density was set to 0.95 for alanine and to 0.0025 for all other residue types ( including the `` deletion '' residue type ) .this residue composition can be achieved by adjusting the chemical potential .when the site is constrained in this way , the corresponding equilibrium state is referred to as the `` a mutant '' in the following . comparing the free energy difference between the and -only systems, it is immediately noticed that the ranges of are very different between the two ; the former being an order of magnitude larger than the latter . while a large number of alanine mutants for both the and -only systems ( 82 and 101 , respectively , out of 110 ) exhibit ( i.e. , favorable mutants ) , the former ( ) shows a larger number of unfavorable ( ) alanine mutants .apart from the absolute values , the two systems appear to be correlated except for the region from the site 40 to 50 where secondary structures are sparse ( c.f ., figure [ fig : t - globin - div ] ) .in addition , they seem to be negatively correlated with site entropy ( figure [ fig : a - globin - p]a ) : highly conserved sites tend to have high values ( correlation coefficients , cc , were -0.60 and -0.57 for the and -only systems , respectively ) .thus , despite the great difference in magnitudes , the system and -only system appear to be similar in terms of free energy difference . behind this apparent similarity , however , exist different mechanisms , as we shall see in the following . while internal energy difference , , also shows a similar correlation as ( figure [ fig : a - globin]b ) , entropy difference exhibits different , somewhat opposite , trends ( figure [ fig : a - globin]c ) .in fact , the relations between the internal energy and entropy are completely different between the and -only systems ( figure [ fig : a - globin - p]b ) . 
while and are linearly and positively correlated ( cc = 0.99 ) for the -only system , they relation is more complicated for the system : a positive correlation for ( cc=0.65 ) and a negative correlation for ( cc=-0.69 ) .the region corresponds to that spanned by the -only system , and therefore is considered to be the region where local ( bonded ) interactions are dominant in .this in turn indicates that a large increase in nonlocal ( non - bonded ) interactions greatly restricts the residue composition throughout the globin domain .in fact , unlike the case for temperature scanning ( figure [ fig : t - globin ] ) , the perturbation by a point mutation induces a large increase in non - bonded energy that is comparable with that of bonded energy in the system ( figure [ fig : a - globin]e ) .the gibbs energy difference , , reveals a sharp contrast between the two systems ( figures [ fig : a - globin]d and [ fig : a - globin - p]c ) .the gibbs energy differences of the system are clustered below , but has a long tail towards higher values ( skewness was 1.1 ) . on the other hand ,those for the -only system are more or less symmetrically distributed around ( skewness was -1.4 ) .the correlation between and site entropy is evident for the system ( cc = -0.71 ) , but is nearly absent for the -only system ( cc = -0.18 ) ( figure [ fig : a - globin - p]c ) .the total divergence shows a trend similar to the gibbs energy difference in that its values are clustered at lower values and has a long tail towards higher values for the system , and that such is not the case for the -only system ( figure [ fig : a - globin]f ) .although in the both systems the total divergence is well correlated with site entropy , the correlation is higher for the system ( cc = -0.78 ) than for the -only system ( cc= -0.71 ) ( figure [ fig : a - globin - p]d ) . in the -only system ,each mutation perturbs the residue compositions only locally around the mutated site , whereas in the system , a mutation at one site perturbs many sites across the the entire domain . as a result, the contrast between the effects of mutations at highly conserved sites and less conserved sites is higher for the system than for the -only system . in the globin domain ,the two most highly conserved residues are phenylalanine ( phe ) at site 38 ( ) and histidine ( his ) at site 91 ( ) .the alanine mutants at these sites show large differences in ( fig .[ fig : a - globin - p]a ) , ( the two points with the largest in fig .[ fig : a - globin - p]b ) and ( fig .[ fig : a - globin - p]c ) . according to a detailed study by ota et al . , these two residue are conserved for different reasons : phe at site 38 ( `` cd1 '' in ) is conserved for structural stability whereas his at site 91 ( `` f8 '' ) is conserved for the heme - binding function at the cost of structural stability . 
while it is reasonable to observe that the mutant of the structurally conserved phe significantly disturbs the system , the present result suggests that the mutant of the functionally conserved his is also maintained by a significant amount of interactions with other sites .this may indicate the importance of structural scaffold to maintain protein function .the case for the v - set domain is mostly similar to that for the globin domain ( figures [ fig : a - vset ] and [ fig : a - vset - p ] ) .however , there are some marked differences to be noted .first , the free energy differences due to alanine mutations take both positive and negative values for the system , but only negative values for the -only system .the positive values for the former corresponds to relatively well - conserved sites , as can be seen in figure [ fig : a - vset - p]a .in fact , the correlation between and site entropy is significantly higher for the system ( cc = -0.72 ) than for the -only system ( cc = -0.52 ) .second , while the correlation between internal energy and entropy differences is linear and positive for the -only system ( cc = 0.96 ) as was the case with the globin , that for the system of the v - set domain shows only a negative trend for the entire range of ( cc = -0.92 ) .third , the contrast of the gibbs energy difference is far more pronounced ( figure [ fig : a - vset]d , the skewness was 1.8 for and -0.37 for -only ) and its correlation with site entropy is very high for the system ( cc = -0.80 ) whereas it is negligible for the -only system ( cc = -0.08 ) ( figure [ fig : a - vset - p]c ) .similarly , as for total divergence , the system shows sharper contrast ( figure [ fig : a - vset]f ) and higher correlation with site entropy ( cc = -0.80 , figure [ fig : a - vset - p]d ) than the -only system ( cc = -0.67 ) .thus , compared to the case with the globin , the differences between the and -only systems are more pronounced .this may be due to the difference in the structures of these domains .the globin domain has an all- fold in which local interactions in -helices are prominent , whereas the v - set domain has an all- fold in which nonlocal interactions between -strands are prominent .this difference may be reflected in the non - bonded interactions of the lattice gas model , hence the pronounced difference between the and -only systems . to confirm the observations made above , alanine scanningwas performed for 17 pfam families that are the largest in the number of family members and are of model length of less than 300 sites .the free energy difference , , tends to have more positive values for the system than for the -only system ( fig .[ fig : top17a ] , cf . figs .[ fig : a - globin]a and [ fig : a - vset]c ) .the skewness ( i.e. , the standardized third moment ) of consistently have positive values for the system whereas it can be either positive or negative for the -only system ( fig .[ fig : top17]b ) .the negative correlation between site entropy and was also clear for the system whereas such was not the case for the -only system ( fig .[ fig : top17]c ) .thus , the trend that the non - bonded interaction enhances correlation with sequence conservation seem to hold generally .one of the fundamental assumptions of the present lattice gas model is that alignment sites can be classified into core sites and insert sites .although this classification may be ambiguous to some extent , once the classification is made , the lattice structure is uniquely determined . 
while the lattice structure reflects the chemical structure of polypeptide chains , interactions between the lattice sites are not limited to those that are local along the chain. the principle of maximum entropy allows the model to treat bonded ( local ) and non - bonded ( nonlocal ) interactions in a coherent manner . in comparison , the profile hmm shares a similar lattice structure as the lattice gas model , but it can not treat nonlocal interactions due to its assumption of the markov process along the lattice structure . on the other hand , the direct - coupling analysis ( as applied to contact prediction ) , which casts a msa as a potts model , simply ignores insert sites so that it can not faithfully represent polypeptide chains .threading methods or conditional random field models can combine the polypeptide structure with nonlocal interactions , but such integration is often _ad hoc _ because there are no well - defined rules or principles for determining the relative contributions of various interactions .it is possible to treat a msa without classifying its columns into cores and inserts if one ignores the possibility of adding new sequences in the future .in fact , this approach is adopted by the gremlin method by balakrishnan et al . that is based on the markov random fields ( the present lattice gas model also belongs to this class of statistical models ) . in practice , however , they discarded columns with excessive gaps .such a preprocessing seems to be required because alignments within an insertion are often meaningless .this does not necessarily mean , however , that the existence of the insertion is meaningless . in any case , discarding columns of a msa will lose the information about the linear chain structure of protein sequences as well as the possibility of adding new sequences without changing the core structure of the msa .the present lattice gas model resolves the shortcomings of these previous models as both bonded and non - bonded interactions as well as insertions naturally emerge from a single framework .the main tricks here are the classification of core and insert sites and the use of residue counts , and , as fundamental variables rather than the raw alignment sequences ( ) .these are especially important for treating insert sites where any number of residues are allowed to exist .the lattice gas model can compute the probability of an entire alignment , and what has been conventionally regarded as the probability of residue occurrence at sites should be regarded as the expected number of residues at the sites . from a theoretical point of view, the present formulation of the lattice gas model offers an interesting perspective regarding the interplay between local and nonlocal interactions . as can be seen from the relations eqs .( [ eq : relb1])([eq : relnb2 ] ) , or more precisely , from the analogous relations that hold for the number densities , local and nonlocal interactions are not independent of each other , but are related via single - site densities . in this sense, local and nonlocal interactions must be consistent with each other , and the consistency is inherently embedded in a ( well - curated ) msa . 
in the conventional formulation of the direct - coupling analysis , only the relations corresponding to eqs .( [ eq : relnb1 ] ) and ( [ eq : relnb2 ] ) are present because the chain structure is absent .since the parameters conjugate to the single - site densities are external fields ( chemical potentials in the present case ) which are not intrinsic to the system , the relations eqs .( [ eq : relnb1 ] ) and ( [ eq : relnb2 ] ) alone do not address the consistency between local and nonlocal interactions . in this study ,i have adopted the gaussian approximation for the non - bonded coupling parameters ( eq . [ eq : k ] ) as well as the mean - field approximation ( eq . [ eq : mean - field ] ) for computing the partition function .this approach has its advantages and disadvantages .the advantages are that the parameters are readily obtained and that the partition function can be computed analytically and efficiently .these enable us to study the system under various perturbations relatively easily .a major disadvantage is that it is not possible to determine the matrix self - consistently .i therefore resorted to the gaussian approximation by implicitly assuming that each site is independent of other sites , which is not fully consistent with the lattice structure of the system .the reason for this inconsistency is likely to be that the assumption for the mean - field approximation ( i.e. , non - bonded interactions are relatively weak ; see references ) does not actually hold in the present case .due to this approximation , the system does not exhibit a phase transition that might be induced by increased temperatures or by mutations at potentially important sites .in addition , the gaussian approximation required that the diagonal blocks of the matrix , , be used as in eq .( [ eq : k ] ) , otherwise the reference state was found to be unstable .the diagonal blocks represent self - interactions , and hence , are purely site - specific quantities . in this sense , they obscure the mechanism by which the interactions of each site with other sites induce the residue composition of that site . overcoming these problems would require the direct maximization of the lagrangian ( eq . [ eq : lagrangian ] ) with respect to the parameters without diagonal ( and bonded pair ) blocks .it is also possible to apply other approximate methods such as pseudo - likelihood maximization . despite these limitations in the treatment of non - bonded interactions, the present results already provided some interesting observations regarding the role of non - bonded interactions .an increased temperature exerts a global and unbiased perturbation on the system . 
in this case, it was found that non - bonded energy did not significantly change compared to the bonded energy ( figures [ fig : t - globin]e and [ fig : t - vset]e ) .this implies that the residue compositions at each site adapt to the perturbation in a cooperative manner so that they stay stable .this in turn suggests , at least within the limitation of the approximations , that a protein family can accommodate a diverse variety of amino acid sequences as far as the pattern of correlations between sites is conserved .on the other hand , the virtual alanine scanning revealed a more conspicuous effect of non - bonded interactions .alanine mutations at well - conserved sites disturbed the system to a greater extent as measured by free energy , gibbs energy and total divergence ( figures [ fig : a - globin - p ] and [ fig : a - vset - p ] ) , and the relation between internal energy and entropy changes was completely different from those of -only systems .in particular , the observation that many or most of the free energy changes were negative for the -only system ( figures [ fig : a - globin]a and [ fig : a - vset]a ) suggests that residue conservation can not be explained without considering nonlocal ( non - bonded ) effects . the interactions in the lattice gas modeloriginate solely from the statistics of a msa .they are therefore not directly related to physical interactions .however , it has been demonstrated that the matrix as used in this study is a good predictor of physical contacts in native protein structures . to further confirm this, the present results showed that the effect of non - bonded ( statistical ) interactions was more pronounced in the v - set domain ( an all- fold , involving more nonlocal physical interactions ) than in the globin domain ( an all- fold , involving less nonlocal interactions ) . in addition , the -only system showed relatively better correlations with conservation for the globin than for the v - set domain , indicating that the bonded interactions also reflect physical local interactions to some extent .this point is also supported by the correlation , albeit weak , between divergence and secondary structures ( figures [ fig : t - globin - div ] and [ fig : t - vset - div ] ) .thus , the lattice gas model provides a means to connect the information in amino acid sequence with the underlying three - dimensional structure of the domain .this connection can not be addressed directly in conventional sequence analysis methods such as the profile hmm .in fact , the very existence of long - range correlations indicates that msa s can not be modeled as a purely one - dimensional system where long - range correlations simply can not exist .considering this fact , it is surprising that conventional multiple sequence alignment methods , inherently based on the one - dimensional system , can produce msa s with long - range correlations .this may be a manifestation of the consistency principle indicated above .there are a few possible extensions and applications of the present lattice gas model . in the present form ,the model is autonomous in the sense that it does not require an input or target sequence for computing various statistical quantities ( once the observed statistical quantities are obtained ) .nevertheless , it is readily possible to align the model with a particular amino acid sequence to compute a partition function and therefore other quantities conditioned on that input sequence . 
in this way, the lattice gas model may be used for detecting remote homologs .the present results ( e.g. , figures [ fig : t - globin - div ] and [ fig : t - vset - div ] ) suggest that inclusion of non - bonded interactions would increase the specificity of the alignment . furthermore, the model can be aligned with a `` sequence '' of a given length with unspecified amino acid residues to compute the partition function that is conditioned on all the amino acid sequences of that length . in this way , one can enumerate those sequences that are compatible with the model . in other words ,the model may be used for designing optimal sequences for a given protein family .such applications may be pursued in the future to open new possibilities in protein sequence analysis .the author thanks kentaro tomii and sanzo miyazawa for reading the initial manuscript and providing some references and comments , haruki nakamura for the support during the development of this work , and nobuhiro go for a critical advice on the consistency principle .this work was supported in part by a grant - in - aid `` platform for drug discovery , informatics , and structural life sciences '' from the mext , japan .
|
the multiple sequence alignment ( msa ) of a protein family provides a wealth of information in terms of the conservation pattern of amino acid residues not only at each alignment site but also between distant sites . in order to statistically model the msa incorporating both short - range and long - range correlations as well as insertions , i have derived a lattice gas model of the msa based on the principle of maximum entropy . the partition function , obtained by the transfer matrix method with a mean - field approximation , accounts for all possible alignments with all possible sequences . the model parameters for short - range and long - range interactions were determined by a self - consistent condition and by a gaussian approximation , respectively . using this model with and without long - range interactions , i analyzed the globin and v - set domains by increasing the `` temperature '' and by `` mutating '' a site . the correlations between residue conservation and various measures of the system 's stability indicate that the long - range interactions make the conservation pattern more specific to the structure , and increasingly stabilize better conserved residues .
|
for a three - node relay network with a single pair of communication nodes ( cns ) and a single relay node ( rn ) , two - way relay ( twr ) communication , where relays receive signals from two transmitters simultaneoulsy and then send signals to the two receivers , doubles the spectral efficiency of one - way relay ( owr ) communications .the concept of the twr communication has been extended to multi - node interference - limited relaying networks .recently , a combined technique of network coding and interference alignment ( ia ) was adopted to _ interfering _ twr networks in order to reduce the effect of interference .on the other hand , there have been few schemes that consider a general interfering twr network which consists of pairs of cns and rns , also known as interfering twr networks . in ,rankov and wittneben showed that the amplify - and - forward ( af ) relaying protocol with interference - neutralizing beamforming can achieve the optimal dof of the half - duplex interfering twr network if for a given .however , the scheme in requires global csi at all nodes and full collaboration amongst all rns .the authors of considered the achievable degrees - of - freedom of interfering owr networks , where the number of cns and rns are the same . in particular ,the interference neutralization technique of was combined with the interference alignment technique to achieve the optimal dof of the interfering owr network .however , the scheme in can not be applied to the general interfering twr network with arbitrary numbers of and .in addition , the scheme in works only with global csi assumption at each node .we investigate the achievable dof of the interfering twr network with local csi at each node and without collaboration among nodes in the network .three - types of relay protocols are considered : i ) af , ii ) decode forward ( df ) , and iii ) compute forward ( cf ) with lattice codes .for each source - destination pair , one of rns is selected to help them , and thus , an opportunistic rn selection ( ors ) technique is proposed to mitigate interference .the proposed ors technique minimizes the sum of received interference at all nodes , and thereby maximizes the achievable dof of the network .we show that the proposed ors technique with af or cf relaying asymptotically achieves the optimal dof as the number of rns , , increases by rendering the overall network interference - free . in particular , for given signal - to - noise ratio ( snr ) and , we derive a sufficient condition on required to achieve the optimal dof for af and cf relaying , which turns out to be defined by implies that . ] . on the other hand, it is shown that the dof with df relaying is bounded by half of the optimal dof .simulation results show that the proposed ors technique outperforms the conventional max - min - snr rn selection technique even in practical communication environments .consider the time - division dupex ( tdd ) half - duplex interfering twr network composed of pairs of cns and rns , as depicted in fig .[ fig : system ] .each pair of the cns attempts to communicate with each other through a single selected rn , and no direct paths between the cns are assumed , i.e. , separated twr network .the two sets of cns at one and the other sides are referred to as group 1 and 2 , respectively , as shown in fig . [fig : system ] . 
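for the numerical sketches that follow , one realization of this network can be drawn as below ; the array layout ( group , pair , relay ) and the helper name are conveniences assumed here , not notation from the paper .

```python
import numpy as np

def draw_network(K, N, rng):
    """h[g, i, r]: channel between the i-th CN of group g (g = 0, 1) and RN r, i.i.d. CN(0, 1);
    by TDD reciprocity the same coefficient applies in both time slots."""
    return (rng.normal(size=(2, K, N)) + 1j * rng.normal(size=(2, K, N))) / np.sqrt(2)

rng = np.random.default_rng(3)
K, N, snr = 3, 50, 10.0 ** (20 / 10)        # 3 CN pairs, 50 candidate RNs, SNR = 20 dB
h = draw_network(K, N, rng)
print(h.shape, round(float(np.var(h)), 2))  # unit-variance complex Gaussian entries
```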
the channel coefficient between the -th cn in group , , and rn is denoted by , , , assuming tdd channel reciprocity .it is assumed that each channel coefficient is an identically and independently distributed ( i.i.d . )complex gaussian random variable with zero mean and unit variance .in addition , channel coefficients are assumed to be invariant during the time slots , i.e. block fading . in the first time slot , denoted by time 1 ,the cns transmit their signals to the rns simultaneously . in the second time , time 2, the selected rns broadcast their signals to all cns .the transmit symbol at the -th cn in group in time 1 is denoted by .the maximum average transmit power at the cn is defined by , and thus the power constraint is given by suppose that rn is selected to serve the -th pair of cns .then , the transmit symbol at rn is denoted by , which includes the information of both and , and the power constraint is given by that is , the symmetric snrs are assumed .if we denote the achievable rate for transmitting and receiving by , the total dof is defined by where and is the received noise variance .from the pilots from the cns in group 1 and 2 , rn , , estimates the channels and , .subsequently , rn calculates the total interference levels ( tils ) , which account for the sums of received interference in time 1 at rn and leakage of interference that it generates in time 2 .as seen from fig .[ fig : system ] , the til at rn for the case where it serves the -th pair of cns , , is given by the rn selection , we extend the distributed rn selection algorithm used in for the owr network with a single pair of source and destination . upon calculating , , rn initiates up to different back - off timers , which are respectively proportional to , if , where is the maximum allowable interference .specifically , rn initiates the back - off timers given by where is the maximum back - off time duration . after the back - off time , if no rns have been assigned to the -th pair of cns , rn announces to serve the -th pair of cns to all the cns and rns in the network and terminates the selection . upon acknowledging this announcement , all other unselected rns deactivate the timers corresponding to the -th pair of cns , i.e. , , , , to exclude the consideration of the selected cns . in this way, the rn with the smallest til value can be selected in a distributed fashion for each . through the proposed rn selection ,we assume without loss of generality that rn is selected to serve the -th pair of cns .since the rn selection is done only if , the total time required to select rns for all cns is not greater than . noting that is independent for different or and has a continuous distribution , the probability of a collision between , s , , is arbitrarily small .thus , can be chosen arbitrarily small compared to the block length .the efficiency for the achievable rate is lower - bounded by , which tends to 1 by choosing to be arbitrarily small compared to which is relatively large in general .note that the outage takes place if any rn can not be assigned for one or more pairs of cns because there was no rn with til smaller than during the selection process . 
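a centralized emulation of this distributed selection is sketched below : the candidate with the globally smallest til fires first , exactly as the shortest back - off timer would . the til is computed here as the sum of squared cross - channel magnitudes of the other pairs ; the exact normalization of the elided formula ( e.g. a factor of two or a division by the noise power ) does not affect the ranking of candidate rns , so it is omitted . the max - min - snr baseline used later in the simulations is included for comparison .

```python
import numpy as np

def til(h, i, r):
    """Total interference level proxy if RN r serves pair i: sum of squared cross-channel
    magnitudes to/from the CNs of all other pairs (constant scalings are omitted since
    they do not change the ranking)."""
    K = h.shape[1]
    others = [j for j in range(K) if j != i]
    return float(np.sum(np.abs(h[:, others, r]) ** 2))

def ors_min_til(h, eta_max=np.inf):
    """Centralized emulation of the distributed back-off selection: the globally smallest
    TIL among unassigned pairs and unselected RNs fires first."""
    K, N = h.shape[1], h.shape[2]
    assigned, free_rns = {}, set(range(N))
    while len(assigned) < K:
        cand = [(til(h, i, r), i, r) for i in range(K) if i not in assigned for r in free_rns]
        eta, i, r = min(cand)
        if eta >= eta_max:
            return None                      # outage: no RN with a small enough TIL
        assigned[i] = r
        free_rns.remove(r)
    return assigned

def max_min_snr(h):
    """Baseline: at each step pick the (pair, RN) combination whose weaker CN-RN link is
    strongest."""
    K, N = h.shape[1], h.shape[2]
    assigned, free_rns = {}, set(range(N))
    while len(assigned) < K:
        cand = [(min(abs(h[0, i, r]) ** 2, abs(h[1, i, r]) ** 2), i, r)
                for i in range(K) if i not in assigned for r in free_rns]
        _, i, r = max(cand)
        assigned[i] = r
        free_rns.remove(r)
    return assigned

rng = np.random.default_rng(4)
K, N = 3, 50
h = (rng.normal(size=(2, K, N)) + 1j * rng.normal(size=(2, K, N))) / np.sqrt(2)
sel = ors_min_til(h)
print(sel, [round(til(h, i, r), 3) for i, r in sel.items()])
print(max_min_snr(h))
```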
in the sequel, we derive a condition on to make the rn selection always successful for any given .in addition , we shall find practical values of for given through numerical simulations , which makes the outage probabilities be almost zero .in time 1 , the cns transmit their signals to the rns , and the received signal at rn is expressed as where accounts for the additive white gaussian noise ( awgn ) at rn with zero mean and the variance . upon receiving , rn generates the transmit symbol from where is a discrete memoryless encoding function . in time 2 ,rn then broadcasts , and the received signal at the -th cn in group , , is written by where is the awgn with zero mean and the variance . with the side information of , the -th cn in group retrieves the symbol transmitted from the other side from where and is a discrete memoryless decoding function . the encoding and decoding functions , and , respectively , differ from relaying protocols , i.e. , af , df , and cf .we shall specify them in the sequel in terms of dof achievability results .the overall procedure of the proposed scheme is illustrated in fig .[ fig : overall ] for the case of and .from ( [ eq : yr ] ) and ( [ eq : ycn ] ) , the sum of received interference at rn in time 1 and at the -th pair of cns in time 2 , normalized by the noise variance , is expressed as the following lemma establishes the condition for required to decouple the network with constant received interference even for increasing interference - to - noise - ratio ( inr ) . in particular ,even though there exist a mismatch between the til of ( [ eq : eta ] ) calculated at rn with the local csi and the sum of received interference in ( [ eq : delta_i ] ) , we shall show in the proof of the following lemma that the proposed ors based on the til of ( [ eq : eta ] ) can minimize the sum of received interference at all nodes , thereby maximizing the achievable dof .[ * * decoupling principle * * ] [ lemma : decoupling ] for any , define as using the proposed ors , we have if from the fact that , in the high snr regime can be rewritten by where ( [ eq : p_d3 ] ) follows from the fact that s are independent for different . since the channel coefficients are independent complex gaussian random variables with zero mean and unit variance , is a central chi - square random variable with degrees - of - freedom .consequently , the cumulative density function of is given by where is the gamma function and is the lower incomplete gamma function .in addition , from ( * ? ? ?* lemma 1 ) , upper and lower bounds on for are given by where the probability in ( [ eq : p_d3 ] ) represents the case where at the -th rn selection , a rn is assigned to the -th pair of cns if and only if there exists at least one rn with the til smaller than amongst unselected rns . 
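the probability analyzed in this proof can also be estimated by monte carlo . the sketch below uses the same til proxy as above and , emulating the min - til selection , estimates the probability that every pair is assigned a rn whose til stays below the threshold . the constants used below ( eps = 1 , the factor 8 in the scaling of the number of rns , the trial count ) are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(5)
K, eps, trials = 2, 1.0, 2000     # eps and the trial count are illustrative choices

def success_prob(snr, N):
    """Fraction of channel realizations in which the greedy min-TIL selection assigns every
    pair a RN whose TIL (cross-channel gain proxy) is below eps / (K * snr)."""
    ok = 0
    for _ in range(trials):
        h = (rng.normal(size=(2, K, N)) + 1j * rng.normal(size=(2, K, N))) / np.sqrt(2)
        g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2      # g[i, r]: own-pair gain at RN r
        til = g.sum(axis=0, keepdims=True) - g         # interference from the other pairs
        good, free = True, np.ones(N, dtype=bool)
        for _ in range(K):                             # smallest remaining TIL "fires" first
            masked = np.where(free, til, np.inf)
            i, r = np.unravel_index(np.argmin(masked), masked.shape)
            if masked[i, r] >= eps / (K * snr):
                good = False
                break
            free[r] = False
            til[i, :] = np.inf                         # pair i is now served
        ok += good
    return ok / trials

# With N held fixed the success probability decays as SNR grows; with N scaled like
# SNR^(2(K-1)) it should stay roughly constant, in line with the decoupling lemma.
for snr in (2.0, 4.0, 8.0):
    print(snr, success_prob(snr, N=32), success_prob(snr, N=int(8 * snr ** (2 * (K - 1)))))
```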
if we denote the set of indices of the unselected rns at the -th rn selection by , it follows that \textrm{pr}\left\ { \eta_{i,\textrm{r}(i)}<\frac{\epsilon\textrm{snr}^{-1}}{k}\right\ } & = 1-\textrm{pr}\left\ { \eta_{i,\textrm{r}(j)}>\frac{\epsilon\textrm{snr}^{-1}}{k},\forall j\in\mathcal{r}_{i}\right\ } \\ & = 1-\left(1-f_{\eta}\left(\frac{\epsilon\textrm{snr}^{-1}}{k}\right)\right)^{n-{\color{black}\pi(i)}+1}\pagebreak[0]\\ & \ge1-\left(1-f_{\eta}\left(\frac{\epsilon\textrm{snr}^{-1}}{k}\right)\right)^{n - k+1}\pagebreak[0]\\ & \ge1-\frac{\left(1-c_{1}\left(\epsilon / k\right)^{2(k-1)}\cdot\textrm{snr}^{-2(k-1)}\right)^{n}}{\left(1-c_{2}\left(\epsilon / k\right)^{2(k-1)}\cdot\textrm{snr}^{-2(k-1)}\right)^{(k-1)}}\pagebreak[0]\label{eq : eta_lb}\end{aligned}\ ] ] where ( [ eq : eta_lb ] ) follows from ( [ eq : f_bounds2 ] ) . from the following bernoulli s inequality ,\,\ , n\in\mathbb{n},\ ] ] for sufficiently large snr to satisfy , the last term of can be bounded by therefore , for increasing snr , the term tends to 0 if and only if in the numerator of the right - hand side of tends to infinity , i.e. , . in such a case , from, we get otherwise , the term in tends to 1 so that is unbounded . from ( [ eq : p_d3 ] ) , ( [ eq : eta_lb ] ) , and, we have if and only if for any , which proves the lemma . from lemma [ lemma : decoupling ] ,the interfering twr network becomes isolated twr networks with limited interference level even for increasing inr , if . in the proposed scheme , the dimension extension of the time / frequency domain in the conventional ia technique is replaced by the dimension extension in the number of users. now the following theorem is our main result on the dof achievability .[ theorem : scaling ] using the proposed ors scheme , the af , lc - cf , and df schemes achieve respectively , with high probability if sections [ sub : amplify - and - forward ] , [ sub : lattice - code - aided - compute - and - f ] , and [ sub : decode - and - forward ] .in addition , section [ sub : comparison - of - the ] provides comprehensive comparisons among the af , lc - df , and df schemes in terms of the dof achievability .note that the overall procedure of the scheduling metric calculation , rn selection , and communication protocol is analogous for all the three schemes , and the only difference appears in the encoding function in for constructing at the rn and the decoding function in for retrieving and at the cns . in the af scheme, the relay retransmits the received signal with a proper amplification . specifically , from the received signal in ( [ eq : yr ] ) , rn generates the transmit signal from where is the amplifying coefficient defined such that the power constraint is met .thus , can be obtained from \label{eq : gamma_def}\ ] ] inserting into yields the received signal at the -th cn in group , , given by the cn then subtracts the known interference signal from to get where .note here that unlike the df or lc - cf scheme , the -th pair of cns should have the knowledge of the effective channel . 
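the af computation for one served pair can be sketched numerically as follows . residual inter - pair interference is set to zero ( the role of the rn selection is precisely to make it negligible ) , the noise variance is one , and the cn and rn powers are equal , so snr equals the transmit power ; these simplifications are assumptions of the sketch , not of the analysis above .

```python
import numpy as np

rng = np.random.default_rng(6)
snr = 10.0 ** (20 / 10)              # 20 dB, with unit noise variance so p = snr
p = snr
h1, h2 = [(rng.normal() + 1j * rng.normal()) / np.sqrt(2) for _ in range(2)]

# amplifying coefficient: scales the time-1 receive power up to the relay power budget
gamma = np.sqrt(p / (abs(h1) ** 2 * p + abs(h2) ** 2 * p + 1.0))

# effective SNR at each CN after subtracting its own (known) self-interference term;
# the relay noise is forwarded and adds to the CN's own noise (the noise propagation of AF)
sinr_cn1 = (gamma ** 2 * abs(h1) ** 2 * abs(h2) ** 2 * p) / (gamma ** 2 * abs(h1) ** 2 + 1.0)
sinr_cn2 = (gamma ** 2 * abs(h1) ** 2 * abs(h2) ** 2 * p) / (gamma ** 2 * abs(h2) ** 2 + 1.0)

# half-duplex two-phase protocol: pre-log factor 1/2
rate_pair = 0.5 * (np.log2(1 + sinr_cn1) + np.log2(1 + sinr_cn2))
print(round(float(rate_pair), 2), "bits/s/Hz for this pair")
```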
from, the achievable rate for is given by with , lemma [ lemma : decoupling ] gives us for any with probability .thus , for any , the achievable rate is bounded by where in , it is assumed that zero rate is achieved unless the condition holds as in lemma [ lemma : decoupling ] .inserting into gives us }\label{eq : r_af_final-1}\end{aligned}\ ] ] while inserting into yields .thus , we have and hence }\label{eq : eq : r_af_final-1 - 2}\end{aligned}\ ] ] therefore , the achievable dof for the af scheme is given by }{\lim_{\textrm{snr}\rightarrow\infty}\log\textrm{snr}}\\ & = \frac{\sum_{i=1}^{k}\sum_{n=1}^{2}1\cdot\lim_{\textrm{snr}\rightarrow\infty}\frac{1}{2}\log\left(1+i^{\prime}\cdot\textrm{snr}\right)}{\lim_{\textrm{snr}\rightarrow\infty}\log\textrm{snr}}\label{eq : dof_af_dervation1}\\ & = \frac{\sum_{i=1}^{k}\sum_{n=1}^{2}\left[\lim_{\textrm{snr}\rightarrow\infty}\frac{1}{2}\log\left(\textrm{snr}\right)+\lim_{\textrm{snr}\rightarrow\infty}\frac{1}{2}\log\left(\frac{1}{\textrm{snr}}+i^{\prime}\right)\right]}{\lim_{\textrm{snr}\rightarrow\infty}\log\textrm{snr}}\\ & = \frac{\sum_{i=1}^{k}\sum_{n=1}^{2}\left[\lim_{\textrm{snr}\rightarrow\infty}\frac{1}{2}\log\left(\textrm{snr}\right)+\frac{1}{2}\log\left(0+\hat{i}\right)\right]}{\lim_{\textrm{snr}\rightarrow\infty}\log\textrm{snr}}\label{eq : eq : dof_af_derivation2}\\ & = k,\label{eq : dof_af_derivation_final}\end{aligned}\ ] ] where and follow from lemma [ lemma : decoupling ] and , respectively .on the other hand , the cut - set outer bound , for which no inter - node interference is assumed , yields the upper bound .therefore , the achievable dof with the af scheme is , which proves the theorem for .the lc - cf scheme is a generalized version of the modulo-2 network coding , in which and where {2} ] falls into one of the lattice points in some lattice .taking the modulo- to the received signal in , the rn obtains {\lambda}=\left[h_{1(i),\textrm{r}(i)}x_{1(i)}+h_{2(i),\textrm{r}(i)}x_{2(i)}+i_{\textrm{r}(i)}+z_{\textrm{r}(i)}\right]_{\lambda},\ ] ] and retrieves the estimate of {\lambda} ] , and then the -th cn in group obtains in time 2 following the two procedures : i ) estimating from via lattice decoding , ii ) obtaining with known and from {\lambda} ] and . in time 2 , the achievable rate is determined when estimating from as with , lemma [ lemma : decoupling ] gives us with probability .in addition , the maximum rate of is bounded by the minimum of the two bounds in and .thus , for , the maximum rate is given by ^{+},\frac{1}{2}\log\left(1+\frac{\left|h_{\tilde{n}(i),\textrm{r}(i)}\right|^{2}p}{\left|i_{\tilde{n}(i)}\right|^{2}+n_{0}}\right)\right\ } \label{eq : r_gn_def-1}\\ { \color{black } } & \ge\min\left\ { \mathcal{p}_{c}\cdot\frac{1}{2}\log\left(\tau_{n(i)}+\frac{|h_{n(i),\textrm{r}(i)}|^{2}p}{(1+\epsilon)n_{0}}\right),\mathcal{p}_{c}\cdot\frac{1}{2}\log\left(1+\frac{|h_{\tilde{n}(i),\textrm{r}(i)}|^{2}}{1+\epsilon}\textrm{snr}\right)\right\ } \label{eq : rn_lc_2 - 1}\\ & { \color{black}=}\min\left\ { \mathcal{\mathcal{p}_{c}}\cdot\left(\frac{1}{2}\log(\textrm{snr})+o_{1}(\textrm{snr})\right){\color{black},}\mathcal{p}_{c}\cdot\left(\frac{1}{2}\log(\textrm{snr})+o_{2}(\textrm{snr})\right)\right\ } , \label{eq : rn_lc_2_2}\end{aligned}\ ] ] where and . therefore , with , inserting to ( [ eq : dof_def ] ) and following the analogous derivation from to give us , which proves theorem [ theorem : scaling ] . 
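stripped of the lattice and the noise , the computational core of the lc - cf scheme is ordinary modulo arithmetic : the relay recovers only the modulo sum of the two symbols , and each cn removes its own symbol as side information . the toy below uses the one - dimensional lattice of integers modulo q ( q = 2 recovers the familiar xor network coding ) ; real nested - lattice encoding , awgn decoding and the rate bounds above are deliberately not modeled .

```python
q = 16                      # size of the (toy) modulo lattice; q = 2 gives modulo-2 network coding
x1, x2 = 5, 12              # symbols of the two CNs of one pair

s_relay = (x1 + x2) % q     # time 1: the relay decodes only the modulo sum, not x1 and x2
x2_hat = (s_relay - x1) % q # time 2: CN 1 subtracts its own symbol as side information ...
x1_hat = (s_relay - x2) % q # ... and CN 2 does the same
assert (x1_hat, x2_hat) == (x1, x2)
print(s_relay, x1_hat, x2_hat)
```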
[ remark : subopt_lc ] in the df scheme , each of and is successively decoded at rn in time 1 from .that is , is decoded first regarding the rest of the terms in , , as a noise term , and then is subtracted from to decode . on the other hand , can be decoded first regarding as a noise term , and then subtracted . for this successive decoding ,the rates and are given by the multiple - access channel rate bound as follows : in time 2 , from individually decoded and , the network coding is used to construct at the rn as in the lc - cf scheme .thus , the achievable rates for time 2 are given again by ( [ eq : r_gn_def ] ) .combining , , and together , we obtain the maximum sum - rate as from lemma [ lemma : decoupling ] , with , we have with probability .in such a case , the maximum sum - rate is bounded by }\nonumber \\ & \left.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\underbrace{\frac{1}{2}\log\left(1+\frac{\left(|h_{1(i),\textrm{r}(i)}|^{2}+|h_{2(i),\textrm{r}(i)}|^{2}\right)}{\epsilon+1}\textrm{snr}\right)}_{{\color{black}\triangleq\delta_{2}}}\right\ } { \color{black}\pagebreak[0]}\label{eq : df_bound_final0}\\ & = { \color{black}\mathcal{p}_{c}\cdot\min\left\ { \underbrace{\log\left(1+\frac{\min\left\ { \left|h_{1(i),\textrm{r}(i)}\right|^{2},\left|h_{2(i),\textrm{r}(i)}\right|^{2}\right\ } } { \epsilon+1}\textrm{snr}\right)}_{\triangleq\delta_{1}},\delta_{2}\right\ } } { \color{black}\pagebreak[0]}\end{aligned}\ ] ] applying to ( [ eq : dof_def ] ) and following the analogous derivation from to , we can only achieve , even under the interference - limited condition , i.e. , . since the af scheme only performs power scaling at the rns ,it is the simplest for the optimal dof of the network .however , the cn - to - cn effective channel gain should be known by the cns , and the scheme suffers from the noise propagation , particularly in the low snr regime .the df scheme requires the minimum of the csi , and the conventional simple coding scheme can be used as in the af scheme . since the noise at the rns is removed from the decoding at the rss , it does not propagate the noise at the rss .nevertheless , the scheme only achieves the half of the optimal dof .the lc - cf scheme attains benefits from both af and df schemes , i.e. , the optimal dof and removal of the noise at the rns through decoding . on the other hand ,the scheme requires lattice encoding and decoding , but the design of an optimal lattice code for given channel gains requires an excessive computational complexity .the suboptimal design of lattice codes can be considered .for comparison , two baseline schemes are considered : max - min - snr and random selection schemes . in the max - min - snr scheme , rn selectionis done such that the minimum of the snrs of the two channel links between the serving rn and two cns is maximized at each selection .figure 4 show the sum - rates versus snr for and ( a ) or ( b ) . with fixed and small ,the max - min - snr schemes outperform the proposed ors schemes in the low snr regime , where the noise is dominant compared to the interference . however , the sum - rates of the proposed schemes exceed those of the max - min schemes as the snr increases , because the interference becomes dominant than the noise . as a consequence ,there exist a crossover snr point for each case . 
as seen from fig .4 , these crossover points becomes low as grows , since the proposed schemes exploit more benefit as increases .the proposed schemes outperform the max - min - snr schemes for the snr greater than 7 db with as shown in fig .[ fig : rates_snr_n50 ] .figure [ fig : rates_n ] shows the sum - rates versus when and snr is 20 db .it is seen that the proposed ors scheme greatly enhances the sum - rate of the max - min - snr scheme for all the cases .the lc - cf scheme exhibits the highest sum - rates amongst the three relay schemes for mid - to - large regime , whereas it slightly suffers from the rate loss due to in ( [ eq : r_lc_def ] ) in the small regime .the sum - rate of the proposed af scheme becomes higher than that of the df scheme as increases , because the af achieves higher dof , as shown in theorem [ theorem : scaling ] .h. j. yang , y. c. choi , n. lee , and a. paulraj , `` achievable sum - rate of the two - way relay channel in cellular systems : lattice code - aided linear precoding , '' _ ieee j. selec_ , vol . 30 , no . 8 , pp .13041318 , sept .2012 .z. xiang , j. mo , and m. tao , `` degrees of freedom of two - way relay channel , '' in _ ieee globecom - communication theory symposium _ ,anaheim , ca , dec .2012 , [ online ] .available at http://arxiv.org/abs/1208.4048 .t. gou , s. jafar , c. wang , s .- w .jeon , and s .- y .chung , `` aligned interference neutralization and the degrees of freedom of the 2 x 2 x 2 interference channel , '' _ ieee trans .inf . theory _ ,58 , no . 7 , pp .43814395 , july 2012 .i. shomorony and a. s. avestimehr , degrees of freedom of two - hop wireless networks : everyone gets the entire cake " , _ ieee trans .inf . theory _, accepted , [ online ] .available at http://arxiv.org/abs/1210.2143 .s. h. chae , b. c. jung , and w. choi , `` on the achievable degrees - of - freedom by distributed scheduling in an ( n , k)-user interference channel , '' _ ieee trans ._ , vol .61 , no . 6 , pp .25682579 , jun . 2013 .k. gomadam , v. r. cadambe , and s. a. jafar , `` a distributed numerical approach to interference alignment and applications to wireless interference networks , '' _ ieee trans .inf . theory _57 , no . 6 , pp . 33093322 , june 2011. b. c. jung , d. park , and w .- y .shin , `` opportunistic interference mitigation achieves optimal degrees - of - freedom in wireless multi - cell uplink networks , '' _ ieee trans ._ , vol . 60 , no . 7 , pp19351944 , july 2012 .h. j. yang , j. chun , y. choi , s. kim , and a. paulraj , `` codebook - based lattice - reduction - aided precoding for limited - feedback coded mimo systems , '' _ ieee trans ._ , vol .60 , no . 2 ,510524 , feb .
|
the achievable degrees - of - freedom ( dof ) of the _ large - scale _ interfering two - way relay network is investigated . the network consists of pairs of communication nodes ( cns ) and relay nodes ( rns ) . it is assumed that and each pair of cns communicates with each other through one of the relay nodes without a direct link between them . interference among rns is also considered . assuming local channel state information ( csi ) at each rn , a distributed and opportunistic rn selection technique is proposed for the following three promising relaying protocols : amplify - and - forward , decode - and - forward , and compute - and - forward . as a main result , the asymptotically achievable dof is characterized as increases for the three relaying protocols . in particular , a sufficient condition on required to achieve a certain dof of the network is analyzed . through extensive simulations , it is shown that the proposed rn selection techniques outperform conventional schemes in terms of achievable rate even in practical communication scenarios . note that the proposed technique operates in a distributed manner and requires only local csi , which makes it well suited to practical wireless systems . degrees - of - freedom ( dof ) , interfering two - way relay channel , two - way channel , local channel state information , relay selection .
|
all porous media , whether random or ordered , granular or networked , heterogeneous or homogeneous , are typified by the geometric and topological complexity of the pore - space .this pore - space plays host to a wide range of fluid - borne processes including transport , mixing and dispersion , chemical reaction and microbiological activity , all of which are influenced by the flow structure and transport properties .pore - scale fluid mixing plays a key role in the control of both fluid - fluid reactions ( e.g. redox processes ) and fluid - solid reactions ( e.g. precipitation - dissolution processes ) , which are of importance for a range of subsurface operations , including co sequestration , contaminant remediation or geothermal dipoles management . whilst pore - scale flows are often smooth and steady ( typically stokesian or laminar ) , the inherent topological complexity of the pore - space renders upscaling transport and mixing processes a challenging task .because of their fundamental role in driving chemical reactions , mixing processes have received increasing attention in recent years in the context of porous media flows .two - dimensional laboratory experiments and theoretical and modeling studies have shown that upscaled chemical kinetics are not captured by classical macro - dispersion theories due to incomplete mixing at the pore scale .this points to a need for predictive theories for pore - scale concentration statistics which are couched in terms of the underlying medium properties .lamellar mixing theories , developed in the context of turbulent flows , have been applied and extended for the prediction of concentration statistics in two - dimensional ( 2d ) darcy scale heterogeneous porous media .a central element of this theory is to quantify the link between fluid stretching and mixing . in this context , linking the pore network topological properties to mixing dynamics is an essential step , which we explore in this study .while the topological constraints associated with the poincar - bendixson theorem limit fluid stretching in two - dimensional ( 2d ) steady flows to be algebraic , in three - dimensional ( 3d ) steady flows much richer behaviour is possible .indeed , the topological complexity inherent to all three dimensional random porous media has been shown to induce chaotic advection under steady flow conditions via a 3d fluid mechanical analogue of the baker smap .such chaotic lagrangian dynamics are well - known to rapidly accelerate diffusive mixing and scalar dissipation , yet have received little attention with respect to pore - scale flow . from the perspective of transport dynamics, the distribution of pore sizes and shapes , together with no - slip boundary conditions at the pore walls , are known to impart non - gaussian pore velocity distributions , which lead to a rich array of dispersion phenomena ranging from normal to super - diffusive . 
the continuous time random walk ( ctrw ) approach has been used to model this behaviours based on the transit time distributions over characteristic pore lengths , which reflect the distirbution of pore velocities .the interplay of wide transit time distributions and chaotic advection at the pore - scale impacts both macroscopic transport and dispersion as well as pore - scale dilution .pore - scale chaotic advection has been shown to significantly suppress longitudinal dispersion arising from the no - slip wall condition due to transverse mixing generating an analogue of the taylor - aris mechanism .conversely , the wide transit time distributions are expected to have a drastic impact on the dynamics of mixing in conjunction with chaotic advection as the transit times set the timescales over which significant stretching of a material fluid element occurs . while the impact of broad transit time distributions on the spatial spreading of transported elements is well understood , their control on mixing dynamics is still an open question .as shown in , lagrangian chaos generates ergodic particle trajectories at the pore - scale , and the associated decaying correlations allows the advection process to be modelled as a stochastic process . during advection through the pore - space ,fluid elements undergo punctuated stretching and folding ( transverse to the mean flow direction ) events at stagnation points , leading to persistent chaotic advection in random porous media .such dynamics have been captured in an idealized 3d random pore network model which comprises of a periodic network of uniform - sized pores which alternately branch and merge in the mean flow direction , leading to a high number density of stagnation points local to these pore junctions which generate fluid stretching . whilst highly idealized , this network model contains basic features common to all porous media , namely topology - induced chaotic advection and no - slip boundary conditions , and so represents the minimum complexity inherent to all porous media . in this paper, we develop a novel stretching ctrw which captures the advection and deformation of material elements through the porous matrix .closure of this ctrw model in conjunction with a 1d advection - diffusion equation ( ade ) describing diffusion transverse to highly striated , lamellar concentration distributions generated by pore - scale chaotic advection facilitates quantification of mixing and dispersion of scalar fields under the action of combined chaotic advection and molecular diffusion both within the pore space and across the macroscopic network .this formalism allows prediction of concentration pdf evolution within the pore - space and quantification of the impact of chaotic advection . 
to simplify exposition, we first consider the somewhat artificial case of a steady state pore - scale mixing and dispersion of a concentration field which is heterogeneous at the pore - scale but homogeneous at the macro - scale continuously injected across all pores in a plane transverse to the mean flow direction .these results are then extended to the more realistic situation of the evolution of a solvent plume which is continuously injected as a point source .we compare these predictions for three distinctly different porous networks , a random 3d porous network which gives rise to lagrangian chaos , an ordered 3d porous network which generates maximum fluid stretching , and an ordered 2d network which gives rise to non - chaotic dynamics .hence the impact of both network topology and structure upon pore - scale mixing and dispersion is quantified .these results form a quantitative basis for upscaling of pore - scale dynamics to macroscopic mixing and transport models , and establish the impacts of ubiquitous chaotic mixing in 3d random porous media .the paper is organized as follows ; the mechanisms leading to topological mixing in 3d porous media are briefly reviewed in [ sec : topomix ] , followed by a description of the 3d open porous network model in [ sec:3dmodel ] .fluid stretching in this model network is considered in [ section : stretch ] , from which a stretching ctrw model is derived in [ sec : ctrw ] .this model is then applied to quantify mixing in [ sec : mixing ] , and the implications for the evolution of the concentration pdf , mixing scale and dispersion are conisdered in [ sec : diffmix ] .the overall results are discussed in [ sec : discussion ] and finally conclusions are made in [ sec : conclusions ] .topological complexity is a defining feature of all porous media - from granular and packed media to fractured and open networks - these materials are typified by a highly connected pore - space within which the flow of continua arise .such topological complexity is characterised by the euler characteristic ( related to the topological genus as ) which measures the connectivity of the pore - space as where is the number of pores , the number of redundant connections and the number of completely enclosed cavities . for porous mediait is meaningful to consider the average number density of these quantities , where from computer tomography studies it is found that typically is large whilst , are small . hence the number density of the euler characteristic uniformly found to be strongly negative , reflecting the basic topological complexity which typifies all porous media .when a continuous fluid is advected through such media , a large number of stagnation points ( non - degenerate equilibrium points ) arise at the fluid / solid boundary as a direct result of this topological complexity .these stagnation points are zeros of the skin friction vector field on the 2d boundary of the fluid domain , where may be defined as here is the fluid velocity field , is the coordinate normal to the fluid boundary and , are orthogonal coordinates tangent to this boundary . 
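the euler characteristic of a pore-throat skeleton can be computed directly from its graph, since the number of redundant connections is the cycle rank of the network; the toy cubic lattice below is only meant to illustrate why densely cross-linked pore spaces give the strongly negative values quoted above, and the lattice size and absence of enclosed cavities are arbitrary assumptions.

```python
import itertools

def euler_characteristic(n_pores, throats, n_cavities=0):
    """chi = b0 - b1 + b2 for a pore-throat network given as an edge list:
    b0 = connected components, b1 = independent loops (redundant connections),
    b2 = completely enclosed cavities (passed in, often negligible)."""
    parent = list(range(n_pores))
    def find(i):                              # union-find to count components
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in throats:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    b0 = sum(1 for i in range(n_pores) if find(i) == i)
    b1 = len(throats) - n_pores + b0          # cycle rank of the graph
    return b0 - b1 + n_cavities

# toy 4x4x4 cubic lattice of pores connected by throats along each axis:
# densely cross-linked, hence a strongly negative euler characteristic
nodes = list(itertools.product(range(4), repeat=3))
idx = {p: k for k, p in enumerate(nodes)}
throats = [(idx[p], idx[q]) for p in nodes for q in nodes
           if sum(abs(a - b) for a, b in zip(p, q)) == 1 and idx[p] < idx[q]]
print("chi =", euler_characteristic(len(nodes), throats))   # prints chi = -80
```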
whilst different definitions of the skin friction are possible , these are all equivalent on the boundary , as is the topology of the flow structure in the fluid domain .the poincar - hopf theorem provides a direct relationship between the nature of the critical points and the pore - space topology , such that the sum of the indices of critical points is related to the topological genus and euler characteristic as where the index equals -1 for saddle - type zeros , + 1 for node - type zeros and 0 for null zeros of the skin - friction field .hence represents a lower bound for the number density of stagnation points under steady 3d stokes flow , and these points impart significant fluid stretching into the local fluid domain .digital imaging studies measure the euler characteristic across a broad range of porous media , from granular to networked , and find to be strongly negative , with number densities of the order - 500 mm .this large number density of stagnation points imparts a series of punctuated stretching events as the fluid continuum is advected through the pore - space .a relevant question is what role these stretching events play with respect to transport and mixing in porous media ?stagnation points play a critical role with respect to the lagrangian dynamics of 3d steady flows , as it is at these points that the formal analogy between transport in steady 3d volume - preserving flows and 1 degree - of - freedom hamiltonian systems breaks down ( such that the steady 3d dynamical system can no longer be expressed as an analogous unsteady 2d system ) , and such points are widely implicated in the creation of non - trivial lagrangian dynamics . proposes that the stable and unstable manifolds which respectively correspond to the fluid contraction and stretching directions around stagnation points ( shown in figure [ fig : dist ] ) form the `` skeleton '' of the flow , a set of surfaces of minimal transverse flux which organize transport within the fluid domain .if these manifolds which project into the fluid bulk are two - dimensional ( hence co - dimension one ) , they form essentially impenetrable barriers which organise fluid transport and mixing . [cols="^,^,^,^ " , ] the pdf of can be estimated by approximating by its average , \end{aligned}\ ] ] where is the euler - mascheroni constant and and , see appendix [ app : levy ] . since is gaussian distributed with the mean and variance , we obtain for the pdf of the gaussian ^ 2}{2 \sigma{\ln \tau}^2(n ) } \right)}{\sqrt{2 \pi \sigma_{\ln \tau}^2(n)}}\end{aligned}\ ] ] with the mean and variance figure [ fig : history](b ) compares the pdf of obtained from evaluating according to with given by the ctrw to the approximation ( with ) , the accuracy of which increases with due to the central limit theorem . as a number of pores along the mean flow direction for ( dash - dotted ) , ( dashed ) and ( solid ) . ,the approximation ( [ eqn : gauss_approx_tau ] ) for provides an accurate solution to the two - step ctrw ( [ eqn : twostepctrw ] ) , which along with the distribution ( [ eq : pdf_rho ] ) for fully quantifies evolution of the concentration distribution , mixing and dilution in the 3d open porous network . 
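the central-limit behaviour invoked above is easy to check numerically: the sketch below draws a toy two-step ctrw in which each pore transit multiplies the elongation by a log-normal factor and costs a pareto-distributed transit time. the parameters (mu, sd, alpha) are illustrative and are not the kernels measured in the network model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# per pore couplet the elongation is multiplied by exp(gamma), gamma ~ N(mu, sd^2),
# while the walker waits a Pareto(alpha) transit time (toy parameters only)
n, walkers, mu, sd, alpha = 40, 50000, 0.35, 0.25, 1.5
ln_rho = rng.normal(mu, sd, (walkers, n)).sum(axis=1)
ln_tau = np.log((1.0 + rng.pareto(alpha, (walkers, n))).sum(axis=1))

print("ln(rho): mean %.2f (n*mu = %.2f), skew %.3f (near-Gaussian by the CLT)"
      % (ln_rho.mean(), n * mu, stats.skew(ln_rho)))
# the heavy-tailed waits leave the log of the total transit time visibly skewed
print("ln(tau): mean %.2f, skew %.3f" % (ln_tau.mean(), stats.skew(ln_tau)))
```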
in the following we apply this solution to determine evolution of the mixing scale , concentration pdf , maximum concentration , scalar variance and the onset of coalescence in the 3d random porous network .the mixing scale characterizes the distribution of lamellae widths at position as the average of which is well approximated by substitution of the approximation for as note that whilst is strictly infinite , the above average is dominated by the bulk of , which , as outlined in appendix [ app : levy ] can be well approximated by the moyal distribution .thus the average is understood to be the average of the equivalent moyal distribution given by . in order to perform the average over , we use a saddle point approximation which yields this is a remarkable result because although fluid stretching due to pore - scale chaos grows fluid elements exponentially with longitudinal pore number , the mixing scale does not converge to a constant batchelor scale with increasing , but rather increases asymptotically as this can be traced back to the broad distribution of arrival times between the couplets arising from the no - slip condition , which renders a distribution of stretching rates of variable strength . note also , that the characteristic waiting time between stretching events increases with increasing number of couplets , and thus , the stretching rate decreases .this is a characteristic of the pareto transit time distribution .figure [ fig : mixingscale ] illustrates the evolution of the average mixing scale given by for different peclt numbers .it assumes a minimum value at a characteristic pore number , at which point diffusive expansion and compression equilibrate . and ( right panel ) at downstream positions of ( solid ) , ( dash - dotted ) , ( long dashed ) and ( short dashed ) for . ,title="fig:",scaledwidth=45.0% ] and ( right panel ) at downstream positions of ( solid ) , ( dash - dotted ) , ( long dashed ) and ( short dashed ) for ., title="fig:",scaledwidth=45.0% ] whilst the 1d lamellar ade ( [ adestrip ] ) is only valid as long as lamellae are non - interacting , methods are available to predict evolution of the scalar concentration field in the presence of coalescence . whilst such prediction is beyond the scope of this paper , it is important to determine the onset of coalescence and hence the envelope of validity of the model .the onset of coalscence occurs when the mixing scale exceeds the average spacing between lamellae of length with an initial length in pores of average radius . 
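the qualitative behaviour of the mean mixing scale, namely an initial decay down to a minimum where compression and diffusion balance, can be reproduced with a simple ensemble calculation based on the standard lamellar balance d(s^2)/dt = -2*g*s^2 + 2*D. this closure and all parameter values below are illustrative assumptions and do not reproduce the saddle-point expression used in the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_mixing_scale(n_pores, n_walkers, alpha=1.5, mu=0.3, sd=0.1,
                      s0=1.0, D=1e-3, heavy_tailed=True):
    """ensemble-averaged lamella width from d(s^2)/dt = -2*g*s^2 + 2*D,
    integrated exactly over each pore transit with rate
    g = (ln-stretch) / (transit time); toy closure and toy parameters."""
    s2 = np.full(n_walkers, s0 ** 2)
    means = np.empty(n_pores)
    for k in range(n_pores):
        w = 1.0 + rng.pareto(alpha, n_walkers) if heavy_tailed else np.ones(n_walkers)
        g = rng.normal(mu, sd, n_walkers) / w     # compression rate per transit
        s2 = (s2 - D / g) * np.exp(-2.0 * g * w) + D / g
        means[k] = np.sqrt(s2).mean()
    return means

s_mean = mean_mixing_scale(n_pores=60, n_walkers=5000)
print("minimum mean width %.4f reached after %d pores"
      % (s_mean.min(), s_mean.argmin() + 1))
```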
from ( [ eqn : meanmixscale ] ) , the lamellae are non - interacting up to pore number where the linear contribution on the lhs of ( [ eqn : coalescence ] ) is due to exponential stretching of the lamellae , and the weaker nonlinear term is due to evolution of the mixing scale .the dimensionless maximum concentration as a function of pore number , is given by as in order to develop an analytic expression for the pdf of , we express as a function of , which is distributed according to as \right ) , \label{eqn : cmax:4}\end{aligned}\ ] ] hence the pdf of is then \right ) \right],\end{aligned}\ ] ] and the pdf of ] .the mean and variance of the gaussian pdf for 2d porous media are then , & & \sigma_{\ln \tau,\text{2d}}^2(n ) = 4 \sigma^2_{\ln \rho,\text{2d}}(n),\end{aligned}\ ] ] and from , we obtain the asympotic algebraic decay of average maximum concentration as .hence there exists a qualitative difference in fluid mixing between 2d and 3d porous media : in 2d media fluid stretching is constrained to be algebraic , leading to algebraic dilution , whereas exponential stretching is inherent to 3d porous media , yielding dilution which scales exponentially with longitudinal distance .( dash - dotted ) , ( long - dashed ) and ( solid ) .the thin lines indicate the pore mixing model , the thick lines , the pore mixing model with and .the inset illustrates the same plot in a semi - logarithmic scale ., scaledwidth=60.0% ] we derive the concentration pdf , mean and variance within the plume as a function of the longitudinal pore number .as such , this pdf is defined with respect to a support volume that is a subset of the fluid domain which excludes negligible concentrations beyond a minimum cutoff value . to determine this concentration pdf, we note that the concentration pdf across a single lamella for a given maximum concentration is obtained from through spatial mapping as where the concentration range under consideration is ] references the injected lamella at .note that the lamella segments distributed throughout a given pore ( at fixed ) comprise of contributions from different lamellae injected at . whilst the material coordinate refers to an individual lamella sheet , in the present context we interpret to denote the material coordinate along all of the lamella segments in a given pore , the union of which also span $ ] due to homogeneity of the injection protocol .whilst this simplification does not extend to inhomogeneous injection protocols such as a point or line source , the homogeneous injection results may readily be generalised to inhomogeneous protocols as per [sec : point_inject ] . at pore number the advection time , deformation and operational time all vary with the material coordinate along the lamella , and so we may parameterise these quantities in terms of .hence the 2d spatial gaussian concentration distribution ( [ adestrip ] ) over the entire lamella may be expressed in material coordinates as ,\label{eqn : spatial_conc}\ ] ] where the concentration variance .the concentration support is quantified by the area of the concentration field for which , which corresponds to the cutoff length from the lamellar backbone as },\ ] ] and so the concentration mean under the fluid support for is then \right\rangle \approx\frac{c_0 l_0}{a}.\label{eqn : avconc } \end{split}\ ] ] note that the integration over weighted by is equivalent to performing the average over the ensemble of the elementary lamellae due to the ergodicity of the system as discussed above . 
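to make the change-of-variables argument for the maximum-concentration pdf concrete, the sketch below works in a pure-compression limit where c_max is simply c0/rho with ln(rho) gaussian; the diffusive broadening through the mixing scale, which the text retains, is deliberately ignored, so all parameter values are illustrative assumptions only.

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(4)

# illustrative parameters: after n pores, ln(rho) ~ N(n*lam, n*sig2)
n, lam, sig2, c0 = 20, 0.4, 0.09, 1.0
mu, var = n * lam, n * sig2

def pdf_cmax(c):
    """pdf of c_max = c0 * exp(-x) with x ~ N(mu, var), by change of variables."""
    return np.exp(-(np.log(c0 / c) - mu) ** 2 / (2 * var)) / (c * np.sqrt(2 * np.pi * var))

c = np.logspace(-8, 0, 4001)                  # log grid resolves the small-c peak
print("normalisation      :", round(integrate.trapezoid(pdf_cmax(c), c), 4))
print("analytic <c_max>   :", round(integrate.trapezoid(c * pdf_cmax(c), c), 6))
print("log-normal formula :", round(c0 * np.exp(-mu + var / 2), 6))
print("monte carlo        :",
      round(float(np.mean(c0 * np.exp(-rng.normal(mu, np.sqrt(var), 400000)))), 6))
```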
as a consequence of and ,the concentration support area evolves as where increases and decreases with as per the concentration support pdf ( [ eqn : cpdf ] ) .the concentration support can be determined by integration along all the lamella segments as } \right\rangle .\end{aligned}\ ] ] thus , it increases approximately as .the difference between the fluid- and concentration - support measures is clearly reflected by the behaviour of the means under these respective measures ; under the fluid - support the mean concentration is constant due to conservation , whereas under the concentration support the mean concentration within the plume is decreasing due to plume spreading and dilution .whilst the result ( [ eqn : avconc ] ) holds for all due to conservation of mass , the derivation above , ( [ eqn : ac ] ) and the pdf ( [ eqn : cpdf ] ) is only valid in the pre - coalescence regime where does not overlap . to calculate the fluid and concentration support concentration variances under find \approx\frac{c_0 ^ 2}{2\sqrt{\pi}\rho(\zeta)\sigma_\eta(\zeta ) } , \end{split}\ ] ] and so the second concentration moment is then where , due to stationarity between pores at fixed , the average of along the material coordinate is equivalent to the ensemble average . as the maximum concentration , then the concentration variances may be expressed directly in terms of the average maximum concentration as note that the mean maximum concentration is neither a fluid- or concentration- support measure but rather is averaged with respect to the 1d manifold comprised of the lamellar segments in the pore cross - section . whilst ( [ eqn : variance ] ) , ( [ eqn : concvariance ] ) yield negative concentration variances in the homogenization limit of large ( as ) , the derived model is valid in the pre - coalescence regime , and so is not expected to capture the late time dissipation dynamics .conversely , at earlier times when , , both measures of scalar variance evolve in direct proportion to the maximum concentration ( [ eqn : cmax:4 ] ) , yielding exponential scalar dissipation with pore number for 3d porous media and algebraic dissipation in 2d media . whilst the results above pertain to mixing of a concentration field which is heterogeneous at the pore - scale injected across all pores transverse to the mean flow direction , it is also instructive to consider how these results apply to dilution within a steady solvent plume arising from a continuously injected point source in the 3d random porous network , as illustrated in figure [ fig : pore_network ] . 
for a steady plume in homogeneous porous media , the meanmacroscopic concentration can be assumed to follow a gaussian distribution in the direction transverse to the mean flow .hence , the average lamellar length per pore varies with transverse radial distance and longitudinal distance from the injection source point can be approximated by the 2d gaussian distribution where the total lamellar length is , is the transverse areal porosity , and is the standard deviation associated with a pore branch or merger with centre - to - centre distance .such lateral spreading of the plume under continuous point - wise injection significantly retards the onset of coalescence , such that the condition ( [ eqn : coalescence ] ) is now whilst measures with respect to the concentration support such as the mean mixing scale , maximum concentration , concentration pdf , concentration mean and variance under the concentration support are the same as for point - wise or uniform injection , the concentration mean and variance under the fluid support are markedly different .following ( [ eqn : avconc ] ) and ( [ eqn : variance ] ) , these quantities within the plume vary with pore number and radial distance as hence the concentration distribution within the plume follows the 2d gaussian lamellar distribution ( [ eqn : lamellae ] ) , and the rate of scalar dissipation is given by dilution of the maximum concentration which decays exponentially with pore number in 3d random media .note that as , the region of validity of ( [ eqn : plumevar ] ) is significantly larger for the plume injection case . in general ( [ eqn : plumemean ] ) , ( [ eqn : plumevar ] ) hold for any macroscopic concentration arising from any injection protocol at in both heterogeneous or homogeneous media .the topological complexity inherent to three - dimensional porous media imparts chaotic advection and exponential fluid stretching under steady flow conditions .such complexity generates a large number density of saddle points in the skin friction field , rendering the associated stable and unstable manifolds which project into the fluid bulk two - dimensional .these 2d surfaces of minimal transverse flux control transport and mixing , where transverse intersection generates chaotic advection dynamics and persistent exponential fluid stretching as fluid elements are advected through the pore - space . 
in combination with molecular diffusion ,such chaotic advection significantly augments pore - scale mixing and dispersion but has received limited attention .all porous media ( both 2d and 3d , heterogenous and homogeneous ) admit no - slip boundaries which impart highly heterogeneous velocity distributions .these distributions determine the frequency of stretching events under advective flow , and the no - slip condition imparts arbitrarily long waiting times between stretching events .we show here that these two basic ingredients of pore - scale topological mixing and heterogeneous advective velocity may be integrated successfully in an analytically tractable stochastic theory that represents fluid deformation as a continuous time random walk ( ctrw ) .the kernels of this ctrw model are quantified via pore - scale computations of fluid deformation and transport in an model 3d random open porous network , and this ctrw model is subsequently coupled to a lamellar model of diffusive mixing to provide quantitative predictions of fluid mixing and dispersion .although algebraic deformations such as fluid shear also impact mixing , these mechanisms are asymptotically dominated by exponential stretching at the pore - scale . in the 3d porous network model presented here the no - slip boundary condition generates a pareto transit time distribution between stretching events , with ( [ psi ] ) .the model can be however readily generalized to other distributions , such as measured in fluid flow simulations through porous media reconstructed from micro - tomography imaging .chaotic advection arising from steady pore - scale advection generates a log - gaussian distribution of relative fluid elongation ( [ eq : pdf_rho ] ) transverse to the mean flow direction , the mean of which grows exponentially with longitudinal pore number as the lyapunov exponent .the interplay of exponential fluid stretching and pareto - distributed waiting times leads to an average mixing scale ( [ eqn : meanmixscale ] ) which does not converge with to a constant batchelor scale , but rather reaches a minimum at and scales asymptotically as .consequently , the average maximum concentration ( [ cmapprox ] ) and concentration pdf ( [ pca ] ) within lamellae evolve in a similar fashion , where dilution is negligible up to , but for the lamellae then broaden and significant dilution occurs . the impact of fluid deformation upon fluid mixing and dilution in 2d and 3d porous media is clearly illustrated in figure [ fig : cmcomp ] by the different scalings for the average maximum concentration ( [ cmapprox ] ) . in 2d porous media , algebraic fluid stretching leads to fluid mixing which scales algebraically ( [ params_2d ] ) with with pore number , whereas the exponential fluid stretching associated with chaotic mixing in random 3d porous media imparts exponential mixing ( [ cmapprox ] ) .this behaviour is directly reflected by evolution of the spatial concentration variance under both fluid - support ( [ eqn : variance ] ) and concentration - support ( [ eqn : concvariance ] ) measures .these results directly quantify the impact of chaotic mixing in 3d porous media in terms of the pore - scale stretching and advection dynamics . whilst this model is only valid up to the coalescence of lamellae as per ( [ eqn : coalescence ] ) , it may be extended as per to capture the coalescence regime where mixing is primarily controlled by a diffusive aggregation processes . 
as mixing dynamics are universally dependent on the rate of fluid deformation , we anticipate that the impact of different fluid stretching dynamics inherent to 2d and 3d random porous media shall persist throughout the coalescence regime. application of the ctrw model to a point source solute plume injected shows that chaotic advection again imparts exponential mixing , and the fluid - support concentration variance ( [ eqn : plumevar ] ) is the same as that for the uniform case rescaled by the mean pore concentration ( [ eqn : plumemean ] ) .this result is generic to any macroscopic concentration distribution , hence exponentially accelerated mixing persists in both heterogeneous and homogeneous media .likewise macroscopic longitudinal dispersion is also strongly augmented by chaotic advection .these results have significant implications for the development of macroscopic models of dispersion and dilution which recover the pore - scale mechanisms which arise from chaotic mixing in 3d porous media .the predictions of concentration pdf and mixing rates from the stretching ctrw model compare very well with fully resolved numerical simulations over a wide range of peclt numbers for the model 3d open porous network .for extension to real pore - scale architectures , the scalar deformation ctrw framework may be extended to quantify of tensorial fluid deformation via recent developments regarding the evolution of the deformation gradient tensor in 3d steady random flows .these developments facilitate statistical characterization of deformation and mixing at the pore - scale and the development of tensorial deformation ctrw models .three dimensional pore networks are characterized by i ) significant topological complexity inherent to all porous media and ii ) highly heterogeneous velocity distributions imparted by ubiquitous no - slip conditions at pore walls , further compounded by the distribution of pore sizes .the ubiquity of these mechanisms has significant implications for the prediction and understanding of fluid mixing and macroscopic dispersion in 3d random porous media .the interplay of exponential fluid stretching and broad velocity distributions arising from the no - slip condition generates significantly accelerated mixing via the production of highly striated , lamellar concentration distributions .we study these mechanisms in a model 3d open porous network which is homogeneous at the macroscale , and develop a ctrw model for fluid deformation and pore - scale mixing based upon high - resolution cfd simulation of stokes flow in the network model .predictions of this model agree very well with direct numerical simulations .analytic estimates of the mixing dynamics show that mixing and dilution under steady state conditions is controlled by the mean and variance of the fluid stretching rates ( quantified respectively by the lyapunov exponent and the variance ) and the peclt number , such that the mean concentration decays exponentially with longitudinal advection in 3d random porous media , whereas mean concentration variance decays algebraically in 2d porous media . 
whilst highly idealised , these basic mechanisms are universal to 3d porous media and so these results have significant implications for both modelling and understanding mixing in random media .the developed stretching ctrw model predicts mixing rates for general fluid stretching properties and transit time distributions .hence , we anticipate that it may be applicable to quantify mixing in a range of porous materials to decipher the role of network topology and structure upon pore scale mixing and thus upon upscaled dilution and mixing - limited reactions .the proposed framework may be extended to transient transport conditions , relevant for instance for pulse tracer injections , through the integration of longitudinal mixing processes .md acknowledges the support of the european research council ( erc ) through the project mhetscale ( contract no .617511 ) , and tlb acknowledges the support of the erc project reactivefronts , and agence nationale de la recherche project subsurface mixing and reaction .the pdf of given in can be written in laplace space as the pareto distribution is a levy - stable distribution , which means in particular that the long time behavior of is the same as the one of .the pareto distribution reads in dimensionless terms as its laplace transform is given by where is the exponential integral . for small ,this expression can be expanded as where the dots denote subleading contributions of order , is the euler constant .thus , we can write for small as . \end{aligned}\ ] ] inverse laplace transform of this expression , gives for the form ,\end{aligned}\ ] ] where \exp(-\lambda t ) .\end{aligned}\ ] ] denotes the landau distribution .it behaves for as .expression describes the density in the limit of large times or large . in order to test this approximation , we performed numerical random walk simulations for realizations of the random time .the obtained pdf is rescaled as ,\end{aligned}\ ] ] in the limit , we expect .figure [ fig : landau ] shows for and compared to , which is obtained by numerical inverse laplace transform .the maximum of is assumed at , as illustrated in figure [ fig : landau]b .a for ( blue ) and ( green ) obtained from random walk simulations for realizations of the stochastic process .the red line indicates the landau pdf obtained from numerical inverse laplace transform of .right : comparison of the ( red ) landau pdf defined by and ( green ) the approximation by the moyal distribution.[fig : landau],title="fig:",scaledwidth=45.0% ] b for ( blue ) and ( green ) obtained from random walk simulations for realizations of the stochastic process .the red line indicates the landau pdf obtained from numerical inverse laplace transform of .right : comparison of the ( red ) landau pdf defined by and ( green ) the approximation by the moyal distribution.[fig : landau],title="fig:",scaledwidth=45.0% ] we consider now the average , which is dominated by the bulk of the landau distribution . to this end, we note that the bulk of the landau distribution can be approximated by the moyal distribution \end{aligned}\ ] ] as \end{aligned}\ ] ] with and .thus , we may approximate in terms of as \right)\end{aligned}\ ] ] the mean of is then approximated by f_m(\langle x_m \rangle + x),\end{aligned}\ ] ] where is the mean of the moyal distribution .thus , we obtain approximately for . 
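the moyal form quoted above is easy to verify numerically; the sketch below checks the closed-form density against scipy's implementation and prints its mean, which for the standard parametrisation equals the euler-mascheroni constant plus ln 2.

```python
import numpy as np
from scipy import stats

def moyal_pdf(x):
    """standard Moyal density f(x) = exp(-(x + exp(-x))/2) / sqrt(2*pi)."""
    return np.exp(-0.5 * (x + np.exp(-x))) / np.sqrt(2.0 * np.pi)

x = np.linspace(-5.0, 10.0, 7)
print(np.round(moyal_pdf(x), 6))
print(np.round(stats.moyal.pdf(x), 6))          # matches the closed form above
print("mean of the Moyal law :", stats.moyal.mean(),
      " = euler_gamma + ln(2) =", np.euler_gamma + np.log(2.0))
```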
the mean of the as approximated by the moyal distribution through is given by .
|
under steady flow conditions , the topological complexity inherent to all random 3d porous media imparts complicated flow and transport dynamics . it has been established that this complexity generates persistent chaotic advection via a three - dimensional ( 3d ) fluid mechanical analogue of the baker s map which rapidly accelerates scalar mixing in the presence of molecular diffusion . hence pore - scale fluid mixing is governed by the interplay between chaotic advection , molecular diffusion and the broad ( power - law ) distribution of fluid particle travel times which arise from the non - slip condition at pore walls . to understand and quantify mixing in 3d porous media , we consider these processes in a model 3d open porous network and develop a novel stretching continuous time random walk ( ctrw ) which provides analytic estimates of pore - scale mixing which compare well with direct numerical simulations . we find that chaotic advection inherent to 3d porous media imparts scalar mixing which scales exponentially with longitudinal advection , whereas the topological constraints associated with 2d porous media limits mixing to scale algebraically . these results decipher the role of wide transit time distributions and complex topologies on porous media mixing dynamics , and provide the building blocks for macroscopic models of dilution and mixing which resolve these mechanisms . = 1 lagrangian chaos , porous media , mixing , scalar transport
|
for the last two decades we have seen that information processing with quantum systems gives us a huge advantage from its classical counterparts .quantum entanglement which lies at the heart of quantum information theory is one of the key reason for such technological leap .it plays a significant role in computational and communicational processes like quantum key generation , secret sharing , teleportation , superdense coding , entanglement swapping , remote entanglement distribution , broadcasting and in many more tasks . in one hand quantum information theory allows us to do so many things that can not be done with classical information processing systems while at the same time it also forbids us from doing several operations which are otherwise possible with classical systems .one such fundamental impossibility in this context is the inability to clone quantum states .this can be put in form of a theorem called no cloning theorem " which states that there does not exist any unitary operation that will take two distinct non - orthogonal quantum states ( , ) into states , respectively .even though we can not copy an unknown quantum state perfectly but quantum mechanics never rules out the possibility of cloning the state at least in an approximate manner .probabilistic cloning is also another possibility where one can always clone an arbitrary quantum state perfectly with some non vanishing probability of success .+ the term broadcasting can be used in different perspectives like broadcasting of states , broadcasting of entanglement and more recently in broadcasting of quantum correlation .barnum et al .were the first to talk about the broadcasting of states where they showed that non - commuting mixed states do not meet the criteria of broadcasting .many authors showed by using sophisticated methods that correlations in a single bipartite state can be locally broadcast if and only if the states are classical in nature ( i.e. having classical correlation ) .when we refer broadcasting of an entangled state , we mean creating more pairs of less entangled state from a given entangled state .one way of doing this is by applying local cloning transformations on each qubit of the given entangled state .this can also be done by applying global cloning operations on the entangled state itself .in their paper buzek et al .showed that indeed the decompression of initial quantum entanglement is possible by doing local cloning operation .further , in a separate work bandyopadhyay et al . showed that universal quantum cloners with fidelity is greater than are only suitable as the non - local output states becomes inseparable for some values of the input parameter .in addition to this they proved that an entanglement is optimally broadcast only when optimal quantum cloners are used .they also showed that broadcasting of entanglement into more than two entangled pairs is not possible using only local operations .ghiu investigated the broadcasting of entanglement by using local optimal universal asymmetric pauli machines and showed that the inseparability is optimally broadcast when symmetric cloners are applied .in other works , authors investigated the problem of secretly broadcasting of three - qubit entangled state between two distant partners with universal quantum cloning machine and then the result is generalized to generate secret entanglement among three parties .various other works on broadcasting of entanglement depending on the types of qcms were also done in the later period . 
in another recent work we studied whether we can broadcast quantum correlation that goes beyond entanglement . in a recent work authorshave investigated the problem of broadcasting of quantum correlation that goes beyond the notion for general two qubit states .+ the problem of complementarity or mutually exclusiveness of quantum phenomenons was from the beginning of quantum mechanics . following the discovery of heisenberg uncertainty principle it was bohr who came up with the concept of complementarity in the following year .even in quantum information theory mutually exclusive aspects of physical phenomenon is not something new as there had been previous instances depicting the complementarity between the local and non local information of the quantum systems and between the correlation generated in dual physical processes like cloning and deletion .we do broadcasting when we require more number of entangled states in a network instead of a highly entangled state .this is mostly required when we require to do distributed information processing tasks .it is natural to expect in a bipartite situation that these newly born entangled states are less suitable in tasks like teleportation and super dense coding than the parent state .however it is not well known how their capability are controlled by our ability of broadcasting . in this workwe find out several complimentary relations manifesting the interdependence of their information processing capabilities and fidelity of broadcasting .we extend our investigation in a situation where instead of using cloning , we have used cloning transformations both locally and non locally .eventually we find out the fidelity of broadcasting and the change in the information processing capacities for different values of ( ) and investigate how these complimentary relations behave with the increase in number of copies ( ) . in section 2we briefly describe the standard procedure of broadcasting of quantum entanglement with the aid of both local and non local cloning machine . in section 3we present the complimentary relationship between the fidelity of broadcasting and the capability of information processing tasks like teleportation and super dense coding .first of all we provide numerical results in form of various plots for general two qubit mixed states .then we have considered particular examples like werner like states and bell diagonal state and show the complimentary phenomenon in each of these examples .in section 4 we separately study complimentary phenomenon with increased number of obtained copies by applying general optimal cloning transformation instead of cloning transforms .quantum cloning transformations can be viewed as a completely positive trace preserving map between two quantum systems , supported by an ancilla . in this section , firstly we revisit the gisin - massar ( g - m ) qcm which we will later use for our complementarity analysis with the entangled output states in the broadcasting process via local cloning .more particularly , it is an optimal state - independent qcm which creates identical copies from an input qubit .when , the g - m cloner reduces to the buzek - hillery ( b - h ) local state - independent optimal cloner .secondly , we extend the idea of g - m cloner to a state independent two dimensional nonlocal cloner like the b - h qcm in higher dimensions which copies a general two qubit input state into identical copies with an optimal fidelity . 
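a minimal sketch of the single-clone statistics of the optimal universal 1 -> m qubit cloner: each reduced clone is the input state with its bloch vector shrunk by the factor (m+2)/(3m), which reproduces the well-known optimal fidelity (2m+1)/(3m), equal to 5/6 for m = 2 (the b-h cloner). this only models the reduced output of each copy, not the full g-m unitary with machine states described in the next section.

```python
import numpy as np

def gm_clone_fidelity(M):
    """single-copy fidelity of the optimal universal 1 -> M qubit cloner."""
    return (2 * M + 1) / (3 * M)

def clone_output(psi, M):
    """reduced state of each clone: Bloch vector shrunk by (M+2)/(3M)."""
    rho_in = np.outer(psi, psi.conj())
    eta = (M + 2) / (3 * M)
    return eta * rho_in + (1 - eta) * np.eye(2) / 2

psi = np.array([np.cos(0.3), np.exp(1j * 0.7) * np.sin(0.3)])   # arbitrary pure qubit
for M in (2, 3, 6):
    rho_c = clone_output(psi, M)
    f = float(np.real(psi.conj() @ rho_c @ psi))
    print(M, round(f, 4), round(gm_clone_fidelity(M), 4))       # the two values agree
```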
+ * local state - independent cloning : *the unitary operator qcm is described as : where , denotes the initial state of the copy machine , denotes the blank copies , represent the ortho - normalized internal states of the qcm . here , the symmetric and normalized states are given by with qubits in the state and qubits in the orthogonal state .a combinatorial series calculation illustrates that this unitary operator acts on an arbitrary input state as : where is the internal state of our qcm with for all . with which transforms under rotation as a complex conjugate representation, we get a general expression for .when we recognize the machine states of the qcm with the states , then the states become . +* nonlocal state - independent cloning :* the nonlocal version ( ) of the above qcm is described as , where with we get ; and represent blank states .moreover , is the initial machine state , are the machine state after cloning , is the probability that there are errors out of cloned states . represents a normalized state in which states are in state and states are in state .similarly , the machine states after cloning can be represented as where , = no . of ways in which states can be chosen out of available basis set , where such that .in this section , we briefly discuss the principle of broadcasting of quantum entanglement ( inseparability ) with the help of both local and nonlocal cloning operations .let us start with a two qubit mixed state shared by two distant parties and which can be canonically expressed as : =\left\{\vec{x},\:\vec{y},\ : t\right\}\:\:\ : \mbox{(say ) , } \label{eq : mix}\end{aligned}\ ] ] where and are bloch vectors with and . here , ( = ) are elements of the correlation matrix ( {3 \times 3} ] , + = ] , + = $ ] are separable .we could have also chosen the diagonal pairs ( ) instead of choosing the pairs : as our desired pairs in the above definition . however , we refrain ourselves from choosing the pairs as the desired pairs . +in this section , we present the central idea of our work where we establish the complimentary relations between the fidelity of broadcasting process with the decremental change in the information processing abilities as a consequence of generation of lesser entangled pairs from an initially more entangled resource . as explained in the previous section , this entire process of broadcasting can happen by the use of either local or nonlocal cloning operations . in our complementarity analysis , we consider two of the most common quantum information processing protocols , namely teleportation and superdense coding .however , we conjecture that this complementary nature between broadcasting fidelity and information processing ability of an input entangled state will hold true for all known information processing protocols in the quantum world . after applying either of the cloning process, we separately calculate the change in the maximal teleportation fidelity and the superdense coding capacity of the entangled state to find that these values can not be arbitrary values .this change has a trade off with the broadcasting fidelity of the process .in fact the sum of these two quantities is a constant expression .interestingly this shows us that each of these quantities are complimentary in nature . in other words ,better is the fidelity of broadcasting lesser is the change in the information processing capabilities . 
in short ,if we broadcast well we preserve the capacity of the entangled state to be used as an useful resource .next , we briefly define + * teleportation fidelity : * + quantum teleportation is sending the quantum information belonging to one party to a distant party with the help of a resource entangled state .it is well known that all pure entangled states dimensions are useful for teleportation . however , the situation is not so trivial for mixed entangled states .there are also entangled states which can not be used as a resource for teleportation .however after suitable local operation and classical communication ( locc ) one can always convert them to states which become useful for teleportation .the extent to which a two qubit state can be used as a resource for teleportation is quantified by the maximal fidelity of teleportation ( ) . for a general two qubit mixed state ( given by eq . ) as a resource , we have the defined for it as , .\end{aligned}\ ] ] where are the eigenvalues of the matrix .a quantum state is said to be useful for teleportation when is more than which is the classically achievable limit of fidelity of teleportation .one such example of an useful resources is the werner state in dimensions for a certain range of its classical probabilities of mixing .other examples , of mixed entangled states as a resource for teleportation also exists . + * super dense coding capacity * + quantum super dense coding involves in sending of classical information from one sender to the receiver when they are sharing a quantum resource in form of an entangled state .more specifically , superdense coding is a technique used in quantum information theory to transmit classical information by sending quantum systems .it is quite well known that if we have a maximally entangled state in as our resource , then we can send bits of classical information . in the asymptotic case , we know one can send amount of bit when one considers non - maximally entangled state as resource .it had been seen that the number of classical bits one can transmit using a non - maximally entangled state in as a resource is , where is the smallest schmidt coefficient .however , when the state is maximally entangled in its subspace then one can send up to bits .+ in this subsection , we consider the most generalized two qubit mixed state as our initial resource given by eq .. then we apply generalized buzek hillery cloning transformation on this state to obtain four qubit states as output .the cloning is carried out both locally on individual qubits and nonlocally on both the qubits . then in each case we trace out the redundant qubits to obtain the newly generated entangled pairs ( for local cloning ) and ( for nonlocal cloning ) .for each of them , we compute the broadcasting fidelity which is given by , in the transition of the initial state to the final state through the decompression process .interestingly , we observe that the sum of both these quantities with the corresponding broadcasting fidelities is always bounded by a quantity depending on the initial state parameters .we plot these sums in subsequent figures for both local and nonlocal cloning transformations . 
in other wordswe show that these two quantities namely broadcasting fidelity ( ) and the change in these information processing capabilities ( ) are complimentary to each other .more particularly , an increase in one quantity will bring down the other .+ _ with local cloner : _+ in fig .[ fig : mostgenlocal ] we plot the sum a ) and b ) with the trace of the square of the initial state .the randomly generated state parameters are set of points on bloch sphere .each point represents a two qubit state .this figure corresponds to the situation when we have used local cloning transformation on respective qubits . in both of these casesthe respective sums are bounded . in ( a ) the sum of these quantitiescan never go beyond as is bounded by while the teleportation fidelity can never be more than . similarly in ( b ) the maximum dense coding capacity for a two qubit state is ( for bell states ) so here serves as an upper bound to this sum . as for a given state the sum is always fixed , we conclude that an increase in any one of the quantities will bring down the other . + _ with nonlocal cloner : _ + in fig .[ fig : mostgennonlocal ] we plot the sum a ) and b ) with the trace of the square of the initial state .however , in this situation we have used non local cloning transformations for the purpose of broadcasting . quite similarto the previous figure here also we observe that all these sums are respectively bounded by and .+ next we exemplify this complementarity with the help of two well known class of mixed states namely , werner - like and bell diagonal states . in each of these caseswe show that how broadcasting fidelity and the change in the information processing capabilities maintains a complementarity relationship with each other .+ * _ example a : werner - like states _ * + * broadcasting via local cloning : * + in this example , we consider the class of werner - like states as our resource states .these states can more formally be expressed as , , where is the bloch vector and the correlation matrix with the condition . + after applying the optimal local cloning process given by eq . , the nonlocal output states appear to be , . using peres - horodecki theorem , we discover that the broadcasting range is given by , where . + next we provide two different tables for detailed analysis of the above broadcasting range . in table[ tbl : werner_like_local_1 ] , we give the broadcasting range of the werner - like states in terms for the different values of the input state parameter . here , we also calculate the decremental effect caused to the maximal teleportation fidelity and superdense coding capacity as a result of the broadcasting process .the sum of each of these quantities with broadcasting fidelity for a given value of are provided in this table .there is a clear indication that sum of these quantities is constant for a given value of the input state parameters .
|
complementarity have been an intriguing feature of physical systems for a long time . in this work we establish a new kind of complimentary relations in the frame work of quantum information processing tasks . in broadcasting of entanglement we create many pairs of less entangled states from a given entangled state both by local and non local cloning operations . these entangled states can be used in various information processing tasks like teleportation and superdense coding . since these states are less entangled states it is quite intuitive that these states are not going to be as powerful resource as the initial states . in this work we study the usefulness of these states in tasks like teleportation and super dense coding . more precisely , we found out bounds of their capabilities in terms of several complimentary relations involving fidelity of broadcasting . in principle we have considered general mixed as a resource also separately providing different examples like a ) werner like states , b ) bell diagonal states . here we have used both local and non local cloning operations as means of broadcasting . in the later part of our work , we extend this result by obtaining bounds in form of complimentary relations in a situation where we have used cloning transformations instead of cloning transformations .
|
information - theoretic security has been a very active area of research recently .( see and for overviews of recent progress in this field . ) in particular , significant progress has been made in understanding the fundamental limits of multiple - input multiple - output ( mimo ) secret communication .more specifically , the secrecy capacity of the mimo gaussian wiretap channel was characterized in .the works and considered the problem of mimo gaussian broadcast channels with two confidential messages , each intended for one receiver but needing to be kept asymptotically perfectly secret from the other , and provided a precise characterization of the capacity region .the capacity region of the mimo gaussian broadcast channel with two receivers and two independent messages , a common message intended for both receivers and a confidential message intended for one of the receivers but needing to be kept asymptotically perfectly secret from the other , was characterized in .this paper presents two new results on mimo gaussian broadcast channels with confidential messages : * the problem of the mimo gaussian wiretap channel is revisited .a matrix characterization of the _ capacity - equivocation _ region is provided , which extends the result of on the secrecy capacity of the mimo gaussian wiretap channel to the general , possibly imperfect secrecy setting . * the problem of mimo gaussian broadcast channels with two receivers and _ three _ independent messages , a common message intended for both receivers , and two mutually confidential messages each intended for one of the receivers but needing to be kept asymptotically perfectly secret from the other ,is considered . a precise characterization of the capacity region is provided , generalizing the results of and which considered only two out of three possible messages . _notation_. vectors and matrices are written in bold letters .all vectors by default are column vectors .the identity matrices are denoted by , where a subscript may be used to indicate the size of the matrix to avoid possible confusion .the transpose of a matrix is denoted by , and the trace of a square matrix is denoted by .finally , we write ( or , equivalently , ) whenever is positive semidefinite . +consider a mimo gaussian broadcast channel with two receivers , one of which is a legitimate receiver and the other is an eavesdropper .the received signals at time index are given by & = & { \mathbf{h}}_r\mathbf{x}[m]+{\mathbf{w}}_r[m]\\ { \mathbf{z}}[m ] & = & { \mathbf{h}}_e\mathbf{x}[m]+{\mathbf{w}}_e[m ] \end{array } \label{eq : ch}\ ] ] where and are ( real ) channel matrices at the legitimate receiver and the eavesdropper respectively , and \}_m ] are independent and identically distributed ( i.i.d . )additive vector gaussian noise processes with zero means and _ identity _ covariance matrices .the transmitter has a single message , which is uniformly distributed over where is the _ rate _ of communication .the goal of communication is to deliver reliably to the legitimate receiver while keeping it information - theoretically secure from the eavesdropper . following the classical work , for every it is required that for sufficiently large , where is the block length of communication , ,\ldots,{\mathbf{z}}[n]) ] are i.i.d .additive vector gaussian noise processes with zero means and identity covariance matrices .. however , different notation is used here for the convenience of presentation . 
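as a small numerical aside , the sketch below evaluates the familiar gaussian - input secrecy rate for the wiretap channel just described , namely the log - det difference between the legitimate receiver s and the eavesdropper s mutual informations for an input covariance s . the secrecy capacity under a matrix power constraint is the maximum of this quantity over 0 <= s <= s_bar , an optimization the snippet does not attempt ; the channel matrices and the constraint used here are arbitrary placeholders .

    import numpy as np

    def gaussian_secrecy_rate(Hr, He, S):
        """[0.5*logdet(I + Hr S Hr^T) - 0.5*logdet(I + He S He^T)]^+ in nats."""
        rr = 0.5 * np.linalg.slogdet(np.eye(Hr.shape[0]) + Hr @ S @ Hr.T)[1]
        re = 0.5 * np.linalg.slogdet(np.eye(He.shape[0]) + He @ S @ He.T)[1]
        return max(rr - re, 0.0)

    rng = np.random.default_rng(0)
    Hr = rng.standard_normal((2, 2))        # legitimate receiver's channel
    He = 0.3 * rng.standard_normal((2, 2))  # eavesdropper's (weaker) channel
    S_bar = np.eye(2)                       # matrix power constraint
    print(gaussian_secrecy_rate(Hr, He, S_bar))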
] as illustrated in fig .[ fig : chm ] , the transmitter has a common message and two independent confidential messages and .the common message is intended for both receivers .the confidential message is intended for receiver but needs to be kept asymptotically perfectly secret from the other receiver .mathematically , for every we must have for sufficiently large block length .our goal here is to characterize the entire capacity region that can be achieved by any coding scheme , where , and are the communication rates corresponding to the common message and the confidential messages and , respectively . with both confidential messages and but _ without _ the common message , the problem was studied in for the multiple - input single - output ( miso ) case and in for general mimo case . rather surprisingly , it was shown in that , under a matrix power constraint both confidential messages can be _ simultaneously _ communicated at their respected maximum rates . with the common message and only _ one _ confidential message ( or ) , the capacity region of the mimo gaussian wiretap channel was characterized in using a channel - enhancement approach and an extremal entropy inequality of weingarten _ et al. _ .the main result of this section is a precise characterization of the capacity region of the mimo gaussian broadcast channel with a more complete message set that includes a common message and two independent confidential messages and .[ thm : gmbc ] the capacity region of the mimo gaussian broadcast channel with a common message and two confidential messages and under the matrix power constraint is given by the set of nonnegative rate triples such that for some , and .+ by setting we can recover the result of ( * ? ? ?* theorem 1 ) that includes both confidential messages and but without the common message . similar to (* theorem 1 ) , for any given the upper bounds on and can be simultaneously maximized by a same .in fact , the upper bounds on and in are fully symmetric with respect to and , even though it is not immediately evident from the expressions themselves .by setting we can recover the result of ( * ? ? ?* theorem 1 ) that includes the common message and the confidential message but without the other confidential message . fig .[ fig : ccm2](a ) illustrates the capacity region for the channel matrices and the matrix power constraint as given by ( the channel parameters are the same as those used for fig . [fig : ce ] . ) in fig .[ fig : ccm2](b ) , we have also plotted the -cross section of for several given values of . note that when , the -cross section is _ rectangular _ , implying that under a matrix power constraint , both confidential messages and can be simultaneously transmitted at their respective maximum rates . for , however , the -cross sections are generally non - rectangular as different boundary points on the same cross section may correspond to _ different _ choice of .the capacity region under an average total power constraint is summarized in the following corollary .the result is a direct consequence of theorem [ thm : gmbc ] and ( * ? ? ?* lemma 1 ) .the capacity region of the mimo gaussian broadcast channel with a common message and two confidential messages and under the average total power constraint is given by next , we prove theorem [ thm : gmbc ] . following , we shall focus on the canonical case in which the channel matrices and are square and invertible and the matrix power constraint is strictly positive definite . 
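the exact matrix expressions in the theorem above were stripped from this text , so they are not reproduced here . purely as an illustration of how such regions are explored numerically , the sketch below sweeps a naive split s1 + s2 = s_bar of the power constraint and evaluates , for each confidential message , the same kind of log - det secrecy - rate difference used for the wiretap channel . this is a heuristic trace , not the capacity region of the theorem , and all matrices are placeholders .

    import numpy as np

    def logdet_rate(Ha, Hb, S):
        """Secrecy-style rate to receiver a against receiver b for covariance S."""
        ra = 0.5 * np.linalg.slogdet(np.eye(Ha.shape[0]) + Ha @ S @ Ha.T)[1]
        rb = 0.5 * np.linalg.slogdet(np.eye(Hb.shape[0]) + Hb @ S @ Hb.T)[1]
        return max(ra - rb, 0.0)

    rng = np.random.default_rng(1)
    H1, H2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    S_bar = np.eye(2)
    for a in np.linspace(0.0, 1.0, 5):
        S1, S2 = a * S_bar, (1.0 - a) * S_bar   # naive split of the power constraint
        print(round(a, 2), round(logdet_rate(H1, H2, S1), 3), round(logdet_rate(H2, H1, S2), 3))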
in this case , multiplying both sides of by , the mimo gaussian broadcast channel can be equivalently written as = { \mathbf{x}}_k[m]+{\mathbf{z}}_k[m ] , \quad k=1,2 \label{eq : ch - a}\ ] ] where \}_m ] , \} ] and \} ] , \} ] and \} ] .randomly partition the codewords into bins so that each bin contains codewords .further partition each bin into sub - bins so that each sub - bin contains codewords .label each of the codewords as where denotes the bin number , denotes the sub - bin number within each bin , and denotes the codeword number within each sub - bin .we will refer to the codeword collection as the -subcodebook corresponding to .[ fig : nb ] illustrates the overall codebook structure . __ to send a message triple , the transmitter first chooses the codeword from the -codebook .next , the transmitter looks into the -subcodebook corresponding to and _ randomly _( according to a uniform distribution ) chooses a codeword from the sub - bin of the bin .once a is chosen , an input sequence is generated according to }$ ] and is then sent through the channel . _decoding at receiver 1 ._ given , receiver 1 looks into the codebooks and and searches for a pair of codewords that are jointly typical with . in the case when with high probability the transmitted codeword pair is the only one that is jointly typical with . _security at receivers 2 and 3 . _fix . in the casewhen we have ( * ? ? ?* theorem 1 ) for sufficiently large .since and are independent , we have from that i.e. , the message is asymptotically perfectly secure at the eavesdropper . to summarize , for any given and any , any rate triple that satisfies is achievable .note that eliminating , and from and using fourier - motzkin elimination , we may conclude that any rate pair satisfying is achievable . to prove the converse part of the lemma, we first consider an upper bound on the confidential message rate .the perfect secrecy condition ( [ eq : eqv2 ] ) implies that for every , on the other hand , fano s inequality ( * ? ? ?. 2.11 ) implies that for every , + h(\epsilon_0 ) \notag\\ & : = n\delta .\label{eq : sd1}\end{aligned}\ ] ] applying ( [ eq : eqv4 ] ) and ( [ eq : sd1 ] ) , we have + \bigl[n \delta - h(w_s , w_p|y^{n})\bigr]\notag\\ & \le h(w_s , w_p|z^{n})-h(w_s , w_p|y^{n } ) + n ( \epsilon+\delta ) .\label{eq : apout2}\end{aligned}\ ] ] by the chain rule of the mutual information ( * ? ? ?2.5 ) , \notag\\ & = \sum_{i=1}^{n}\bigl [ i(w_s , w_p;y_i|y^{i-1},z_{i+1}^{n})\notag\\ & \qquad \quad -i(w_s , w_p;z_i|y^{i-1},z_{i+1}^{n})\bigr ] \label{eq : apout3}\end{aligned}\ ] ] where the last equality follows from ( * ? ? ?* lemma 7 ) .let and we have from ( [ eq : apout3 ] ) that .\label{eq : apout4}\end{aligned}\ ] ] next , we consider an upper bound on the sum private - confidential message rate . by ( [ eq : sd1 ] ) , applying the chain rule of the mutual information ( * ? ? ?2.5 ) , we have applying the standard single - letterization procedure ( e.g. , see ( * ? ? ? 
* ch .14.3 ) ) to and , we have the desired converse result for lemma [ lemma : dmc ] .the perfect secrecy condition ( [ eq : ps-2 ] ) implies that for every , [ eq : ssd ] \notag\\ & \le \epsilon_0 \log\left(2^{nr_0}-1\right)+h(\epsilon_0 ) : = n\delta_0 \label{eq : ssd0}\\ h&(w_1|\widetilde{y}_{1a}^{n } ) \notag\\ & \le \epsilon_0 \log\left(2^{nr_1}-1\right ) + h(\epsilon_0 ) : = n\delta_1 \label{eq : ssd1}\\ \text{and } \qquad h&(w_2|\widetilde{y}_{2a}^{n } ) \notag\\ & \le \epsilon_0 \log\left(2^{nr_2}-1\right ) + h(\epsilon_0 ) : = n\delta_2 .\label{eq : ssd2}\end{aligned}\ ] ] let which satisfies the markov chain we first bound based on ( [ eq : ssd0 ] ) as follows : similarly , we have next , we bound based on ( [ eq : eeqv1 ] ) and ( [ eq : ssd1 ] ) as follows : + \bigl[n \delta_1-h(w_1|\widetilde{y}_{1a}^{n})\bigr]\notag\\ & = h(w_1|w_0,y_{2b}^{n})+i(w_1;w_0|y_{2b}^{n})-h(w_1|\widetilde{y}_{1a}^{n } ) \notag\\ & \quad + n ( \epsilon+ \delta_1)\notag\\ & \le h(w_1|w_0,y_{2b}^{n})+h(w_0|y_{2b}^{n})-h(w_1|w_0,\widetilde{y}_{1a}^{n } ) \notag\\ & \quad + n ( \epsilon+ \delta_1 ) .\label{eq : app - r1b1}\end{aligned}\ ] ] substituting ( [ eq : ssd1 ] ) into ( [ eq : app - r1b1 ] ) , we may obtain applying ( * ? ? ?* lemma 7 ) , ( [ eq : app - r1b2 ] ) can be rewritten as + n ( \epsilon+\delta_0+\delta_1)\notag \\ & \le \sum_{i=1}^{n } \bigl[i(x_i;\widetilde{y}_{1a , i}|w_0,\widetilde{y}_{1a}^{i-1},y_{2b , i+1}^{n } ) \notag\\ & \quad -i(x_i;y_{2b , i}|w_0,\widetilde{y}_{1a}^{i-1},y_{2b , i+1}^{n})\bigr ] + n(\epsilon+\delta_0+\delta_1 ) \label{eq : upr1 - 2}\end{aligned}\ ] ] where ( [ eq : upr1 - 2 ] ) follows from the markov chain moreover , due to the markov chain we can further bound as \notag\\ & \quad + n ( \epsilon+\delta_0+\delta_1)\notag\\ & = \sum_{i=1}^{n}\bigl[i(x_i;\widetilde{y}_{1a , i}|u_i,\widetilde{y}_{1a}^{i-1 } ) \notag\\ & \quad -i(x_i;y_{2b , i}|u_i,\widetilde{y}_{1a}^{i-1})\bigr ] + n ( \epsilon+\delta_0+\delta_1 ) \label{eq : upr1 - 3}\\ & = \sum_{i=1}^{n } \bigl[i(x_i;\widetilde{y}_{1a , i}|u_i ) -i(x_i;y_{2b , i}|u_i)\bigr ] \notag\\ & \quad -\bigl[i(\widetilde{y}_{1a}^{i-1};\widetilde{y}_{1a , i}|u_i ) -i(\widetilde{y}_{1a}^{i-1};y_{2b , i}|u_i)\bigr ] \notag\\ & \quad +n ( \epsilon+\delta_0+\delta_1)\notag\\ & \le \sum_{i=1}^{n } \bigl[i(x_i;\widetilde{y}_{1a , i}|u_i ) -i(x_i;y_{2b , i}|u_i)\bigr ] \notag\\ & \quad + n ( \epsilon+\delta_0+\delta_1 ) \label{eq : upr1 - 4}\end{aligned}\ ] ] where ( [ eq : upr1 - 3 ] ) follows from the definition of in ( [ eq : def - u ] ) , and ( [ eq : upr1 - 4 ] ) follows from the fact that is degraded with respect to so . finally , applying the standard single - letterization procedure ( e.g. , see ( * ? ? ?* chapter 14.3 ) ) to ( [ eq : upr0 - 1 ] ) , ( [ eq : upr0 - 2 ] ) , ( [ eq : upr1 - 4 ] ) and ( [ eq : upr2 - 1 ] ) proves the desired result for lemma [ lemma : dmc2 ] .r. bustin , r. liu , h. v. poor , and s. shamai ( shitz ) , an mmse approach to the secrecy capacity of the mimo gaussian wiretap channel , " _ eurasip journal on wireless communications and networking _ , 2009 .r. liu , t. liu , h. v. poor , and s. shamai ( shitz ) , multiple - input multiple - output gaussian broadcast channels with confidential messages , " _ ieee trans .inf . theory _ ,56 , no . 9 , pp . 42154227 , sep . 2010. h. d. ly , t. liu , and y. liang , multiple - input multiple - output gaussian broadcast channels with common and confidential messages , " _ ieee trans .inf . theory _ ,56 , no . 11 , pp .54775487 , nov .2010 .h. 
weingarten , y. steinberg , and s. shamai ( shitz ) , the capacity region of the gaussian multiple - input multiple - output broadcast channel , " _ ieee trans .inf . theory _ ,vol . 52 , no . 9 , pp . 39363964 , sep . 2006 .h. weingarten , t. liu , s. shamai ( shitz ) , y. steinberg , and p. viswanath , the capacity region of the degraded multiple - input multiple - output compound broadcast channel , " _ ieee trans .inf . theory _ ,55 , no . 11 , pp . 50115023 , nov .2009 .r. liu , i. maric , p. spasojevic , and r. d. yates , discrete memoryless interference and broadcast channels with confidential messages : secrecy rate regions , " _ ieee trans .inf . theory _ ,vol . 54 , no . 6 , pp .24932507 , june 2008 .
|
this paper presents two new results on multiple - input multiple - output ( mimo ) gaussian broadcast channels with confidential messages . first , the problem of the mimo gaussian wiretap channel is revisited . a matrix characterization of the capacity - equivocation region is provided , which extends the previous result on the secrecy capacity of the mimo gaussian wiretap channel to the general , possibly imperfect secrecy setting . next , the problem of mimo gaussian broadcast channels with two receivers and three independent messages : a common message intended for both receivers , and two confidential messages each intended for one of the receivers but needing to be kept asymptotically perfectly secret from the other , is considered . a precise characterization of the capacity region is provided , generalizing the previous results which considered only two out of three possible messages . multiple - input multiple - output ( mimo ) communication , wiretap channel , capacity - equivocation region , broadcast channel , confidential message
|
synthesis of diagnostic test accuracy studies is the most common medical application of multivariate meta - analysis .meta - analysis is broadly defined as the quantitative review of the results of related but independent studies .the purpose of a meta - analysis of diagnostic test accuracy studies is to combine information over different studies , and provide an integrated analysis that will have more statistical power to detect an accurate diagnostic test than an analysis based on a single study .accurate diagnosis plays an important role in the disease control and prevention .diagnostic test accuracy studies observe the result of a gold standard procedure which defines the presence or absence of a decease and the result of a diagnostic test .they typically report the number of true positives ( diseased people correctly diagnosed ) , false positives ( non - diseased people incorrectly diagnosed as diseased ) , true negatives and false negatives . as the sensitivity ( proportion of those with the disease ) and specificity ( proportion of those without the disease ) are estimated from different samples in each study ( diseased and non - diseased patients ) , they can be assumed to be independent so that the within - study correlations are set to zero .however , there may be a negative between - studies association which should be accounted for .a negative association between these quantities across studies is likely because studies that adopt less stringent criterion for declaring a test positive invoke higher sensitivities and lower specificities . in situations where studies compare a diagnostic test with its gold standard, heterogeneity arises between studies due to the differences in disease prevalence , study design as well as laboratory and other characteristics .because of this heterogeneity , a generalized linear mixed model ( glmm ) has been recommended in the biostatistics literature to synthesize information .note in passing that it is equivalent with the hierarchical summary receiver operating characteristic model in rutter and gatsonis for the case without covariates .the glmm assumes independent binomial distributions for the true positives and true negatives , conditional on the latent pair of transformed ( via a link function ) sensitivity and specificity in each study .the random effects ( latent pair of transformed sensitivity and specificity ) are jointly analysed with a bivariate normal ( bvn ) distribution .chu _ et al ._ propose an alternative mixed model which operates on the original scale of sensitivity and specificity .the random effects follow the bivariate sarmanov s family of distributions with beta margins . however , this random effects distribution has a limited range of dependence and is inappropriate for general modelling unless the responses are weakly dependent .hence , this model is too restrictive in the context of diagnostic accuracy studies where strong ( negative ) dependence is likely .we propose a copula mixed model as an extension of the glmm and mixed model in chu _ by rather using a copula representation of the random effects distribution with normal and beta margins , respectively .copulas are a useful way to model multivariate data as they account for the dependence structure and provide a flexible representation of the multivariate distribution .the theory and application of copulas have become important in finance , insurance and other areas , in order to deal with dependence in the joint tails . 
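before turning to the copula extension , the short simulation below generates data from the standard glmm just described : study - level logit sensitivities and specificities drawn from a bivariate normal with a negative correlation , followed by binomial counts of true positives and true negatives . all parameter values are illustrative and are not taken from any of the cited analyses .

    import numpy as np

    def simulate_glmm(n_studies=25, mu=(1.5, 2.0), sd=(0.6, 0.8), rho=-0.6, seed=0):
        """Bivariate GLMM: (logit sens_i, logit spec_i) ~ BVN, then binomial counts."""
        rng = np.random.default_rng(seed)
        cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                        [rho * sd[0] * sd[1], sd[1] ** 2]])
        latent = rng.multivariate_normal(mu, cov, size=n_studies)
        sens = 1 / (1 + np.exp(-latent[:, 0]))
        spec = 1 / (1 + np.exp(-latent[:, 1]))
        n_dis = rng.integers(20, 200, n_studies)   # diseased subjects per study
        n_non = rng.integers(20, 200, n_studies)   # non-diseased subjects per study
        return rng.binomial(n_dis, sens), n_dis, rng.binomial(n_non, spec), n_non

    tp, n_dis, tn, n_non = simulate_glmm()
    print(np.corrcoef(tp / n_dis, tn / n_non)[0, 1])   # typically negative across studies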
here , we indicate that this can also be important in meta - analysis of diagnostic test accuracy studies .diagnostic test accuracy studies is a prime area of application for copula models , as the traditional assumption of multivariate normality is invalid in this context .a copula approach for meta - analysis of diagnostic accuracy studies was recently proposed by kuss _ who explored the use of a copula model for observed discrete variables ( number of true positives and true negatives ) which have beta - binomial margins .this model is actually an approximation of a copula mixed model with beta margins for the latent pair of sensitivity and specificity .although , this approximation can only be used under the unrealistic case that the number of observations in the respective study group of healthy and diseased probands is the same for each study . in real data applications , the number of true positives and negatives do not have a common support over different studies , hence , one can not conclude that there is a copula .the natural replicability is in the random effects probability for sensitivity and specificity . the remainder of the paper proceeds as follows .section [ stand - model - sec ] summarizes the standard glmm for synthesis of diagnostic test accuracy studies .section [ copula - mixed - model - sec ] has a brief overview of relevant copula theory and then introduces the copula mixed model for diagnostic test accuracy studies and discusses its relationship with existing mixed models .section [ sec - families ] discusses suitable parametric families of copulas for the copula mixed model , deduces summary receiver operating characteristic curves for the proposed model through quantile regression techniques and different characterizations of the bivariate random effects distribution , and demonstrates that they can show the effect of different model assumptions .section [ miss - section ] contains small - sample efficiency calculations to investigate the effect of misspecifying the random effects distribution on parameter estimators and standard errors and compare the proposed methodology to existing methods .section [ vuong - sec ] summarizes the assessment of the proposed models using the vuong s statistic , which is based on sample difference in kullback - leibler divergence between two models and can be used to differentiate two parametric models which could be non - nested .section [ sec - appl ] presents applications of our methodology to four data frames with diagnostic accuracy data from binary test outcomes .we conclude with some discussion in section [ sec - discussion ] , followed by a section with the software details and a technical appendix .we first introduce the notation used in this paper .the focus is on two - level ( within - study and between - studies ) cluster data .the data are are , where is an index for the within study measurements and is an index for the individual studies .the data , for study , can be summarized in a table with the number of true positives ( ) , true negatives ( ) , false negatives ( ) , and false positives ( ) ; see table [ 2times2 ] ..[2times2]data from an individual study in a table . 
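the layout of that table was lost in extraction , but its content is just the four counts per study ; the helper below recovers the observed sensitivity tp / ( tp + fn ) and specificity tn / ( tn + fp ) from such a table , with an optional continuity correction for zero cells ( a common convention , not necessarily the one used in the paper ) . the example counts are hypothetical .

    def observed_accuracy(tp, fn, tn, fp, cc=0.5):
        """Observed sensitivity and specificity from one study's 2x2 table."""
        if 0 in (tp, fn, tn, fp):
            tp, fn, tn, fp = (x + cc for x in (tp, fn, tn, fp))  # continuity correction
        return tp / (tp + fn), tn / (tn + fp)

    # hypothetical study: 36 true positives, 4 false negatives, 118 true negatives, 12 false positives
    print(observed_accuracy(36, 4, 118, 12))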
[ cols="^,^,^ " , ] for the khs log - likelihood we have the limit , .\ ] ] the limit of the khsmle ( as ) is the maximum of ( [ limitacd ] ) ; we denote this limit as .the in ( [ limitacd ] ) are the model based probabilities and are computed to at least five significant digits using gauss - legendre quadrature with a sufficient number of quadrature points as described in subsection [ computation ] . for the log - likelihood in ( [ beta - mixed - cop - likelihood ] ) , we have the limit , the limit of the mle ( as ) is the maximum of ( [ limit ] ) ; we denote this limit as .representative results are shown in table [ kuss - asym ] for a bvn copula mixed model with beta margins , with mle results omitted because they were identical with the true values up to four or five decimal places .therefore , our method leads to unbiased estimating equations .regarding the khs method , conclusions from the values in the table and other computations that we have done are that for the khs method there is asymptotic bias ( decreases as increases ) for the univariate parameters and as and increase , and substantial asymptotic downward bias for the dependence parameter ; note that this slightly decreases as increases .thanks to professor harry joe , university of british columbia , for insightful comments .haitao chu , lei nie , yong chen , yi huang , and wei sun .bivariate random effects models for meta - analysis of comparative studies with binary outcomes : methods for the absolute risk difference and relative risk ., 21(6):621633 , 2012 . a. k. nikoloulopoulos .copula - based models for multivariate discrete response data . in f. durante , w. hrdle , and p. jaworski , editors ,_ copulae in mathematical and quantitative finance _ , pages 231249 .springer , 2013 .j. b. reitsma , a. s. glas , a. w.s .rutjes , r. j.p.m .scholten , p. m. bossuyt , and a. h. zwinderman .bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews ., 58(10):982990 , 2005 .
|
diagnostic test accuracy studies typically report the number of true positives , false positives , true negatives and false negatives . there usually exists a negative association between the number of true positives and true negatives , because studies that adopt less stringent criterion for declaring a test positive invoke higher sensitivities and lower specificities . a generalized linear mixed model ( glmm ) is currently recommended to synthesize diagnostic test accuracy studies . we propose a copula mixed model for bivariate meta - analysis of diagnostic test accuracy studies . our general model includes the glmm as a special case and can also operate on the original scale of sensitivity and specificity . summary receiver operating characteristic curves are deduced for the proposed model through quantile regression techniques and different characterizations of the bivariate random effects distribution . our general methodology is demonstrated with an extensive simulation study and illustrated by re - analysing the data of two published meta - analyses . our study suggests that there can be an improvement on glmm in fit to data and makes the argument for moving to copula random effects models . our modelling framework is implemented in the package copularemada within the open source statistical environment r. + + _ keywords : _ copula models ; diagnostic tests ; multivariate meta - analysis ; random effects models ; sroc , sensitivity / specificity .
|
population permanency in a patchy environment is the result of a complex interaction between spatial heterogeneity and temporal variability of the environment , dispersal , density - dependence and population structure .each of these factors have relative importance for population growth and it differs for terrestrial and aquatic species , large and small populations , plants and animals , vertebrates and invertebrates etc .; see for instance , , . one way to theoretically approachthe problem of population dynamics is by formulating mathematical models that incorporate internal and external factors of population growth .the literature on the population models with various level of complexity is quite vast and detailed review is beyond the scope of this paper .we mention only some of the well - established models that have been developed over the years . among the unstructured models , the malthus model of exponential growth and the verhulst logistic model are especially important . for the age - structured models with density - dependency or time - dependency we refer to , , , , , , , , .the common point for the age - structured models is that _ the net reproductive rate _ and _ the characteristic equation _are used to determine permanency of a population .the spatial structure has been recognized as one of the most important factors of growth . in this case , each individual s birth and death rate are dependent upon the habitat / patch where they are in the landscape . for simplicity ,let a population inhabit a discrete space which consists of several patches .a source is a high - quality patch that yields positive population growth , while a sink is a low - quality patch and it yields negative growth rate . in isolation , every subpopulation has its own dynamics .linking the patches by dispersal lead us to the source - sink dynamics , where all local subpopulations contribute to the unique global dynamics .for populations that inhabit several patches , possibility to move from one patch to another can be crucial for survival .for example , dispersal from a source to a sink can save the local sink subpopulation from extinction through the rescue effect and recolonization , , .the influence of spatial heterogeneity in unstructured populations was studied in , , , , , . the trade - off between competition and dispersalis investigated in and the relation between dispersal pattern and permanency was discussed in , .the continuous age - structured models with spatial structure can be divided into classes . in the first type of modelsindividuals occupy position in a spatial environment and spatial movement is typically controlled by diffusion or taxis processes . in the second class fits the models with several species or populations occupying different regions ( ` patches ' ) accompanying with migration between them .the usual practice here is to have only two classes ( immature and adults ) and dispersion between a few ( two or three ) temporally unchangeable patches , as in e.g. , , , , . 
in this paper, we provide a rigorous mathematical derivation of the results considering the existence and uniqueness of a solution in a fairly general form in presence of migrations .inspired by the single - patch models we come to the fundamental questions : * is it possible to define an analogue of the characteristic equation and the net reproductive rate for the several - patches model ?* if so , can they be used for the analysis of the large - time behavior of the solution and for establishing the condition for the population s permanency ?the main contribution of the paper is in the rigorous proof that the both questions have affirmative answers in the constant , periodic and the general time - dependent case .the method that we use for the time - dependent cases allows us to consider fluctuations that are not necessarily small in amplitude .besides , we use general results to discuss the real world problems , such as the survival of migrating species and pest control . to set up the model, we follow the argument of and , and assume that a population is age - structured , density - dependent and inhabits temporally variable and different patches . a local subpopulation on each patch experiences intraspecific competition , which results in additional density - dependent mortality .let denote the age distribution in the population patch at time with the corresponding birth rate and the initial distribution of population .then the assumption that only the members of the age class are competing led to the following mckendrick - von foerster type balance equations : in the domain subject to the _ birth law _ and the _ initial age distribution _ here denotes the _ maximal length of life _ of individuals in population at age , where is the mortality rate of the population patch , and the the dispersion matrix describes the migration rates between patches : the coefficients define a proportion of individuals of age at age on patch that migrates to patch . then is the total population at time .the predecessor of the present model in the single patch case is the model proposed by von foerster ; a detailed analysis was given by gurtin and maccamy and chipot , . a comprehensive treatment of this approach is given by iannelli .prss , was the first to study a mathematical model of an -species population with age - specific interactions in absence of migration . by using the theory of semilinear evolution equations he established the well - posedness and the existence of an equilibrium solution under certain constraints on the birth and death rates .he also derived some ( local or asymptotic ) stability results for for the equilibrium solutions .when , migration between patches is absent , and the system splits into independent balance equations .this model under an additional assumption that is the logistic regulatory function has recently been studied in .the case is much more challenging . in modeling the source - sink dynamics ,fundamentally important is the fact that individuals can disperse and move from one patch to another .migration , which in the biological terms means a round - trip from a birthplace , is particularly significant .then it is natural to expect that the global and asymptotic behaviour of solutions to is determined by both the sign pattern and the weighted graph associated with .* outline*. a summary of the mathematical framework and our main results are presented in section [ sec : main ] . 
in section [ sec : prel ] we discuss an auxiliary model and derive some preliminary results on the corresponding lower and upper solutions . in section [ sec : general ]we prove the existence and uniqueness of a solution to the balance equations ( [ genpr])([genic ] ) by reducing the original problem to a certain nonlinear integral equation . in section [ sec : const ]we define the associated characteristic equation and the maximal solution , and establish one of the key results of the paper : the net reproductive rate dichotomy .the remaining part of the paper is dedicated to the study of the asymptotic behavior and stability of the solution .we consider three cases : a constant environment ( i.e. the time - independent case ) in section [ sec : const ] , a periodic environment in section [ sec : periodic ] and an irregularly changing environment ( i.e. the general time - dependent case ) in section [ sec : irreg ] . [ [ notations . ] ] notations .+ + + + + + + + + + for easy reference we fix some standard notation used throughout the paper . denotes the positive cone .given we use the standard vector order relation : if for all , if and , and if for all .given , in particular , if is an -matrix we define for any in an obvious manner identifying with an element of .given and a continuous function , we define providing the main results , we give a brief summary of the structure conditions imposed on the balanced equations .we always assume that and are continuous - functions . ] for and is a continuous function of .furthermore suppose the following structure conditions hold : 1 .[ hain1 ] there exists such that for all and 2 .[ hain2 ] for any fixed , is a nonnegative nondecreasing function of for , and there exist real numbers , , and a function such that 3 .[ hain3 ] and is a _ metzler matrix _ : 4 .[ hain4 ] and there exist such that \times \r{+}.\ ] ] 5 .[ hain5 ] the function is continuous and .let us briefly explain the above conditions from the biological perspective . concerning [ hain1 ] , one usually uses a more restrictive condition that is a constant . nevertheless , is a more reasonable assumption: it means that the maximal length of life of individuals in a population may depend on but it grows not faster then the time .mathematically , asserts that the boundary curve is transversal to the characteristics of .the monotonicity assumption in [ hain2 ] ensures that increase in age - class density increases the death rate and has a negative effect on population growth .the classical example of the density independent mortality rate is compatible with in [ hain2 ] .another example is the logistic type model with where is the regulatory function ( carrying capacity ) ; this example fits [ hain2 ] for . concerning the metzler condition in [ hain3 ] , note that the dispersion coefficient expresses the proportion of population that from patch goes to patch , which naturally yields that .furthermore , according the support condition in [ hain4 ] , the improper integral in ( [ genbc ] ) is well - defined and actually is taken over the finite interval ] , hence applying by the fundamental theorem of calculus and ( [ metzler1 ] ) that as desired .[ lemma2 ] let be an upper solution of a.e . in such that . then on .furthermore , if then for .first we claim that is also an upper solution of ( [ system1 ] ) a.e . in ,where .indeed , since each is a locally lipschitz function , there exists a full lebesgue measure subset where all are differentiable . 
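the structure conditions above can be made concrete with a small simulation . the sketch below steps a two - patch age - structured , density - dependent model along characteristics ( time step equal to age step ) , with a dispersion matrix that is metzler and has zero column sums so that migrants leaving one patch all arrive at the other . all vital rates are illustrative , and the indexing convention for d is an assumption , since the original formulas were stripped from this text .

    import numpy as np

    def simulate_patches(T=200.0, A=50.0, dt=1.0):
        """Explicit scheme along characteristics for a 2-patch model:
        aging + density-dependent mortality + linear dispersal + birth law at age 0."""
        na = int(A / dt)
        age = np.arange(na) * dt
        birth = np.vstack([0.40 * np.exp(-0.5 * ((age - 15) / 5) ** 2),   # patch 0: source-like
                           0.05 * np.exp(-0.5 * ((age - 15) / 5) ** 2)])  # patch 1: sink-like
        death = np.vstack([0.02 + 0.001 * age,
                           0.08 + 0.002 * age])
        D = np.array([[-0.02, 0.01],    # Metzler off-diagonals, zero column sums
                      [ 0.02, -0.01]])
        n = np.full((2, na), 1.0)       # initial age distribution on both patches
        for _ in range(int(T / dt)):
            total = n.sum(axis=1, keepdims=True) * dt
            mort = death + 1e-4 * total             # extra mortality from competition
            growth = -mort * n + D @ n              # reaction plus dispersal
            newborn = (birth * n).sum(axis=1) * dt  # birth law at age zero
            n[:, 1:] = n[:, :-1] + dt * growth[:, :-1]   # transport along characteristics
            n[:, 0] = newborn
            n = np.maximum(n, 0.0)
        return n.sum(axis=1) * dt       # total population per patch

    print(simulate_patches())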
we will show that satisfy on .let and .if for some then , hence is a local maximum of ( because everywhere ) .this yields .furthermore , since , we have by lemma [ lem : metzler ] and ( [ fequal0 ] ) that if then by the continuity of one has , in some neighbourhood of .thus , applying ( [ system1 ] ) we have by and lemma [ lem : metzler ] that holds everywhere in the neighbourhood of .thus , the claim is proved .we also claim is that any upper solution to ( [ system1 ] ) with and for is identically zero in the interval . indeed ,if is such a function then let be chosen as the supremum of all such that in ] such that , and thus .since is locally lipschitz in , there exist and such that for any and any .define ( recall that by the assumption for all and ) . by the continuity of ,there exists such that for any .let the set be defined as above and . since by ( [ fequal0 ] ) , we have the latter inequality yields a.e . in is locally lipschitz it is absolutely continuous , thus in ] . since , where is the coordinate vector , lemma [ lem : metzler ] and the nonnegativity of yield that .\ ] ] the latter yields , thus for every ] .in particular , .since , there exists such that and , thus , . by the assumption, there exists a directed path in the graph .equivalently , there exists a sequence of pair - wise distinct , , such that for any , let us define where denotes the coordinate unit vector in .then therefore by ( [ system1 ] ) and lemma [ lem : metzler ] it follows for that hence . arguing as in ( [ fund ] )we find it follows from ( [ lastin ] ) , the nonnegativity of and the partial derivatives ( for ) that all summands of the latter sum must vanish .since the integrands are non - negative continuous functions , they must vanish identically for ] .let for some there exists such that the patch is accessible at .then there exist such that \subset \operatorname{\mathrm{supp}}m_k ] .there are and points , , such that ( i ) for ] then we claim that for all ] .in particular , .\ ] ] next , we assume that ] , where and are the same as in lemma [ lem : conv ] .since is not identically zero , there exists an interval }.\ ] ] therefore k=1,\ldots , n}.\ ] ] this implies that and }.\ ] ] therefore , for such and if is sufficiently small positive number .this gives positivity of ( [ int52 ] ) for . if then the first integral in ( [ int52 ] ) is estimated from below by and it is positive for ] , hence using ( [ nphipsi ] ) we have for any that next , by theorem [ th : main ] and theorem [ th : est ] we have and furthermore by ( [ balancev ] ) there holds satisfies by continuity of solutions ( [ balancev1 ] ) with respect to a parameter and ( [ varphieq ] ) , we have for any fixed that this readily yields . in this sectionwe shall assume that the condition hold , i.e. the biological meaning of the latter inequality is that individuals do not reproduce during migration ( but can die ) .this condition immediately implies that throughout this section , we use the following notation : [ prop : estsigma ] under the made assumptions , by corollary [ cor : krein ] there exists an eigenvector of corresponding the maximal eigenvalue , i.e. 
.let us consider the problem ( [ yax0 ] ) with the initial condition .using the assumption ( [ cond : d ] ) and summing up the equations ( [ yax0 ] ) for all we obtain that satisfies which readily yields then by ( [ reprmat ] ) since the sum we arrive at the right hand side of ( [ est : s ] ) .now , in order to prove the left hand side inequality in ( [ est : s ] ) , notice that in the made notation by virtue of for and for all admissible we have which yields in virtue of that combining this with ( [ reprmat ] ) we obtain thus implying ( [ est : s ] ) by virtue of .the estimates ( [ est : s ] ) are optimal . indeed , if , the system ( [ yax0 ] ) splits into separate equations implying that each is an eigenvector of with eigenvalue therefore is exactly the left hand side of ( [ est : s ] ) .on the other hand , suppose all patches to have the same birth and death rates : and for any , and also that the dispersion is absent : .then a similar argument yields implying the exactness of the upper estimate in ( [ est : s ] ) . in order to establish the corresponding estimates for the maximal solution we consider an auxiliary function where the minimum is taken over the simplex [ lem : m ] in the above notation , is nondecreasing in and furthermore , where is the function from [ hain2 ] . if then . if is the minimum point of ( [ not20 ] ) then , hence using the monotonicity condition in [ hain2 ] and we obtain which yields the nondecreasing monotonicity .in particular the limit in ( [ meanm ] ) does exist .denote it by . since , we have .in particular , .conversely , given let be the corresponding minimum point of ( [ not20 ] ) .let the number , , be chosen such that .define for and .then passing to the limit as in the latter inequality yields , thus implying ( [ meanm ] ) .finally , assume again that is the minimum point of ( [ not20 ] ) for .then using [ hain2 ] and the hlder inequality we obtain which yields ( [ mtilde ] ) . [ pro : upper ] in the notation of proposition [ prop : estsigma ] , if then there exists a unique such that where furthermore , since , the maximal solution and .let denote the corresponding solution of ( [ varphieq ] ) satisfying ( [ char111 ] ) .let . then summing up equations ( [ varphieq ] ) and using ( [ cond : d ] ) and ( [ not20 ] ) we obtain the obtained inequality implies that is a ( positive ) decreasing function of , in particular , .we have from ( [ mtilde ] ) rewriting the obtained inequality for yields after integrating this yields by virtue of next , since , it readily follows that this yields by virtue of that since the integral is a decreasing function of and , there exists ( a unique ) solving the equation ( [ r+ ] ) , thereby proving ( [ est : up ] ) . let us comment on ( [ est : up ] ) from the biological point of view .notice by theorem [ th : est ] that is the asymptotical value of the total number of newborns on all patches . by the dichotomy, implies , thus the total asymptotical number of newborns is zero . 
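as a computational aside , the net reproductive operator and its maximal eigenvalue can be seen in action with the sketch below : it assembles a matrix r whose j - th column integrates the birth rates against the solution of the linear survival - plus - dispersal system started from a unit cohort of newborns on patch j , and takes the spectral radius of r as the net reproductive rate , so that the dichotomy reads r > 1 for permanency and r <= 1 for extinction . this is one natural discretization consistent with the description above ( the precise formulas were stripped ) , and the rates are the illustrative ones used earlier .

    import numpy as np

    def net_reproductive_rate(birth, death, D, A=50.0, da=0.01):
        """Spectral radius of R, where R[:, j] = int_0^A diag(b(a)) y_j(a) da and
        y_j solves y' = (D - diag(m(a))) y with y(0) = e_j (newborns on patch j)."""
        n = D.shape[0]
        R = np.zeros((n, n))
        for j in range(n):
            y = np.zeros(n); y[j] = 1.0
            for a in np.arange(0.0, A, da):
                R[:, j] += birth(a) * y * da
                y = y + da * ((D - np.diag(death(a))) @ y)   # explicit Euler step
        return float(max(abs(np.linalg.eigvals(R))))

    birth = lambda a: np.array([0.40, 0.05]) * np.exp(-0.5 * ((a - 15.0) / 5.0) ** 2)
    death = lambda a: np.array([0.02 + 0.001 * a, 0.08 + 0.002 * a])
    D = np.array([[-0.02, 0.01],
                  [ 0.02, -0.01]])
    r = net_reproductive_rate(birth, death, D)
    print(r, "permanency" if r > 1.0 else "extinction")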
on the other hand , in the nontrivial case , hence by ( [ est : s ] ) , which easily implies that ( [ r+ ] ) has a positive solution .the next proposition provides a lower estimate for the maximal solution .[ pro : lower ] let there exist a function such that if for some then where is the unique solution to equation and first notice that ( [ lowere ] ) implies by ( [ est : s ] ) that , thus .since for , the -th equation in ( [ varphieq ] ) yields hence using ( [ hain20 ] ) we obtain by virtue of that arguing similar to the proof of proposition [ pro : upper ] we get from that therefore since , one has , hence again , let then is decreasing , and by ( [ lowere ] ) , thus there exists ( a unique ) solution of ( [ r- ] ) such that .now we consider an important particular case of the main problem ( [ genpr])([genic ] ) when the environment is periodically changing . in this section and in the rest of the paper, it is assumed that the vital rates , regulating function and dispersion coefficients are time - dependent and periodic with a period . the boundary - initial value problem ( [ genpr])([genic ] ) is now in a -periodic domain , where , , under the periodicity assumption that for any . throughout this section , we assume that the conditions [ hain1][hain5 ] are satisfied . notice that the existence and uniqueness of a solution to the periodic problem follows from the general result given by proposition [ prsol ] and it is given explicitly by ( [ n1 ] ) .note also that need not to be periodic in but it is natural to expect that converges to a -periodic function for sufficient large , where solves the associated _ characteristic equation _ here the operator is defined by and denotes the ( unique ) solution of the initial value problem where the initial condition we shall assume that the nonnegative cone is equipped with the supremum norm )} ] . to this end, we assume that is such that where and are the structure constants in [ hain1 ] and [ hain4 ] . rewriting and using the property that for any outside ] . since \times \r{} ] and one has the inequality this yields . in order to estimate , we notice that is the solution of the initial problem ( [ newpde ] ) .notice that by ( [ phibound ] ) )}\le \sqrt{n}e^{n\|\mathbf{d}\|b}\|\rho\|_\infty\le c_2:=r\sqrt{n}e^{n\|\mathbf{d}\|b}.\ ] ] let where the latter equality is by the periodicity .therefore , applying the mean value theorem to ( [ newpde ] ) we obtain for any and for some that where depends only on the structure conditions and .this readily implies choosing small enough , yields the desired conclusion .[ max_p ] for any such that , where is defined by ( [ omegadef ] ) , the limit exists and is a solution to the characteristic equation .furthermore , the limit does not depend on a particular choice of and it is the maximal solution to equation in the sense that if is any solution to the characteristic equation then .furthermore , if is a lower solution then . since and by the monotonicity of get : which implies that is a non - increasing sequence .the sequence is bounded from below because , therefore there exists a pointwise .the sequence is uniformly bounded by the constant .applying lemma [ uniform ] to family implies that the convergence is in fact uniform on each compact subset of .thus is a nonnegative continuous -periodic solution of .the rest of the proof is analogous to the proof of proposition [ it ] . in the remaining part of this sectionwe additionally assume that additionally condition [ hain6 ] holds . 
in that case , due to the periodicity, the infimum in [ hain6 ] can be replaced by the minimum . then arguing similarly to lemma [ lem : ymono ] ,one can verify that for any and , hence the corresponding net reproductive operator is well - defined defined by where is the solution of the linear system let denote the largest eigenvalue of and let be the maximal solution of equation ( [ chart ] ) .then the following results are established similarly to theorem [ th : main ] , theorem [ th : est ] and theorem [ th : estn ] respectively . if , then the characteristic equation has no nontrivial solutions ( in particular , ) . if , then is the only nontrivial solution of equation .[ th : estp ] if and is a solution to then .let be the total multipatch population . if , then as .if , then where is the maximal solution to the characteristic equation ( [ chart ] ) .in order to study asymptotic behavior of the solution to the model ( [ genpr])([genic ] ) in the case when temporal variation is irregular , we assume that the vital rates , regulating function and dispersion coefficients are bounded from below and above by equiperiodic functions for large .these periodic functions define two auxiliary periodic problems , whose solutions provide upper and lower bounds to a solution of the original problem .this leads us to two - side estimates of a solution to the original problem for large .more precisely , throughout this section we shall suppose that there exists and -periodic functions , and such that for any and as in section [ sec : periodic ] , one can consider the corresponding characteristic equations where denote or , and the operators are defined component - wise by and is the unique solution of the system with . then by proposition [ prsol ] where also let us denote by and the corresponding net reproductive operators and net reproductive rates .the main result of this section states that a solution of the population problem in an irregularly changing environment can be estimated by the corresponding solutions of the associated periodically varying population problems .[ main_gen ] let be a solution to equation .then the following dichotomy holds : 1 . if , then .2 . if and , then for any there exists such that where are solutions to . without loss of generality . let and let us define and iteratively for by where the operators and are defined by ( [ k ] ) and ( [ f ] ) respectively .arguing as in the proof of proposition [ gen_sol ] , we obtain the existence , where is a solution to ( [ n ] ) . also by proposition [ max_p ] , , where is the maximal solution to ( [ charpkpm ] ) .we will prove by induction that for any there holds for the claim follows from for .next , by our choice of , for . since for any andthe structure parameters are estimated by ( [ pb ] ) , one easily deduces from the definition of that for and . since and for all ] for sufficiently large .now suppose that ] . for , we have that , and , hence this proves that function defined by ( [ lowso ] ) is a lower solution of equation .therefore , if , then and characteristic equations ( [ charpkpm ] ) have nontrivial solutions . then by virtue of theorem [ th : estp ] , and . passing to the limit in ( [ est:1 ] ) and ( [ est:2 ] ) yields ( [ est : pernb ] ) .in this section we consider two simple applications of our approach showing how dispersion promotes survival of a population on sink patches . 
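a standing assumption used throughout , including in the periodic results just stated , is that every patch is accessible from every other patch through the dispersal graph . the helper below checks this mechanically from a dispersion matrix by building the directed graph with an edge j -> k whenever the off - diagonal entry d[k , j] is positive and testing reachability ; the orientation of the edges is an assumption about the stripped indexing convention .

    import numpy as np

    def accessibility(D):
        """Boolean reachability matrix of the dispersal graph (edge j -> k iff D[k, j] > 0)."""
        n = D.shape[0]
        adj = ((D > 0) & ~np.eye(n, dtype=bool)).astype(int)
        reach = np.eye(n, dtype=int) + adj
        for _ in range(n):                       # extend paths one edge at a time
            reach = ((reach @ adj + reach) > 0).astype(int)
        return reach.astype(bool)

    D = np.array([[-0.03, 0.00, 0.02],
                  [ 0.03, -0.01, 0.00],
                  [ 0.00,  0.01, -0.02]])
    R = accessibility(D)
    print(R)
    print("every patch reachable from every patch:", bool(R.all()))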
in the usual situation ,a habitat is a mixture of sources and sinks .our first example shows that permanency on all patches is possible if the patches are connected and if emigration from sources is sufficiently small and does not cause extinction of a local subpopulation .some researchers indicate that survival of migrating species is possible even if all occupied patches are sinks , see . taking migratory birds as an example ,we demonstrate that this is possible under certain conditions . in order to demonstrate the influence of dispersion on persistence of population, we compare a system with isolated patches with the corresponding system with dispersion .recall that in the isolated case , implying by ( [ sigmaind ] ) that the net reproductive rate of the patch is given by where is the survival probability . in this casethe spectrum of the net reproductive operator is we assume that and , for . in the biological terms , this is equivalent to saying that the first patch is a source and all other patches are sinks . without migration ,the population will persist on the first patch and become extinct on all other patches . for details about the age - structured logistic model that we used to describe isolated patches , we refer readers to . under the made assumptions , where is uniquely determined by let us allow a small migration between patches and assume that there also holds and , for .let us suppose that the dispersion coefficients where is a small number and the parameters satisfy [ hain3 ] in section [ sec : form ] .then the standard linearization argument shows that the solution to the corresponding time - independent model is given by therefore , the net reproductive operator takes the form then latter relation yields now , recall that if is a symmetric matrix and is an eigenvector with a simple eigenvalue then the corresponding perturbed eigenvalue of ( may not be symmetric ) is given by for , the largest eigenvalue is with the eigenvector .the perturbed eigenvalue , which will be the net reproductive rate for the net reproductive operator , is and this is greater than one for small provided that and strictly negative in at least one point of the support of .thus shows that survival on all patches is possible if emigration from the source is sufficiently small .now consider the extreme situation when a population inhabits two patches and the net reproductive rate on _ each _ patch is less or equal to one .we will demonstrate that , even in this case , there is a chance of survival if the structure parameters are suitably chosen .a realistic example for this kind of situation is a population of migratory birds .their habitats consists of two patches : breeding range ( characterized by the high birth rate in summer and high death rate in winter ) and non - breeding range ( low birth and death rates ) .thus , the breeding range is a sink because of the winter conditions , and the non - breeding range is a sink because of too few births .this implies extinction of population on both patches if there is no dispersal .if the dispersion matrix satisfies then the solution to the system ( [ exsys ] ) for is given by where a solution to this system is given by then , the net reproductive operator satisfies in the matrix form this becomes where thus , to show that , it is sufficient to show that for some choice of parameters and certain vector . 
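the first - order perturbation formula recalled above ( for a symmetric matrix with a simple eigenvalue , the eigenvalue of the perturbed , possibly non - symmetric matrix shifts by the quadratic form of the perturbation in the unperturbed unit eigenvector ) is easy to check numerically ; the matrices below are random placeholders rather than an actual net reproductive operator .

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # symmetric unperturbed operator
    P = rng.standard_normal((4, 4))                      # perturbation, not symmetric
    w, V = np.linalg.eigh(A)
    lam, v = w[-1], V[:, -1]                             # largest eigenvalue, unit eigenvector

    eps = 1e-4
    first_order = lam + eps * (v @ P @ v)                # lambda(A + eps*P) to first order
    exact = max(np.linalg.eigvals(A + eps * P).real)
    print(first_order, exact, abs(first_order - exact))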
using it follows that the functions and can be written as : sine the function monotonically increases from to , there exists a unique such that .suppose that .let us choose parameters such that for and for , that is or equivalently , we put and choose and and as solutions to equations : and it follows that and hence .the latter implies that that , thus has an eigenvalue greater than one , which proves the permanency of population on both patches .amarasekare , p. ; nisbet , r. m. spatial heterogeneity , source - sink dynamics , and the local coexistence of competing species . _ the american naturalist _ * 158 * ( 2001 ) , no . 6 , 572584 .pmid : 18707352 , .http://dx.doi.org/10.1086/323586 arditi , r. ; lobry , c. ; sari , t. is dispersal always beneficial to carrying capacity ? new insights from the multi - patch logistic equation ._ theoretical population biology _ * 106 * ( 2015 ) , 45 59 .http://www.sciencedirect.com/science/article/pii/s0040580915001021 berman , a. ; plemmons , r. j. _ nonnegative matrices in the mathematical sciences _ , academic press [ harcourt brace jovanovich , publishers ] , new york - london , 1979 .computer science and applied mathematics .cushing , j. m. _ an introduction to structured population dynamics _ , _ cbms - nsf regional conference series in applied mathematics _71 , society for industrial and applied mathematics ( siam ) , philadelphia , pa , 1998 .http://dx.doi.org/10.1137/1.9781611970005 deangelis , d. l. ; zhang , b. effects of dispersal in a non - uniform environment on population dynamics and competition : a patch model approach ._ discrete and continuous dynamical systems - series b _ * 19 * ( 2014 ) , no . 10 ,diekmann , o. ; gyllenberg , m. ; huang , h. ; kirkilionis , m. ; metz , j. ; thieme , h. r. on the formulation and analysis of general deterministic structured population models ii .nonlinear theory . _ journal of mathematical biology _ * 43 * ( 2001 ) , no . 2 , 157189 . hirsch , m. w. ; smith , h. l. competitive and cooperative systems : mini - review . in _ positive systems ( rome , 2003 )_ , _ lecture notes in control and inform .183190 , springer , berlin , 2003 .http://dx.doi.org/10.1007/978-3-540-44928-7_25 iannelli , m. ; pugliese , a. _ an introduction to mathematical population dynamics _ , _ unitext_ , vol .79 , springer , cham , 2014 . along the trail of volterra and lotka , la matematica per il 3 + 2 .http://dx.doi.org/10.1007/978-3-319-03026-5 kozlov , v. ; radosavljevic , s. ; tkachev , v. ; wennergren , u. : persistence analysis of the age - structured population model on several patches , in _ proceedings of the 16th international conference on mathematical methods in science and engineering _ , vol . 3 , 2016 pp . 717727arxiv:1608.04492 .kozlov , v. ; radosavljevic , s. ; turesson , b. o. ; wennergren , u. estimating effective boundaries of population growth in a variable environment ._ boundary value problems _ * 2016 * ( 2016 ) , no . 1 , 172 . http://dx.doi.org/10.1186/s13661-016-0681-9 krasnoselski , m. a. ; zabreko , p. p. _ geometrical methods of nonlinear analysis _ , _ grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences ] _ , vol . 263 , springer - verlag , berlin , 1984 . translated from the russian by christian c. fenske .meyer , c. _ matrix analysis and applied linear algebra _ , society for industrial and applied mathematics ( siam ) , philadelphia , pa , 2000 . 
with 1 cd - rom ( windows , macintosh and unix ) and a solutions manual ( iv+171 pp .http://dx.doi.org/10.1137/1.9780898719512 prss , j. on the qualitative behaviour of populations with age - specific interactions .appl . _ * 9 * ( 1983 ) , no . 3 , 327339 .hyperbolic partial differential equations .schmidt - wellenburg , c. a. ; visser , g. h. ; biebach , b. ; delhey , k. ; oltrogge , m. ; wittenzellner , a. ; biebach , h. ; kempenaers , b. trade - off between migration and reproduction : does a high workload affect body condition and reproductive state ?_ behavioral ecology _ * 19 * ( 2008 ) , no . 6 , 13511360 .smith , h. l. _ monotone dynamical systems _ , _ mathematical surveys and monographs _41 , american mathematical society , providence , ri , 1995 .an introduction to the theory of competitive and cooperative systems .webb , g. f. population models structured by age , size , and spatial position . in _structured population models in biology and epidemiology _ , _ lecture notes in math .1936 , pp .149 , springer , berlin , 2008 .
|
we consider a system of nonlinear partial differential equations that describes an age - structured population inhabiting several temporally varying patches . we prove existence and uniqueness of a solution and analyze its large - time behavior in the cases when the environment is constant and when it changes periodically . a pivotal assumption is that individuals can disperse and that each patch can be reached from every other patch , directly or through several intermediary patches . we introduce the net reproductive operator and characteristic equations for the time - independent and periodic models and prove that permanency is determined by the net reproductive rate for the whole system . if the net reproductive rate is less than or equal to one , extinction on all patches is imminent . otherwise , permanency on all patches is guaranteed . the proof is based on a new approach to the analysis of large - time stability . department of mathematics , linköping university department of mathematics , linköping university department of mathematics , linköping university department of physics , chemistry , and biology , linköping university
|
research on machine learning has achieved great success on enhancing the models accuracy and efficiency .successful models such as support vector machines ( svms ) , random forests , and deep neural nets have been applied to vast industrial applications . however , in many applications , users may need not only a prediction model , but also suggestions on courses of actions to achieve desirable goals . for practitioners , a complex model such as a random forestis often not very useful even if its accuracy is high because of its lack of actionability . given a learning model , extraction of actionable knowledge entails finding a set of actions to change the input features of a given instance so that it achieves a desired output from the learning model .we elaborate this problem using one example .* example 1*. in a credit card company , a key task is to decide on promotion strategies to maximize the long - term profit .the customer relationship management ( crm ) department collects data about customers , such as customer education , age , card type , the channel of initiating the card , the number and effect of different kinds of promotions , the number and time of phone contacts , etc . for data scientists , they need to build models to predict the profit brought by customers . in a real case , a company builds a random forest involving 35 customer features .the model predicts the profit ( with probability ) for each customer .in addition , a more important task is to extract actionable knowledge to revert `` negative profit '' customers and retain `` positive profit '' customers . in general, it is much cheaper to maintain existing `` positive profit''customers than to revert `` negative profit '' ones .it is especially valuable to retain high profit , large , enterprise - level customers .there are certain actions that the company can take , such as making phone contacts and sending promotional coupons .each action can change the value of one or multiple attributes of a customer .obviously , such actions incur costs for the company .for instance , there are 7 different kinds of promotions and each promotion associates with two features , the number and the accumulation effect of sending this kind of promotion . when performing an action of `` sending promotion_amt_n '' , it will change features `` nbr_promotion_amt_n '' and `` s_amt_n '' , the number and the accumulation effect of sending the sales promotion , respectively . for a customer with `` negative profit '' ,the goal is to extract a sequence of actions that change the customer profile so that the model gives a positive profit " prediction while minimizing the total action costs . for a customer with `` positive profit '' ,the goal is to find actions so that the customer has a `` positive profit '' prediction with a higher prediction probability . research on extracting actionability from machine learning models is still limited .there are a few existing works .statisticians have adopted stochastic models to find specific rules of the response behavior of customer .there have also been efforts on the development of ranking mechanisms with business interests and pruning and summarizing learnt rules by considering similarity . 
however , such approaches are not suitable for the problems studied in this paper due to two major drawbacks .first , they can not provide customized actionable knowledge for each individual since the rules or rankings are derived from the entire population of training data .second , they did not consider the action costs while building the rules or rankings .for example , a low income housewife may be more sensitive to sales promotion driven by consumption target , while a social housewife may be more interested in promotions related to social networks .thus , these rule - based and ranking algorithms can not tackle these problems very well since they are not personalized for each customer .another related work is extracting actionable knowledge from decision tree and additive tree models by bounded tree search and integer linear programming .yang s work focuses on finding optimal strategies by using a greedy strategy to search on one or multiple decision trees .cui et al .use an integer linear programming ( ilp ) method to find actions changing sample membership on an ensemble of trees .a limitation of these works is that the actions are assumed to change only one attribute each time . as we discussed above, actions like `` sending promotion_amt_n '' may change multiple features , such as `` nbr_promotion_amt_n '' and `` s_amt_n '' .moreover , yang s greedy method is fast but can not give optimal solution , and cui s optimization method is optimal but very slow . in order to address these challenges ,we propose a novel approach to extract actionable knowledge from random forests , one of the most popular learning models .our approach leverages planning , one of the core and extensively researched areas of ai .we first rigorously formulate the knowledge extracting problem to a sub - optimal actionable planning ( soap ) problem which is defined as finding a sequence of actions transferring a given input to a desirable goal while minimizing the total action costs .then , our approach consists of two phases . in the offline preprocessing phase ,we use an anytime state - space search on an action graph to find a preferred goal for each instance in the training dataset and store the results in a database . in the online phase , for any given input , we translate the soap problem into a sas+ planning problem. 
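to make the multi-attribute action model concrete, the following minimal python sketch represents an action as a set of transitions with a single cost; the feature indices, partition values, and costs are hypothetical and do not come from the paper's data:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

State = Tuple[int, ...]            # one partition index per feature

@dataclass(frozen=True)
class Transition:
    var: int                        # feature the transition affects
    pre: Optional[int]              # required current partition (None: applicable anywhere)
    post: int                       # partition after the transition

@dataclass(frozen=True)
class Action:
    name: str
    transitions: Tuple[Transition, ...]   # one action may change several features
    cost: float

    def applicable(self, s: State) -> bool:
        return all(t.pre is None or s[t.var] == t.pre for t in self.transitions)

    def apply(self, s: State) -> State:
        out = list(s)
        for t in self.transitions:
            out[t.var] = t.post
        return tuple(out)

# hypothetical promotion action touching two features at once
send_promo = Action(
    name="send_promotion_amt_3",
    transitions=(Transition(var=4, pre=None, post=2),    # nbr_promotion_amt_3 bucket
                 Transition(var=5, pre=None, post=3)),   # s_amt_3 bucket
    cost=2.5,
)
s0 = (0, 1, 0, 2, 1, 1)
print(send_promo.applicable(s0), send_promo.apply(s0))   # True (0, 1, 0, 2, 2, 3)
```

with this representation the action "send_promotion_amt_3" naturally carries two transitions, one for the promotion count and one for its accumulated effect, which is precisely the case that single-attribute action models cannot express.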
the sas+ planning problem is solved by an efficient maxsat - based approach capable of optimizing plan metrics .we perform empirical studies to evaluate our approach .we use a real - world credit card company dataset obtained through an industrial research collaboration .we also evaluate some other standard benchmark datasets .we compare the quality and efficiency of our method to several other state - of - the - art methods .the experimental results show that our method achieves a near - optimal quality and real - time online search as compared to other existing methods .random forest is a popular model for classification , one of the main tasks of learning .the reasons why we choose random forest are : 1 ) in addition to superior classification / regression performance , random forest enjoys many appealing properties many other models lack , including the support for multi - class classification and natural handling of missing values and data of mixed types .2 ) often referred to as one of the best off - the - shelf classifier , random forest has been widely deployed in many industrial products such as kinect and face detection in camera , and is the popular method for some competitions such as web search ranking .consider a dataset , where is the set of training samples and is the set of classification labels .each vector consists of attributes , where each attribute can be either categorical or numerical and has a finite or infinite domain .note that we use to represent when there is no confusion .all labels have the same finite categorical domain .a random forest contains decision trees where each decision tree takes an input and outputs a label , denoted as . for any label ,the probability of output is where are weights of decision trees , is an indicator function which evaluates to 1 if and 0 otherwise .the overall output predicted label is a random forest is generated as follows . for , 1 .sample ( ) instances from the dataset with replacement .train an un - pruned decision tree on the sampled instances . at each node , choose the split point from a number of randomly selected features rather than all features . in classical planning , there are two popular formalisms , strips and pddl . in recent years , another indirect formalism , sas+ , has attracted increasing uses due to its many favorable features , such as compact encoding with multi - valued variables , natural support for invariants , associated domain transition graphs ( dtgs ) and causal graphs ( cgs ) which capture vital structural information . in sas+ formalism, a planning problem is defined over a set of multi - valued _ state variables _ .each variable has a finite domain .a state is a full assignment of all the variables .if a variable is assigned to at a state , we denote it as .we use to represent the set of all states .[ defn : sas - transition ] * ( transition ) * given a multi - valued state variable with a domain , a transition is defined as a tuple , where , written as .a transition is applicable to a state if and only if .we use to represent applying a transition to a state .let be the state after applying the transition to , we have .we also simplify the notation as or when there is no confusion .a transition is a * regular transition * if or a * prevailing transition * if .in addition , denotes a * mechanical transition * , which can be applied to any state and changes the value of to . for a variable , we denote the set of all transitions that affect as , i.e. 
, for all .we also denote the set of all transitions as , i.e. , .[ defn : sas - mutex ] * ( transition mutex ) * for two different transitions and , if at least one of them is a mechanical transition and , they are compatible ; otherwise , they are mutually exclusive ( mutex ) .[ defn : sas - action ] * ( action ) * an action is a set of transitions , where there do not exist two transitions that are mutually exclusive .an action is * applicable * to a state if and only if all transitions in are applicable to .each action has a * cost * .[ def : sas - problem ] * ( sas+ planning ) * a sas+ planning problem is a tuple defined as follows * is a set of state variables .* is a set of actions .* is the initial state .* is a set of goal conditions , where each goal condition is a partial assignment of some state variables .a state is a goal state if there exists such that agrees with every variable assignment in .note that we made a slight generalization of original sas+ planning , in which includes only one goal condition . for a state with an applicable action , we use to denote the resulting state after applying all the transitions in to ( in an arbitrary order since they are mutex free ) .[ defn : sas - action - mutex ] * ( action mutex ) * two different actions and are mutually exclusive if and only if at least one of the following conditions is satisfied : * there exists a non - prevailing transition such that and .* there exist two transitions and such that and are mutually exclusive . a set of actions is applicable to if each action is applicable to and no two actions in are mutex .we denote the resulting state after applying a set of actions to as .[ defn : sas - plan ] * ( solution plan ) * for a sas+ problem + , a solution plan is a sequence , where each , is a set of actions , and there exists , . note that in a solution plan , multiple non - mutex actions can be applied at the same time step . applying all actions in in any order to state . in this work ,we want to find a solution plan that minimizes a quality metric , the * total action cost * .we first give an intuitive description of the soap problem . given a random forest and an input , the soap problem is to find a sequence of actions that , when applied to , changes it to a new instance which has a desirable output label from the random forest .since each action incurs a cost , it also needs to minimize the total action costs . in general , the actions and their costs are determined by domain experts .for example , analysts in a credit card company can decide which actions they can perform and how much each action costs .there are two kinds of features ,_ soft attributes _ which can be changed with reasonable costs and _ hard attributes _ which can not be changed with a reasonable cost , such as gender .we only consider actions that change soft attributes .[ def : oap ] * ( soap problem ) * a soap problem is a tuple , where is a random forest , is a given input , is a class label , and is a set of actions .the goal is to find a sequence of actions , to solve : where is the cost of action , is a constant , is the output of as defined in ( [ eq : prob ] ) , and is the new instance after applying the actions in to .* example 2 . *a random forest with two trees and three features is shown in figure [ fig : forest ] . 
is a hard attribute , and are soft attributes .given and an input , the output from is 0 .the goal is to change to a new instance that has an output of 1 from .for example , two actions changing from 2 to 5 and from 500 to 1500 is a plan and the new instance is .the soap problem is proven to be an np - hard problem , even when an action can change only one feature .therefore , we can not expect any efficient algorithm for optimally solving it .we propose a planning - based approach to solve the soap problem .our approach consists of an offline preprocessing phase that only needs to be run once for a given random forest , and an online phase that is used to solve each soap problem instance . since there are typically prohibitively high number of possible instances in the feature space , it is too expensive and unnecessary to explore the entire space .we reason that the training dataset for building the random forest gives a representative distribution of the instances .therefore , in the offline preprocessing , we form an action graph and identify a preferred goal state for each training sample . *( feature partitions ) * given a random forest , we split the domain of each feature ( ) into a number of partitions according to the following rules . 1. is split into partitions if is categorical and has categories . is split into partitions if is numerical and has * branching nodes * in all the decision trees in .suppose the branching nodes are , the partitions are + .in example 2 , is splited into , and are splited into and , respectively .[ defn : state - transformation ] * ( state transformation ) * for a given instance , let be the number of partitions and the partition index for feature , we transform it to a sas+ state , where and . for simplicity, we use to represent when there is no confusion .note that if two instances and transform to the same state , then they have the same output from the random forest since they fall within the same partition for every feature . in that case , we can use in place of and .given the states , we can define sas+ transitions and actions according to definitions [ defn : sas - transition ] and [ defn : sas - action ] .for example 2 , can be transformed to state , .for an input , the corresponding state is .the action changing from 2 to 5 can be represented as . thus , the resulting state of applying is .[ defn : graph ] * ( action graph ) * given a soap problem , the action graph is a graph where is the set of transformed states and an edge if and only if there is an action such that .the weight for this edge is .the soap problem in definition [ def : oap ] is equivalent to finding the shortest path on the state space graph from a given state to a goal state .a node is a goal state if .given the training data , we use a heuristic search to find a * preferred goal * state for each that . for each of such , we find a path in the action graph from to a state such that while minimizing the cost of the path . , , minheap.push( ) , closedlist minheap.pop ( ) , , * if * * then return * closedlist = closedlist minheap.push( ) algorithm [ algo : non_opt ] shows the heuristic search .the search uses a standard evaluation function . is the cost of the path leading up to .let the path be , , , , and for , we have .we define the * heuristic function * as if , otherwise . for any state satisfying , .since the goal is to achieve , measures how far is from the goal . is a controlling parameter . 
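a hedged python sketch of the offline best-first search of algorithm [ algo : non_opt ] follows; the exact heuristic formula is garbled in this copy, so h(s) = eta * (1/2 - p(y+|s)) for non-goal states (and 0 at goals) is an assumed reading, and the action objects are assumed to expose applicable/apply/cost as in the earlier sketch:

```python
import heapq

def preferred_goal_search(s0, actions, prob_pos, eta, patience=10_000):
    """best-first search for a preferred goal state (sketch of algorithm [algo:non_opt]).

    prob_pos(s): probability that the random forest assigns the desired label to state s.
    assumed heuristic: h(s) = eta * (0.5 - prob_pos(s)) for non-goal s, else 0.
    the search keeps a min-heap keyed by f = g + h and a hashed closed list, and stops
    after `patience` expansions without improving the best goal found so far.
    """
    def is_goal(s):
        return prob_pos(s) > 0.5

    def h(s):
        return 0.0 if is_goal(s) else eta * (0.5 - prob_pos(s))

    if is_goal(s0):
        return s0, 0.0
    best_goal, best_cost = None, float("inf")
    closed, stale = set(), 0
    heap = [(h(s0), 0.0, s0)]                      # heap entries: (f, g, state)
    while heap and stale < patience:
        _, g, s = heapq.heappop(heap)
        stale += 1
        if s in closed:
            continue
        closed.add(s)
        for a in actions:
            if not a.applicable(s):
                continue
            s2, g2 = a.apply(s), g + a.cost
            if is_goal(s2):                        # record goals instead of expanding them
                if g2 < best_cost:
                    best_goal, best_cost, stale = s2, g2, 0
            elif s2 not in closed:
                heapq.heappush(heap, (g2 + h(s2), g2, s2))
    return best_goal, best_cost
```

running this search once per training sample and storing the resulting (state, preferred goal) pairs yields the database used by the online phase.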
in our experiments , is set to the mean of all the action costs .algorithm [ algo : non_opt ] maintains two data structures , a min heap and a closed list , and performs the following main steps : 1 .initialize , , and where represent the number of expanded states is the best goal state ever found , and records the cost of the path leading up to .add the initial state to the min heap ( lines 1 - 2 ) .2 . pop the state from the heap with the smallest ( line 4 ) .if and , update , , and the best goal state ( lines 5 - 6 ) .4 . if the termination condition ( ) is met , stop the search and return ( line 8) .5 . add to the closed list and for each edge , add to the min heap if is not in the closed list and not a goal state ( lines 10 - 12 ) .repeat from step 2 .the closed list is implemented as a set with highly efficient hashing - based duplicate detection .the search terminates when the search has not found a better plan for a long time ( ) .we set a large value ( ) in our experiments .note that algorithm [ algo : non_opt ] does not have to search all states since it will stop the search once a state * s * satisfies the termination condition ( line 8) . by the end of the offline phase , for each and the corresponding state , we find a preferred goal state .for an input in example 2 , the corresponding initial state is .an optimal solution is where , , , and the preferred goal state is .once the offline phase is done , the results can be used to repeatedly solve soap instances .we now describe how to handle a new instance and find the actionable plan . in online sas+ planning , we will find a number of closest states of and use the combination of their goals to construct the goal .this is inspired by the idea of similarity - based learning methods such as k - nearest - neighbor ( knn ) .we first define the similarity between two states . * ( feature similarity ) * given two states and , the similarity of the i - th feature variable is defined as : * if the i - th feature is categorical , if , otherwise . *if the i - th feature is numerical , where and are the partition index of features and , and is the number of partitions of the i - th feature .note that ] , is a hard attribute and and are not in the same partition . otherwise , the similarity is where is the feature weight in the random forest .note that ] . * action variables : , and ] . for each clause , its weight is defined as . for each clause in the hard clause set , its weight is so that it must be true . has the following hard clauses : * initial state : , * goal state : .it means at leat one goal condition must be true . *goal condition : , , .if is true , then for each assignment , at least one transition changing variable to value must be true at time . *progression : and ] , .* mutually exclusive transitions : for each mutually exclusive transitions pair , ] , . *composition of actions : and $ ] , . 
*action existence : for each non - prevailing transition , .there are three main differences between our approach and a related work , sase encoding .first , our encoding transforms the sas+ problem to a wpmax - sat problem aiming at finding a plan with minimal total action costs while sase transforms it to a sat problem which only tries to find a satisfiable plan .second , besides transition and action variables , our encoding has extra goal variables since the goal definition of our sas+ problem is a combination of several goal states while in sase it is a partial assignment of some variables .third , the goal clauses of our encoding contain two kinds of clauses while sase has only one since the goal definition of ours is more complicated than sase .we can solve the above encoding using any of the maxsat solvers , which are extensively studied .using soft clauses to optimize the plan in our wpmax - sat encoding is similar to balyo s work which uses a maxsat based approach for plan optimization ( removing redundant actions ) .to test the proposed approach ( denoted as planning " ) , in the offline preprocess , in algorithm [ algo : non_opt ] is set to . in the online search , we set neighborhood size and use wpm-2014-in to solve the encoded wpmax - sat instances . for comparison , we also implement three solvers : 1 ) an iterative greedy algorithm , denoted as `` greedy '' which chooses one action in each iteration that increases while minimizes the total action costs .it keeps iterating until there is no more variables to change .2 ) a sub - optimal state space method denoted as `` ns '' .3 ) an integer linear programming ( ilp ) method , one of the state - of - the - art algorithms for solving the soap problem . ilp gives exact optimal solutions ..datasets information and offline preprocess results . [ cols="<,^,^,^,^,^,^",options="header " , ] table [ tb : summarize ] shows a comprehensive comparison in terms of the average search time , the solution quality measured by the total action costs , the action number of solutions , and the memory usage under the preprocessing percentage 100% . we report the search time ( t ) in seconds , total action costs of the solutions ( cost ) , action number of solutions ( l ) , and the memory usage ( gb ) , averaged over 100 runs . from table[ tb : summarize ] , we can see that even though our method spends quite a lot of time in the offline processing , its online search is very fast . since our method finds near optimal plans for all training samples ,its solution quality is much better than greedy while spending almost the same search time .comparing against np , our method is much faster in online search and maintains better solution qualities in a1a and ionosphere scale and equal solution qualities in other 8 datasets .comparing against ilp , our method is much faster in online search with the cost of losing optimality . typically a trained random forest model will be used for long time . since our offlinepreprocessing only needs to be run once , its cost is well amortized over large number of repeated uses of the online search . 
in short , our planning approach gives a good quality - efficiency tradeoff : it achieves a near - optimal quality using search time close to greedy search .note that since we need to store all preprocessed states and their preferred goal states in the online phase , the memory usage of our method is much larger than greedy and ns approaches .we have studied the problem of extracting actionable knowledge from random forest , one of the most widely used and best off - the - shelf classifiers .we have formulated the sub - optimal actionable plan ( soap ) problem , which aims to find an action sequence that can change an input instance s prediction label to a desired one with the minimum total action costs .we have then proposed a sas+ planning approach to solve the soap problem . in an offline phase ,we construct an action graph and identify a preferred goal for each input instance in the training dataset . in the online planning phase , for each given input , we formulate the soap problem as a sas+ planning instance based on a nearest neighborhood search on the preferred goals ,encode the sas+ problem to a wpmax - sat instance , and solve it by calling a wpmax - sat solver .our approach is heuristic and suboptimal , but we have leveraged sas+ planning and carefully engineered the system so that it gives good performance .empirical results on a credit card company dateset and other nine benchmarks have shown that our algorithm achieves a near - optimal solution quality and is ultra - efficient , representing a much better quality - efficiency tradeoff than some other methods . with the great advancements in data science , an ultimate goal of extracting patterns from data is to facilitate decision making .we envision that machine learning models will be part of larger ai systems that make rational decisions .the support for actionability by these models will be crucial .our work represents a novel and deep integration of machine learning and planning , two core areas of ai .we believe that such integration will have broad impacts in the future .note that the proposed action extraction algorithm can be easily expanded to other additive tree models ( atms ) , such as adaboost , gradient boosting trees .thus , the proposed action extraction algorithm has very wide applications . in our soap formulation ,we only consider actions having deterministic effects .however , in many realistic applications , we may have to tackle some nondeterministic actions .for instance , push a promotional coupon may only have a certain probability to increase the accumulation effect since people do not always accept the coupon .we will consider to add nondeterministic actions to our model in the near future .l. cao , c. zhang , d. taniar , e. dubossarsky , w. graco , q. yang , d. bell , m. vlachos , b. taneri , e. keogh , et al .domain - driven , actionable knowledge discovery ._ ieee intelligent systems _ , 0 (4):0 7888 , 2007 .q. lu , r. huang , y. chen , y. xu , w. zhang , and g. chen . a sat - based approach to cost - sensitive temporally expressive planning ._ acm transactions on intelligent systems and technology _ , 50 ( 1):0 18:118:35 , 2014 . j. shotton , t. sharp , a. kipman , a. fitzgibbon , m. finocchio , a. blake , m. cook , and r. moore . real - time human pose recognition in parts from single depth images ._ communications of the acm _ , 560 ( 1):0 116124 , 2013 .
|
a main focus of machine learning research has been improving the generalization accuracy and efficiency of prediction models. many models, such as svms, random forests, and deep neural nets, have been proposed and have achieved great success. however, what is missing in many applications is actionability, i.e., the ability to turn prediction results into actions. for example, in applications such as customer relationship management, clinical prediction, and advertising, users need not only accurate predictions but also actionable instructions that can transfer an input to a desirable goal (e.g., higher profits, lower morbidity rates, higher ad hit rates). existing work on deriving such actionable knowledge is scarce and limited to simple action models in which each action changes only one attribute. the dilemma is that in many real applications the action models are more complex, which makes extracting an optimal solution harder. in this paper, we propose a novel approach that achieves actionability by combining learning with planning, two core areas of ai. in particular, we propose a framework to extract actionable knowledge from random forests, one of the most widely used and best off-the-shelf classifiers. we formulate the actionability problem as a sub-optimal actionable planning (soap) problem, which is to find a plan that alters certain features of a given input so that the random forest yields a desirable output, while minimizing the total action costs. technically, the soap problem is formulated in the sas+ planning formalism and solved using a max-sat based approach. our experimental results demonstrate the effectiveness and efficiency of the proposed approach on a personal credit dataset and other benchmarks. our work represents a new application of automated planning to an emerging and challenging machine learning paradigm. actionable knowledge extraction, machine learning, planning, random forest, weighted partial max-sat
|
to identify the dynamical state of multi - planetary systems , we use the megno technique ( the acronym of mean exponential growth factor of nearby orbits ; cincotta & sim 2000 ) .this method provides relevant information about the global dynamics and the fine structure of the phase space , and yields simultaneously a good estimate of the lyapunov characteristic numbers with a comparatively small computational effort . from the megno technique, we have built the mips package ( acronym of megno indicator for planetary systems ) specially devoted to the study of planetary systems in their multi - dimensional space as well as their conditions of dynamical stability .particular planetary systems presented in this paper are only used as initial condition sources for theoretical studies of 3-body problems . by convention ,the reference system is given by the orbital plane of the inner planet at .thus , we suppose the orbital inclinations and the longitudes of node of the inner ( noted 1 ) and the outer ( noted 2 ) planets ( which are non - determined parameters from observations ) as follows : and in such a way that the relative inclination and the relative longitude of nodes are defined at as follows : and .the mips maps presented in this paper have been confirmed by a second global analysis technique ( marzari _ et al ._ 2006 ) based on the frequency map analysis ( fma ; laskar 1993 ) .studying conditions of dynamical stability in the neighborhood of the hd73526 two - planet system ( period ratio : 2/1 , see initial conditions in table 1 ) , we only find one stable and robust island ( noted ( 2 ) ) for a relative inclination of about ( see fig .[ fig1]a ) .such a relative inclination ( where in fact and ) may be considered to a coplanar system where the planet 2 has a retrograde motion with respect to the planet 1 . from a kinematic point of view , it amounts to consider a scale change of in relative inclinations . taking into account initial conditions inside the island ( 2 ) of fig . 1a , we show that the presence of a strong mean - motion resonance ( mmr ) induces clear stability zones with a nice v - shape structure , as shown in fig .1b plotted in the ] with about 0.015 au wide . due to the retrograde motion of the outer planet 2 ,this mmr is a 2:1 retrograde resonance , also noted 2:-1 mmr . ] parameter space for initial conditions taken in the stable zone ( 2 ) of panel ( a ) . note that masses remain untouched whatever the mutual inclinations may be ; they are equal to their minimal observational values .black and dark - blue colors indicate stable orbits ( and respectively with , the megno indicator value ) while warm colors indicate highly unstable orbits.,title="fig:",width=166 ] [ fig1 ] ] parameter space considering a scale reduction of the hd82943 planetary system ( see table 1 ) according to a factor 7.5 on semi - major axes ( masses remaining untouched ) .the dynamical behavior of the reduced system ( fig .2b ) with respect to the initial one ( fig .2a ) points up the clear robustness of retrograde configurations contrary to prograde ones .the `` prograde '' stable islands completely disappear while only the `` retrograde '' stable island resists , persists and even extends more or less .even for very small semi - major axes and large planetary masses , which should a priori easily make a system unstable or chaotic , stability is possible with counter - revolving orbits . 
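the relative-inclination convention used above, in which a coplanar pair with a counter-revolving outer planet corresponds to a relative inclination of about 180 degrees, can be made explicit with the standard spherical-trigonometry expression for the mutual inclination of two orbital planes; the sketch below uses generic angles rather than the elements of table 1:

```python
import numpy as np

def mutual_inclination(i1_deg, node1_deg, i2_deg, node2_deg):
    """mutual (relative) inclination of two orbits from their inclinations and
    longitudes of node: cos i_rel = cos i1 cos i2 + sin i1 sin i2 cos(node1 - node2)."""
    i1, i2 = np.radians(i1_deg), np.radians(i2_deg)
    dnode = np.radians(node1_deg - node2_deg)
    c = np.cos(i1) * np.cos(i2) + np.sin(i1) * np.sin(i2) * np.cos(dnode)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(mutual_inclination(0.0, 0.0, 0.0, 0.0))     # 0.0   -> coplanar, both planets prograde
print(mutual_inclination(0.0, 0.0, 180.0, 0.0))   # 180.0 -> coplanar, outer planet retrograde
```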
in the case of the 2:1 retrograde resonance , although close approaches happen more often ( 3 for the 2:-1 mmr ) compared to the 2:1 prograde resonance , the 2:-1 mmr remains very efficient for stability because of faster close approaches between the planets .a more detailed numerical study of retrograde resonances can be found in gayon & bois ( 2008 ) ..[tab1]orbital parameters of the hd , hd , hd , hd and hd planetary systems .data sources come from tinney et al .( 2006 ) , mayor et al .( 2004 ) , vogt et al .( 2005 ) , mccarthy et al .( 2004 ) and correia et al .( 2005 ) respectively . for each system and each orbital element, the first line corresponds to the inner planet and the second one to the outer planet . [ cols="^,^,^,^,^,^",options="header " , ] [ tab2 ]the occurence of stable two - planet systems including counter - revolving orbits appears in the neighborhood of a few systems observed in 2:1 or 5:1 mmr .new observations frequently induce new determinations of orbital elements .it is the case for the hd160691 planetary system given with 2 planets in mccarthy _( 2004 ) then with 4 planets in pepe _ et al . _hence , systems related to initial conditions used here ( see table 1 ) have to be considered as _ academic _ systems . statistical results for stability of these academic systemsare presented in table 2 , both in the prograde case ( ) and in the retrograde case ( ) . for each data source ,1000 random systems taken inside observational error bars have been integrated . among these random systems, the proportion of stable systems either with prograde orbits or with counter - revolving orbits is given in table 2 .in all cases , a significant number of stable systems is found in retrograde mmr .moreover , in most data sources , retrograde possibilities predominate .the 2:1 ( prograde ) mmrs preserved by synchronous precessions of the apsidal lines ( asps ) are from now on well understood ( see for instance lee & peale 2002 , bois _ et al ._ 2003 , ji _ et al ._ 2003 , ferraz - mello _ et al .the mmr - asp combination is often very effective ; however , asps may also exist alone for stability of planetary systems .related to subtle relations between the eccentricity of the inner orbit ( ) and the relative apsidal longitude ( i.e. ) , fig .3 permits to observe how the 2:1 retrograde mmr brings out its resources in the ] v - shape of fig .1b ) , the 2:-1 mmr is combined with a uniformly prograde asp ( both planets precess on average at the _ same rate _ and in the _ same prograde direction _ ) . * in the island ( 2 ) ( i.e. outside but close to the ] map also exposes a third island ( 3 ) that proves to be a wholly chaotic zone on long term integrations .let us note that the division between islands ( 1 ) and ( 2 ) is related to the degree of closeness to the 2:-1 mmr .2 ] .color scale and initial conditions are the same as in fig .1 with in addition the and values chosen in the island ( 2 ) of fig .1a.,title="fig:",width=166 ] + [ fig3 ]we have found that retrograde resonances present fine and characteristic structures particularly relevant for dynamical stability . we have also shown that in cases of very compact systems obtained by scale reduction , only the `` retrograde '' stable islands survive . 
from our statistical approach and the scale-reduction experiment, we have demonstrated the efficiency of retrograde resonances for stability. this efficiency can be understood in terms of the very fast close approaches between the planets, even though such approaches are more frequent. we plan to present a hamiltonian approach to retrograde mmrs in a forthcoming paper (gayon, bois, & scholl, 2008). besides, in gayon & bois (2008), we propose two mechanisms of formation for systems harboring counter-revolving orbits; free-floating planets or the slingshot model might indeed explain the origin of such planetary systems.
|
multi-planet systems detected until now are in most cases characterized by hot jupiters close to their central star as well as by high eccentricities. as a consequence, from a dynamical point of view, compact multi-planetary systems form a class of the general n-body problem (with n >= 3) whose solutions are not necessarily known. extrasolar planets have so far been found in prograde (i.e. direct) orbital motion about their host star and often in mean-motion resonances (mmr). in the present paper, we investigate a theoretical alternative suitable for the stability of compact multi-planetary systems. when the outer planet moves on a retrograde orbit in mmr with respect to the inner planet, we find that the so-called retrograde resonances present fine and characteristic structures that are particularly relevant for dynamical stability. we show that retrograde resonances and their resources open a family of stabilizing mechanisms involving specific behaviors of the apsidal precessions. we also point out that for particular orbital data, retrograde mmrs may provide more robust stability than the corresponding prograde mmrs.
|
hill s vortex is one of the few known analytical solutions of euler s equations in three - dimensional ( 3d ) space . in the cylindrical polarcoordinate system , it represents a compact axisymmetric region of azimuthal vorticity ] ( `` : = '' means `` equal to by definition '' ) , these flows satisfy the following system in the frame of reference moving with the translation velocity of the vortex [ eq : euler3d ] where the vorticity function has the form in which and are constants . system therefore describes a compact region with azimuthal vorticity varying proportionally to the distance from the flow axis embedded in a potential flow .the boundary of this region is a priori unknown and must be found as a part of the problem solution . system thus represents a _ free - boundary _ problem and , as will become evident below , this property makes the study of the stability of solutions more complicated . for hill s vortex the equilibrium shape of the boundary has the form of a sphere with the flow described in the translating frame of reference by the streamfunction , & \mbox{if } & x^2+\sigma^2 > a^2 , \end{array } \right .\label{eq : psihill}\ ] ] where is the radius of the sphere .the components of the velocity field ^t ] , so that and . therefore , below we will use as our independent variable . combining and replacing the line integrals with the corresponding definite ones we finally obtain the perturbation equation [ eq : l ] where denotes the associated linear operator and [ eq : i ] = \left[-3\cos\theta + 5 \,\cos\theta'\right ] \, \q(\theta,\theta ' ) , \label{eq : i1 } \\ i_2(\theta,\theta ' ) & : = \n_\theta \cdot \dpartial{\bk(\theta,\theta')}{n_{\theta } } = \left[\cos\theta + \cos\theta ' \right ] \ , \q(\theta,\theta ' ) \label{eq : i2 } \end{aligned}\ ] ] in which k ( \tilde{r } ) + \sin\theta ' \left [ \cos\theta -\cos\theta ' \right]^2 e ( \tilde{r } ) } { 2 \pi \ , \left [ \cos(\theta-\theta ' ) -1 \right ] \sqrt{ 2- 2\,\cos(\theta-\theta ' ) } } } , \\ \tilde{r } & : = \sqrt { { \left[\cos \left ( -\theta+\theta ' \right ) -\cos \left ( \theta+\theta ' \right)\right ] / \left[1-\cos \left ( \theta+\theta ' \right ) \right]}}.\end{aligned}\ ] ] as regards the singularities of the kernels , one can verify by inspection that [ eq : ksing ] the singularities of the kernels and vanish at and . properties will be instrumental in achieving spectral accuracy in the discretization of system .equation is a first - order integro - differential equation and as such would in principle require only one boundary condition .however , since the kernel is obtained via averaging with respect to the azimuthal angle ( due to the axisymmetry assumption , see ) , the different terms and integrands in equation exhibit the following reflection symmetries [ eq : refl ] \quad \left(\v_0\cdot\t\right)_\theta & = - \left(\v_0\cdot\t\right)_{-\theta } , \label{eq : refl_a } \\ i_1(\theta,\theta ' ) & = - i_1(-\theta,-\theta ' ) , \label{eq : refl_b } \\i_2(\theta,\theta ' ) & = - i_2(-\theta,-\theta ' ) \label{eq : refl_c}\end{aligned}\ ] ] indicating that equation is invariant with respect to the change of the independent variable ] subject to _ even _ initial data , , its solution will also remain an even function of at all times , i.e. 
, , .in particular , _if_they are smooth enough , these solutions will satisfy the symmetry conditions thus , system with even initial data ( which is consistent with the axisymmetry assumption ) and subject to boundary conditions is _ not _ an over - determined problem .these observations will be used in the next section to construct a spectral discretization of equation .after introducing the ansatz where and , system together with the boundary conditions takes the form of a constrained eigenvalue problem [ eq : evalp ] where the operator is defined in .the eigenvalues and the eigenfunctions characterize the stability of hill s vortex to axisymmetric perturbations .in this section we describe the numerical approach with a focus on the discretization of system and the solution of the resulting algebraic eigenvalue problem . we will also provide some details about how this approach has been validated .we are interested in achieving the highest possible accuracy and , in principle , eigenvalue problems for operators defined in the continuous setting on one - dimensional ( 1d ) domains can be solved with machine precision using chebfun . however , at present , chebfun does not have the capability to deal with singular integral operators such as .we have therefore implemented an alternative hybrid approach relying on a representation of the operator in a trigonometric basis in which kernel singularities are treated analytically and chebfun is used to evaluate the remaining definite integrals with high precision .the eigenfunctions are approximated with the following truncated series where are unknown coefficients , which satisfies exactly the boundary conditions .the interval ] andthe matrix , equation can be expressed as introducing operator defined as } , \label{eq : b}\ ] ] the constraint can be expressed as .the kernel space of this operator , , thus corresponds to the subspace of functions satisfying condition .the projection onto this subspace is realized by the operator , where is the identity matrix and is the moore - penrose pseudo - inverse of the operator .defining the restricted variable , problem transforms to where is the moore - penrose pseudo - inverse of the projector , which can now be solved using standard techniques .we note that the dimension of this problem is .an alternative approach to impose constraint is to frame as a generalized eigenvalue problem .we now offer some comments about the accuracy and validation of the computational approach described above .the accuracy of approximation of singular integrals in and was tested by applying this approach to the integral operator in which has the same singularity structure as in and in , and for which an exact formula is available , cf . .in addition , an analogous test was conducted for the tangential velocity component given by ] in the trigonometric basis .we then obtain from from which the original problem is clearly recovered when .the regularized eigenvectors corresponding to are therefore guaranteed to be smoother than the original eigenvectors ( the actual improvement of smoothness depends on the value of ) . 
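a minimal numerical realization of the constrained eigenvalue problem, equivalent to the pseudo-inverse projection described above but implemented through an orthonormal basis of the constraint's null space (a sketch, not necessarily the authors' exact implementation):

```python
import numpy as np
from scipy.linalg import eig, null_space

def constrained_eig(L, b):
    """eigenpairs of the n x n matrix L restricted to the subspace {a : b @ a = 0}.

    Z has orthonormal columns spanning ker(b); writing a = Z y reduces the
    constrained problem to the (n-1)-dimensional problem (Z^T L Z) y = lambda y.
    """
    Z = null_space(np.atleast_2d(b))
    lam, Y = eig(Z.T @ L @ Z)
    return lam, Z @ Y                    # eigenvectors lifted back to coefficient space

# small synthetic check: the recovered eigenvectors satisfy the constraint
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
lam, A = constrained_eig(L, b)
print(np.max(np.abs(b @ A)))             # constraint b @ a = 0 holds up to round-off
```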
in the next section , among other results , we will study the behavior of solutions to eigenvalue problem for a decreasing sequence of regularization parameters .in this section we first summarize the numerical parameters used in the computations and then present the results obtained by solving eigenvalue problem for different values of the regularization parameter .all computations were conducted setting in the regularization operator and using the resolutions in .we allowed the regularization parameter to take a wide range of values .we note that with the smallest values from this range regularization barely affects the eigenvalue problem even when the highest resolutions are used .therefore , these value may be considered small enough to effectively correspond to the limit . in our analysis below we will first demonstrate that , for a fixed parameter , the solutions of the regularized problem converge as the numerical resolution is refined .then , we will study the behavior of the eigenvalues and eigenfunctions as the regularization parameter is reduced . a typical eigenvalue spectrum obtained by solving problemis shown in figure [ fig : sp ] .the fact that the spectrum is symmetric with respect to the lines and reflects the hamiltonian structure of the problem .given the ansatz for the perturbations introduced in [ sec : stab ] , eigenvalues with negative imaginary parts correspond to linearly unstable eigenmodes and we see in figure [ fig : sp ] that there are two such eigenvalues in addition to two eigenvalues associated with linearly stable eigenmodes .we will refer as the first and the second to the eigenvalues with , respectively , the larger and the smaller magnitude .in addition to these four purely imaginary eigenvalues , there is also a large number of purely real eigenvalues covering a segment of the axis which can be interpreted as the _ continuous _ spectrum .the spectrum shown in figure [ fig : sp ] was found to be essentially independent of the regularization parameter and its dependence on the numerical resolution is discussed below . in this analysiswe will set . and.,scaledwidth=80.0% ] the dependence of the four purely imaginary eigenvalues on the resolution is shown in figures [ fig : imlam](a d ) , where we see that the eigenvalues all converge to well - defined limits .however , as will be discussed below , problem does not admit smooth solutions ( eigenvectors ) and therefore the convergence of eigenvalues with is only algebraic rather than spectral .thus , the numerical approximation error for an eigenvalue can be represented as for some and . using the data from figure [ fig : imlam ] to evaluate as a function of the resolution , one can estimate the order of convergence using a least - squares fit as for the first eigenvalue ( both stable and unstable ) and for the second ( both stable and unstable ) .this confirms that the first eigenvalues converge much faster than the second .the dependence of the purely real eigenvalues on the resolution is illustrated in figures [ fig : relam](a b ) .first of all , we notice that the purely real eigenvalues do not appear to converge to any particular limit as is increased and instead fill the interval of the axis with increasing density ( figure [ fig : relam](b ) ) . 
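the least-squares estimate of the algebraic convergence order mentioned above amounts to a log-log fit of the eigenvalue error against the resolution; a sketch with synthetic data of known order (the paper's computed eigenvalues are not reproduced here):

```python
import numpy as np

def convergence_order(Ns, lam_N, lam_ref):
    """estimate p in |lambda_N - lambda_ref| ~ C * N**(-p) by a least-squares
    fit of log(error) against log(N); lam_ref is assumed known (e.g. a value
    extrapolated from the finest resolutions)."""
    err = np.abs(np.asarray(lam_N, dtype=float) - lam_ref)
    slope = np.polyfit(np.log(np.asarray(Ns, dtype=float)), np.log(err), 1)[0]
    return -slope

# synthetic data with a known order p = 0.75 (illustration only, not the paper's values)
Ns = np.array([100, 200, 400, 800, 1600])
lam_ref = 0.131
lam_N = lam_ref + 0.5 * Ns**(-0.75)
print(convergence_order(Ns, lam_N, lam_ref))   # ~0.75
```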
from figure[ fig : relam](a ) we can infer that the lower and upper bounds of this interval approximately scale with the resolution as \sim n^{-0.22 } \quad \text{and } \quad \max_i \left[|\re(\lambda_i^n)|\right ] \sim n^{1.04 } , \quad i=1,\dots , n , \label{eq : lm}\ ] ] where denotes the -th eigenvalue computed with the resolution .all these observations allow us to conclude that the continuous eigenvalue problem has four purely imaginary eigenvalues and a continuous spectrum coinciding with the axis .we now move on to discuss the eigenvectors corresponding to the purely imaginary eigenvalues .the linearly unstable and stable eigenvectors are shown as functions of the polar angle for different resolutions in figures [ fig : evec_n_th](a d ) . in these figureswe only show the real parts of the eigenvectors , since given our ansatz for the perturbation , the imaginary parts do not play any role when the eigenvalues are purely imaginary .hence , below the term `` eigenvector '' will refer to .we note that , as the resolution increases , the unstable and stable eigenvectors associated with a given eigenvalue become reflections of each other with respect to the midpoint with the unstable eigenvectors exhibiting a localized peak near the rear stagnation point ( ) and the stable eigenvectors exhibiting such a peak near the front stagnation point ( ) . in figures [ fig : evec_n_th](a d ) we also observe that , for a fixed regularization parameter , the numerical approximations of eigenfunctions converge uniformly in for increasing , although this convergence is significantly slower for points close to the endpoint opposite to where the eigenvector exhibits a peak .we remark that the same behaviour of spectral approximations to eigenfunctions was also observed by .the two unstable eigenvectors and are strongly non - normal with , where and are respectively the inner product and the norm in the space , when and . consequently , the two unstable eigenvectors appear quite similar as functions of , especially near the peak ( cf . figures [ fig : evec_n_th](a b ) ) . on the other hand , the fourier spectra of their expansion coefficients shown in figures [ fig : evec_n_sp](a b )exhibit quite distinct properties .more specifically , we see that the slope of the fourier spectra for is quite different in the two cases : it is close to and for the eigenvectors associated with , respectively , the first and second eigenvalue .we emphasize however that the specific slopes are determined by the choice of the parameter in the regularizing operator and here we are interested in the relative difference of the slopes in the two cases .further distinctions between the eigenvectors associated with the first and the second eigenvalue will be elucidated below when discussing their behavior in the limit of decreasing regularization parameter .having established the convergence of the numerical approximations of the eigenfunctions with the resolution for a fixed regularization parameter , we now go on to characterize their behaviour when is decreased . unless indicated otherwise ,the results presented below were obtained with the resolution . 
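the degree of non-normality quoted above can be evaluated directly from the sine-series coefficients of two eigenvectors, since the basis functions sin(k theta) are orthogonal on [0, pi] and the common normalization factor cancels in the ratio; the coefficient vectors in the sketch below are synthetic:

```python
import numpy as np

def overlap(a, b):
    """normalized L2([0, pi]) inner product of two functions given by their
    sine-series coefficients; by orthogonality <u, v> = (pi/2) sum_k a_k b_k,
    and the pi/2 factors cancel after normalization."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# two synthetic coefficient vectors that are nearly parallel (illustration only):
# an overlap close to 1 signals a strongly non-normal pair of eigenvectors
k = np.arange(1, 201)
a = np.ones_like(k, dtype=float)
b = 1.0 + 0.05 * np.sin(0.1 * k)
print(overlap(a, b))
```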
in figures [ fig : evec_kc_th](a b ) we show the behavior of the two unstable eigenvectors near the rear and front stagnation points for different values of the regularization parameter .we see that , as this parameter is decreased , the peak near the rear stagnation point ( figure [ fig : evec_kc_th](a ) ) becomes steeper and more localized , especially for the eigenvector associated with the first eigenvalue .likewise , the oscillation of the unstable eigenvectors near the front stagnation point ( figure [ fig : evec_kc_th](b ) ) also becomes more intense and localized as decreases , although this effect is more pronounced in the case of the eigenvector corresponding to the second eigenvalue .these properties are further characterized in the plots of the fourier spectra of the two eigenvectors shown in figures [ fig : evec_kc_sp](a b ) for different values of the regularization parameter . in these plotsit is clear that , as the regularization effect vanishes ( corresponding to decreasing values of ) , the point where the slope of the spectrum changes moves towards larger wavenumbers . for approximate slopes of the coefficient spectra are , respectively , and for the eigenvectors associated with the first and second eigenvalue ( this difference of slopes may explain the different behaviors in the physical space already observed in figure [ fig : evec_kc_th](b ) ) . extrapolating from the trends evident in figures [ fig : evec_kc_sp](a b ) it can be expected that these slopes will remain unchanged in the limit . with this behaviour of the fourier coefficients , involving no decay at all for the first eigenvector and a slow decay for the second , expansion does not converge in the limit of , indicating that the stable and unstable eigenvectors do not have the form of smooth functions , but rather `` distributions '' . as regards the nature of their singularity , the slopes observed in figures [ fig : evec_kc_sp](a b ) , i.e. , and , indicate that the eigenvector associated with the first eigenvalue is consistent with the dirac delta ( whose spectral slope is also 0 ) , whereas the eigenvector associated with the second eigenvalue is intermediate between the dirac delta and the heaviside step function ( whose spectral slope is -1 ) . finally , we go on to discuss the eigenvectors associated with the purely real eigenvalues forming the continuous part of the spectrum .since , as demonstrated in figure [ fig : relam ] , for increasing resolutions different eigenvalues are actually computed in the continuous spectrum , there is no sense of convergence with .we will therefore analyze here the effect of decreasing the regularization parameter at a fixed resolution .as above , we will focus on the real parts of the eigenvectors ( with the imaginary parts having similar properties ) .to fix attention , we consider the neutrally - stable eigenvector associated with the eigenvalue . in figure[ fig : neut_th ] we show the dependence of on the polar angle with and for different values of the regularization parameter .we observe that as decreases the oscillations move away from the centre of the domain ].,scaledwidth=50.0% ]in this section we first provide a simple argument to justify the numerical results obtained in [ sec : results ] and then make some comparisons with the results of earlier studies .some properties of the eigenvectors discussed in [ sec : results ] are consequences of the `` degeneracy '' of the stability operator . 
more specifically , knowing the streamfunction field characterizing hill s vortex , the coefficient of the derivative term on the rhs in can be expressed as which vanishes at the endpoints . to illustrate the effect of this degeneracy we will consider a simplified model problem obtained from by dropping the integral terms and rescaling the coefficients , so that we obtain \label{eq : sindudt}\ ] ] for some .we now perform a change of variables defined through , so that where the lower integration bound was chosen to make the transformation antisymmetric with respect to the midpoint of the interval ] to the real line . introducing this change of variables in equation , we obtain it then follows from and that equation admits a continuous spectrum coinciding with the entire complex plane with the eigenfunctions given by . when the eigenvalues are restricted to the real line , the corresponding eigenfunctions exhibit oscillations with wavelengths decreasing as , as was also observed in [ sec : results ] for the neutrally - stable modes , cf .figure [ fig : neut_th ] . on the other hand , for purely imaginary eigenvalues , where , the corresponding eigenfunctions take the form which for has the properties and consistent with the singular behaviour of the unstable eigenmodes observed in [ sec : results ] , cf .figures [ fig : evec_n_th ] and [ fig : evec_kc_th ] .thus , one can conclude that the singular structure of the eigenvectors is a consequence of the degeneracy of the coefficient in front of the derivative term in and some qualitative insights about this issue can be deduced based on the simplified problem .we add that similar problems are known to arise in hydrodynamic stability , for example , in the context of the inviscid rayleigh equation describing the stability of plane parallel flows . in that problem, however , the singularity appears inside the domain giving rise to critical layers with locations dependent on the eigenvalues .we now return to a remark made in introduction , namely , that hill s vortices represent a one - parameter family of solutions parameterized by the constant , or equivalently , by the translation velocity , cf . .we remark that the stability operator defined in is linear in which implies that eigenvalues ( and hence also the growth rates ) will be proportional to ( or ) .this is also consistent with the observations made by .next we compare our findings with the results of which concerned essentially the same problem .we remark that these results were verified computationally by .the exponential growth rate of the unstable perturbations predicted by was ( using our present notation ) which is in excellent agreement with the first unstable eigenvalue found here , cf .figure [ fig : imlam](a ) .similar agreement was found as regards the structure of the most unstable perturbation also found it to have the form of a localized spike at the rear stagnation point ( the fact that this spike had a finite width seems related to the truncation of the infinite system of ordinary differential equations ) .it appears that the second unstable mode , cf .figure [ fig : evec_n_th](b ) , was undetected by the analysis of due to its smaller growth rate , cf .figure [ fig : imlam](b ) . 
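the change of variables invoked for the model problem can be checked symbolically; the specific form eta(theta) = log tan(theta/2) (vanishing at the midpoint and mapping (0, pi) onto the real line) and the model eigenrelation lambda u = sin(theta) du/dtheta are assumed readings of the garbled formulas above, so the sketch below is illustrative only:

```python
import sympy as sp

theta, lam = sp.symbols("theta lambda")

# candidate transformation: eta(theta) = log(tan(theta/2)); it vanishes at the
# midpoint theta = pi/2 and maps the interval (0, pi) onto the whole real line
eta = sp.log(sp.tan(theta / 2))

# d(eta)/d(theta) = 1/sin(theta), so sin(theta) d/dtheta turns into d/deta
print(sp.simplify((sp.diff(eta, theta) - 1 / sp.sin(theta)).rewrite(sp.tan)))    # 0

# u(theta) = exp(lam*eta) = tan(theta/2)**lam then satisfies the degenerate model
# relation sin(theta) * u'(theta) = lam * u(theta) identically
u = sp.exp(lam * eta)
residual = sp.sin(theta) * sp.diff(u, theta) - lam * u
print(sp.simplify(residual.rewrite(sp.tan)))                                      # 0
```

depending on whether the exponent is real or purely imaginary, tan(theta/2)**lam either blows up at one endpoint and vanishes at the other, or oscillates ever faster toward both endpoints, which is the dichotomy between localized and oscillatory eigenfunctions described above.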
to close this section , we comment on the continuous part of the spectrum which was reported in [ sec : results ] , cf .figures [ fig : sp ] and [ fig : relam ] .such continuous spectra often appear in the study of infinite - dimensional hamiltonian , or more generally , non self - adjoint systems , where nontrivial effects may arise from its interaction with the discrete spectrum . in the present problem , however ,the results of and indicate that the observed instability has the form of a purely modal growth which can be completely explained in terms of the discrete spectrum and the associated eigenfunctions .moreover , this is confirmed by the very good agreement between the growth rate of the instability determined by and the value of the first unstable eigenvalue obtained in our study .these observations thus allow us to conclude that there is no evidence for any role the continuous spectrum might play in the observed instability mechanism .in this study we have considered the linear stability of hill s vortex with respect to axisymmetric circulation - preserving perturbations .this was done using the systematic approach of to obtain an eigenvalue problem characterizing the linearized evolution of perturbations to the shape of the vortex boundary .recognizing that the euler equation describing the evolution of discontinuous vorticity distributions gives rise to a _ free - boundary _ problem , our approach was based on shape differentiation of the contour - dynamics formulation in the 3d axisymmetric geometry . as such, it did not involve the simplifications invoked in the earlier studies of this problem by which were related to , e.g. , approximate only satisfaction of the kinematic conditions on the vortex boundary .the resulting singular integro - differential operator was approximated with a spectral method in which the integral expressions were evaluated analytically and using chebfun .we considered a sequence of regularized eigenvalue problems featuring smooth eigenfunctions for which the convergence of the numerical approximation was established .then , the original problem was recovered in the limit of vanishing regularization parameter . since in the limit the eigenfunctions were found to be distributions , the convergence of this approach with the resolution was not very fast , but it did provide a precise characterization of their regularity in terms of the rate of decay of the fourier coefficients in expansion . 
following this procedure we showed that the stability operator has four purely imaginary eigenvalues , associated with two unstable and two stable eigenmodes , in addition to a continuous spectrum of purely real eigenvalues associated with neutrally - stable eigenmodes .the two unstable eigenmodes are distributions in the form of infinitely sharp peaks localized at the rear stagnation point and differ by their degree of singularity .the stable eigenmodes have the form of similar peaks localized at the front stagnation point .on the other hand , the neutrally - stable eigenvectors have the form of `` wiggles '' concentrated in a vanishing neighbourhood of the two stagnation points with the number of oscillations increasing with the eigenvalue magnitude .our results are consistent with the findings from the earlier studies of this problem by .we emphasize that these earlier studies did not , however , solve the complete linear stability problem and only considered the linearized evolution of some prescribed initial perturbation ( they can be therefore regarded as evaluating the action of an operator ( matrix ) on a vector , rather than determining all of its eigenvalues and eigenvectors ) .these studies did conclude that initial perturbations evolve towards a sharp peak concentrated near the rear stagnation point .thus , our present findings may be interpreted as sharpening the results of these earlier studies . in particular ,excellent agreement was found with the growth rate of the unstable perturbations found by .the findings of the present study lead to some intriguing questions concerning the initial - value problem for the evolution of hill s vortex with a perturbed boundary .it appears that , in the continuous setting without any regularization , this problem may not be well - posed , in the sense that , for generic initial perturbations , the vortex boundary may exhibit the same poor regularity as observed for the unstable eigenvectors in [ sec : results ] ( i.e. , be at least discontinuous ) . while it is possible that the nonlinearity might exert some regularizing effect , this is an aspect of the problem which should be taken into account in its numerical solution .a standard numerical approach to the solution of such problems is the axisymmetric version of the `` contour dynamics '' method in which the discretization of the contour boundary with straight segments or circular arcs combined with an approximation of the singular integrals provide the required regularizing effect . on the other hand ,the singular structure of the solution can be captured more readily with higher - order methods , such as the spectral approach developed here .there is a number of related problems which deserve attention and will be considered in the near future .a natural extension of the questions addressed here is to investigate the stability of hill s vortex with respect to non - axisymmetric perturbations , as already explored by .another interesting question is to consider the effect of swirl .hill s vortex is a member of the norbury - fraenkel family of inviscid vortex rings and their stability remains an open problem .it was argued by that the highly localized nature of the boundary response of hill s vortex to perturbations is a consequence of the presence of a stagnation point . 
since the norbury - fraenkel vortices other than hill s vortex do not feature stagnation points on the vortex boundary , it may be conjectured that in those cases eigenfunctions of the stability operator will be smooth functions of the arclength coordinate .therefore , in the context of the linear stability problem , the family of the norbury - fraenkel vortex rings may be regarded as a `` regularization '' of hill s vortex analogous and alternative to our approach developed in [ sec : numer ] , cf . . the different problems mentioned in this paragraph , except for the effect of swirl ,can be investigated using the approach developed by and also employed in the present study .as regards the stability of hill s vortex with swirl , the difficulty stems from the fact that , to the best of our knowledge , there is currently no vortex - dynamics formulation of the type available for axisymmetric flows with swirl .our next step will be to analyze the stability of the norbury - fraenkel vortex rings to axisymmetric perturbations .finally , it will also be interesting to compare the present findings with the results of the short - wavelength stability analysis of .in particular , one would like to know if there is any overlap between the two stability analyses and , if so , whether they can produce comparable predictions of the growth rates .the authors are thankful to toby driscoll for helpful advice on a number of chebfun - related issues and to dmitry pelinovsky for his comments on the singular structure of the eigenfunctions .anonymous referees provided constructive comments which helped us to improve the paper . b.p .acknowledges the support through an nserc ( canada ) discovery grant .1988 nonlinear stability bounds for inviscid , two - dimensional , parallel or circular flows with monotonic vorticity , and the analogous three - dimensional quasi - geostrophic flows ._ j. fluid mech . _ * 191 * , 575581 .
|
we consider the linear stability of hill s vortex with respect to axisymmetric perturbations . given that hill s vortex is a solution of a free - boundary problem , this stability analysis is performed by applying methods of shape differentiation to the contour dynamics formulation of the problem in a 3d axisymmetric geometry . this approach allows us to systematically account for the effect of boundary deformations on the linearized evolution of the vortex under the constraint of constant circulation . the resulting singular integro - differential operator defined on the vortex boundary is discretized with a highly accurate spectral approach . this operator has two unstable and two stable eigenvalues complemented by a continuous spectrum of neutrally - stable eigenvalues . by considering a family of suitably regularized ( smoothed ) eigenvalue problems solved with a range of numerical resolutions we demonstrate that the corresponding eigenfunctions are in fact singular objects in the form of infinitely sharp peaks localized at the front and rear stagnation points . these findings thus refine the results of the classical analysis by . vortex flows vortex instability ; mathematical foundations computational methods ;
|
hyper - dense small - cell deployments are expected to play a pivotal role in delivering high capacity and reliability by bringing the network closer to users . however , in order to make hyper - dense deployments a reality , enhancements including effective interference management , self - organization , and energy efficiency are required . given that large - scale deployments composed of hundreds or thousands of network elements can increase the energy consumption substantially , the need for energy efficiency ( _ green communications _ ) has been recognized by the cellular communications industry as an important item in research projects and standardization activities .initial attempts to improve the energy efficiency in cellular networks were oriented towards minimizing the power radiated through the air interface , which in turn reduces the electromagnetic pollution and its potential effects on human health. however , most of the energy consumption ( between 50% to 80% ) in the radio access network takes place in base stations ( bss ) and it is largely independent of the bss load .since cellular networks are dimensioned to meet the service demand in _ the busy hour _ ( i.e. , peak demand ) , it is expected that , under non - uniform demand distributions ( both in space and time ) , a substantial portion of the resources may end up being underutilized , thus incurring in an unnecessary expenditure of energy .the problem may become worse in many of the scenarios foreseen for 5 g , presumably characterized by hyper - dense small - cell deployments , hierarchical architectures , and highly heterogeneous service demand conditions .therefore , the idea of switching off lightly loaded base stations has been considered recently as a promising method to reduce the energy consumption in cellular networks .this framework is referred to as cell switch - off ( cso ) and it is focused on determining the largest set of cells that can be switched off without compromising the quality - of - service ( qos ) provided to users .unfortunately , cso is difficult to carry out due to the fact that it represents a highly challenging ( combinatorial ) optimization problem whose complexity grows exponentially with the number of bss , and hence , finding optimal solutions is not possible in polynomial time .moreover , the implementation of cso requires coordination among neighbor cells and several other practical aspects , such as coverage provision and the need for minimizing the number of ( induced ) handovers and transitions . in practice, optimizing the number of transitions , as well as the time required for them , is advisable because switching on / off bss is far from being a simple procedure , and indeed , this process must be gradual and controlled .moreover , a large number of transitions could result in a high number of handovers with a potentially negative impact on qos .although cso is a relatively young research topic , a significant amount of contributions has been made .hence , an exhaustive survey is both out of the scope and not feasible herein . 
instead , a literature review including , in the opinion of the authors , some of the most representative works is provided .thus , in the comparative perspective shown in table [ tablerelatedwork ] , the following criteria have been considered : [ cols="^,^,^,^,^,^ " , ] [ tablenotation ]in order to study the tradeoffs in cso , the use of multiobjective optimization has been considered .multiobjective optimization is the discipline that focuses on the resolution of the problems involving the simultaneous optimization of several conflicting objectives , and hence , it is a convenient tool to investigate cso , where the two fundamental metrics , energy consumption and network capacity , are in conflict .the target is to find a subset of _ good _ solutions from a set according to a set of criteria , with cardinality greater than one . in general , the objectives are in conflict , and so , improving one of them implies worsening another .consequently , it makes no sense to talk about a single global optimum , and hence , the notion of an optimum set becomes very important . a central concept in multiobjectiveoptimization is pareto efficiency .a solution has pareto efficiency if and only if there does not exist a solution , such that dominates .a solution is preferred to ( dominates ) another solution , ( ) , if is better than in at least one criterion and not worse than any of the remaining ones .the set of pareto efficient solutions is called optimal nondominated set and its image is known as the optimal pareto front ( opf ) . in multiobjective optimization , it is unusual to obtain the opf due to problem complexity ; instead , a near - optimal or estimated pareto front ( pf ) is found .readers are referred to for an in - depth discussion .the following performance metrics have been considered is not explicit , however , it is important to note that all of them depend on , i.e. , the network topology . ] : * _ the number of active cells _ ( ) . under the full - load assumption ,energy consumption is proportional to the number of active cells : * _ average network capacity _( ) .this metric is based on the expected value of the spectral efficiency at area element level .hence , the effect of the spatial service demand distribution ( ) must be considered .the metric is defined as follows : \odot \mathbf{n } \right]\cdot \mathbf{1}. \label{eq : cso_f2}\ ] ] the vector corresponds to the _ weighted _ spectral efficiency of each area element .the idea is to give more importance to the network topologies ( s ) that provide better aggregate capacity ( ) to the areas with higher service demand . in ( [ eq : cso_f2 ] ) , ( the number of area elements ) is used to normalize the obtained capacity to the uniform distribution case , i.e. , .the vector contains the inverse of the sum of each column in , i.e. , the number of pixels served by each cell .it is assumed that each user is served by one cell at a time .this vector is used to distribute the capacity of each cell evenly over its coverage area , i.e. 
, the bandwidth is shared equally by the area elements belonging to each cell . this improves the fairness in the long run similar to the proportional fairness policy that tends to share the resources equally among users as time passes . this fairness notion results in decreasing the individual rates as the number of users increases . this effect is also captured by as the bandwidth per area element is inversely proportional to the size of the cell . * _ cell edge performance _ ( ) . the percentile of the pixel rate cumulative distribution function ( cdf ) is commonly used to provide an indicator for cell edge performance . a vector with the weighted average rate at area element level can be obtained as in ( [ eq : cso_f3 ] ) ; then , the percentile 5 is obtained from the vector , which is a sorted ( ascending order ) version of . * _ uplink power consumption _ ( ) . in order to provide an estimate of the uplink power consumption of any network topology , a fractional compensation similar to the open loop power control ( olpc ) used in long term evolution ( lte ) is considered . it is given by , where is a design parameter that depends on the allocated bandwidth and the target signal - to - noise ratio ( snr ) . the evaluation settings ( table [ tableevaluationsetting ] ) are summarized as follows :
* cell selection ( ) : highest rx . power
* rx . power ( ) : -123 dbm
* small scale fad . : as in
* link performance ( ) : shannon s formula
* frac . ( ) : 1.00
* cov . ( ) : 0.02
* max . path loss ( ) : 163 db
* min . rate ( ) : 400 kbps
* target qos ( ) : 97.5
* user distribution : according to
* traffic model : full buffers
* sinr ( ) : -7.0 db
* qos checking interval :
* population size : 100
* crossover prob . : 1.00
* type of var . : discrete
* mutation prob . :
* termination crit . :
[ tableevaluationsetting ] the first part of this section is devoted to illustrating some coverage aspects and provides insights into the potential impact of the transmit power on the performance of cso . fig . [ fig : coveragemaps ] provides a qualitative perspective . the figure shows the size of the maximum coverage ( points in which the received ps power is greater than ) for the central bss ( ) for two different transmit powers ( and ) . for the sake of clarity , shadowing is not considered . a quantitative description is shown in fig . [ fig : coveragevstxpower ] , which indicates the percentage of the target area ( ) that can be covered with different values of . note for instance that , starting from ( of coverage ) , needs to be increased more than eight times ( up to ) to double the coverage ( up to ) , while reaching of coverage requires less than four times the power required for of coverage . obviously , this depends on the propagation model , but the message is that this analysis should be taken into account during the design phase of any cso strategy in order to determine appropriate values for . in the results shown in figs . [ fig : serversoverlapping2 ] and [ fig : serversoverlapping11 ] , all the cells are active and transmit at the same . fig . [ fig : serversoverlapping2 ] indicates the average number of bss that can be _ detected _ as a function of ( the average is taken over the whole coverage area ) . fig . [ fig : serversoverlapping11 ] shows the percentage of the coverage area in which bss ( servers ) are _ heard _ with a quality ( sinr ) within below the one of the best server . from these results , it becomes clear that the choice of has a big influence on the size of the feasible set in ( [ op : cso ] ) , i.e.
, the set of s for which constraint [ op : cso : c1 ] is fulfilled .hence , the impact of is significant , mainly in low load conditions .+ first , the results regarding the solution of ( [ op : cso ] ) for the objectives functions introduced in subsection [ sec : propframework_probform_fl ] ( ) are provided .[ fig : pareto_front_moea_mda ] shows the resulting pareto front by solving ( [ op : cso ] ) , when in ( [ op : cso : main ] ) , i.e. , the joint optimization of the number of active bs ( ) and the average network capacity ( ) , by means of moeas ( algorithm nsga - ii ) and algorithm [ alg : cso : mindist ] .as expected , the use of evolutionary optimization provides _better _ solutions than algorithm [ alg : cso : mindist ] , i.e. , greater values of for the same value of .however , it is important to recall that the solutions obtained through algorithm [ alg : cso : mindist ] feature the minimum distance property ( see section [ sec : propframework_probform_mopf ] ) , and that , algorithm [ alg : cso : mindist ] ( ) is , in case of small - to - moderate cluster size , less complex than nsga - ii ( , : population size ) . a quantitative perspective of such performance gap is shown in fig .[ fig : pareto_front_moea_mda_gains ] .the blue / circle pattern corresponds to the gain in terms of for each value of indicated in the left vertical axis as ` average capacity gain ' . as a result of the combinatorial nature of nsga - ii ,the gains are higher when network topologies are composed of less bss , i.e. , small values of .the red / square pattern shows the capacity gain per cell , indicated in the right vertical axis .it can be seen that the gain of using moea is around in topologies with less than 20 active bss ( ) .hence , the use of moeas implies better network topologies in cases where the computational complexity can be afforded . the resulting pareto front by solving ( [ op : cso ] ) , for ( ) and ( ) in ( [ op : cso : main ] ) , are shown in figs .[ fig : pareto_front_p5_nel ] and [ fig : pareto_front_upc_nel ] , respectively .the first case illustrates the impact of cso on cell edge performance . note that while fig .[ fig : pareto_front_moea_mda ] shows a fairly linear growth of the average network capacity with the number of active cells , fig .[ fig : pareto_front_p5_nel ] indicates that cell edge performance ( represented by ) is substantially improved only by network topologies featuring a higher number of active cells ( ) .this result clearly suggests that mechanisms for intercell interference coordination ( icic ) should be applied together with cso in cases of low load conditions to improve the qos of cell edge users .[ fig : pareto_front_upc_nel ] illustrates the impact of cso on the power consumption of users ( uplink ) .as it was mentioned , the goal is not to determine exact uplink power consumption figures , but to create means for comparison among network topologies with different number of active bss .thus , a normalized version of ( see [ eq : cso_fuplpc ] ) is considered .as it can be seen , it turns out that the relationship between the number of active bss and the resulting uplink ( open - loop - based ) power consumption is highly nonlinear , being the energy expenditure considerably high in sparse network topologies ( ) .hence , in scenarios where the lifetime of devices should be maximized ( sensor networks ) , the use of cso is not clear .recall that uplink link budget is also considered as a coverage criterion . 
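the pareto fronts discussed above are simply the nondominated subsets of the evaluated network topologies ; the short sketch below expresses that filtering step for the two objectives used here ( a generic illustration , not the nsga - ii machinery actually used to generate the fronts ) :

```python
def nondominated(candidates):
    """Pareto-efficient subset of candidate topologies given as (active_cells, capacity) pairs:
    fewer active cells (lower energy) is better, higher average capacity is better."""
    def dominates(a, b):
        # a dominates b: no worse in both criteria and strictly better in at least one
        return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

    return sorted(c for c in candidates
                  if not any(dominates(o, c) for o in candidates if o is not c))

# (number of active cells, average network capacity in arbitrary units)
pool = [(10, 5.1), (12, 5.0), (12, 6.3), (20, 8.9), (20, 8.1), (30, 9.0)]
print(nondominated(pool))   # [(10, 5.1), (12, 6.3), (20, 8.9), (30, 9.0)]
```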
to close this subsection , fig .[ fig : cell_coupling_based_analysis ] shows the results corresponding to the solution of ( [ op : cso ] ) for the objective functions introduced in section [ sec : propframework_probform_fl ] ( ) . according to definition [ def : netcap ] , and given the spatial demand distribution ( see fig . [fig : std ] ) , and yield a demand volume ( ) equal to .the resulting load sharing patterns ( obtained by means of algorithm [ alg : iteraproxloads ] ) for and are shown in fig .[ fig : cv_loads ] .note that increasing results in higher load dispersion . to quantify this , fig .[ fig : cv_pc_load ] shows the impact of on the coefficient of variation ( cv ) of the loads ( ) . the associated load - dependent power consumption ( ) is also indicated .note that and are maximized when and , respectively .as expected , the load dependent power consumption ( ) is maximized when , i.e. , .the dependence of on is explained by the strong nonlinearity of ( [ eq : neloadstatistically_2 ] ) and the fact that , from the load - coupling point of view , , and hence , no change is expected after .the results shown in figs .[ fig : cv_loads ] and [ fig : cv_pc_load ] are obtained for , i.e. , when all the bss are active .the joint optimization of and is shown in fig . [fig : cv_pc_mo ] .as it can be seen , there is a conflicting relationship between them .the attributes of the _ extreme _ solutions ( and ) in the pareto front are indicated .there is also a certain correlation between the objectives ( and ) and the number of active cells ( nac ) .the topology with the lowest energy consumption ( ) requires less active bss but it has the highest load dispersion ( ) .note the difference between the highest and lowest loaded bs in . in contrast , the best load balancing ( ) involves more active bss , and hence , worst values of .a comparison among solutions obtained through each ici model , fl and lc , is provided next . as indicated earlier , solving ( [ op : cso ] )results in a set of pareto efficient ( nondominated ) network topologies that are specific for either a spatial service demand distribution ( : full - load ) or a service demand conditions , i.e. , spatial demand distribution plus volume ( : load - coupling ) . recall that is obtained by joint optimizing and in ( [ op : cso ] ) for a given spatial demand distribution ( ) , while obtaining involves the joint optimization of and in ( [ op : cso ] ) for a given and ( volume ) .note that , the ` full - load ' analysis is volume - independent , and hence , it does not require specify ( full load is assumed for the active cells ) .+ thus , in order to evaluate these solutions by means of system level simulations , it is initially assumed that at each qos checking interval ( evaluation parameters are shown in table [ tableevaluationsetting ] ) , the ( nondominated ) network topologies of each set ( and ) are all applied and evaluated .the goal is to create qos statistics for each network topology and load condition .then , the network topology that is able to provide the desired qos ( of users are satisfied of time ) is selected and applied ( as indicated in subsection [ sec : propframework_conceptualdesign ] ) .the comparative assessment is shown in fig .[ fig : comparative ] , where the legends indicate the set the applied network topology belongs to ( or ) and the ici model ( fl or lc ) used in the system level trials . fig .[ fig : comparative_c_nc_load ] shows the load - dependent power consumption of each network topology . 
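the load - coupling figures above rely on an iterative estimation of the average cell loads ; a minimal fixed - point sketch of that idea is given below , with a generic linear coupling term standing in for the paper s load expression ( [ eq : neloadstatistically_2 ] ) , which is not reproduced here :

```python
import numpy as np

def estimate_loads(demand, coupling, max_iter=200, tol=1e-6):
    """Fixed-point estimation of average cell loads under load-coupled interference.

    demand[c]: load cell c would carry with no interference; coupling[c, j] >= 0: how much
    the load of cell j inflates the resource consumption of cell c (a generic stand-in for
    the paper's coupling terms). Loads are clipped to 1, since a cell cannot exceed full load."""
    loads = np.zeros(len(demand))
    for _ in range(max_iter):
        prev = loads.copy()
        for c in range(len(demand)):
            # Gauss-Seidel style sweep: reuse loads already updated in this pass
            loads[c] = min(1.0, demand[c] * (1.0 + coupling[c] @ loads))
        if np.max(np.abs(loads - prev)) < tol:
            break
    return loads

coupling = 0.15 * (np.ones((3, 3)) - np.eye(3))    # three mutually interfering cells
print(estimate_loads(np.array([0.3, 0.5, 0.8]), coupling))
```

a load estimate of this kind is what feeds the load - dependent power consumption comparison in fig . [ fig : comparative_c_nc_load ] .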
clearly , from the cso point of view , the topologies in result in lower power consumption as they feature fewer active bss ( nac is indicated in green boxes ) , given that the load - coupling model predicts better sinr than full - load ( see fig . [ fig : sinr_bound ] ) , and hence , network capacity is favored . however , as increases , both models become somewhat equivalent as the loads tend to 1 ; as a result , the energy consumption is quite similar . figs . [ fig : comparative_c_nc_load_dot2 ] and [ fig : comparative_c_nc_load_dot6 ] show the qos level ( in terms of the number of satisfied users ) that is obtained with the selected solution of each set for and , respectively . the results make evident that the performance of the network topologies in is severely degraded if the ici levels become higher than the ones for which they were calculated , see ( full ici ) . indeed , the performance of these solutions is sensitive to variations from the mean values ( that happens when considering snapshots ) in moderate - to - high load conditions , even when the load - coupling based ici is considered , as seen in fig . [ fig : comparative_c_nc_load_dot6 ] for . on the other hand , the network topologies in show consistent performance when they are evaluated under full load ( ) , and obviously , provide an even better performance under load - coupling ( ) for both demand volumes . hence , given that the energy consumption gain is in the order of in the best case , it can be concluded that the full load model provides a competitive and somewhat _ safer _ energy - saving vs. qos tradeoff in the context of cso . the proposed cso scheme can use either approach . summarizing : * the mo for fl , i.e. , and in ( [ op : cso ] ) , is volume - independent ; offline system level simulations are required for each load condition ( ) , and the energy saving is smaller in comparison to lc . * the mo for lc , i.e. , and in ( [ op : cso ] ) , is volume - dependent ; different offline optimization procedures are required for each load condition ( ) , and the energy saving is larger in comparison to fl . in order to provide a wide perspective of the merit of the cso framework presented herein , several recent / representative cso schemes have been used as baselines . obviously , an exhaustive comparison is not feasible . however , the idea is to illustrate some _ pros _ and _ cons _ of different approaches and the impact of some design assumptions . the following benchmarks are considered : * _ cell zooming _ : it was proposed in . the idea is to sequentially switch off bss starting from the lowest loaded one . the algorithm ends when a cell can not be switched off because at least one user can not be served . * _ improved cell zooming _ : this scheme is presented in and it is similar to the one in , but it includes a more flexible termination criterion that allows more cells to be checked before terminating , and so , more energy - efficient topologies can be found . * _ load - and - interference aware cso _ : the design of this cso scheme presented in takes into account both the received interference and the load of each cell to create a ranking that is used to sequentially switch off the cells whose load is below a certain threshold . * _ set cover based cso _ : the cso scheme proposed in relies on the idea of switching on bss sequentially according to a certain sorting criterion . in this work , the sorting criterion is based on the number of users a cell can serve in the snr regime . the performance comparison is shown in fig . [ fig : per_comp ] .
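the common logic behind the cell - zooming style benchmarks listed above can be captured in a few lines ; the feasibility predicate below is a placeholder for the coverage / qos checks of the cited schemes , so this is only a schematic reading of them :

```python
def greedy_switch_off(cell_loads, users, can_be_served):
    """Schematic cell-zooming baseline: try to switch off cells starting from the lowest-loaded one.

    cell_loads: dict cell_id -> load in [0, 1]; can_be_served(active, users) is a placeholder
    predicate returning True when every user can still be associated under the coverage/QoS rules."""
    active = set(cell_loads)
    for cell in sorted(cell_loads, key=cell_loads.get):   # lowest-loaded cells first
        if can_be_served(active - {cell}, users):
            active.discard(cell)                          # accept the switch-off
        else:
            break                                         # plain variant stops here; the "improved"
                                                          # variant keeps checking further cells
    return active

loads = {"A": 0.10, "B": 0.70, "C": 0.05, "D": 0.40}
feasible = lambda active, users: len(active) >= 2         # toy stand-in for the real checks
print(sorted(greedy_switch_off(loads, users=None, can_be_served=feasible)))   # ['B', 'D']
```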
to make the comparison fair ,the full - load ici conditions are considered .[ fig : bench_nel ] and [ fig : bench_qos ] show the average number of active cells and qos ( for different service demand volumes ) , respectively .as it can be seen , the best energy saving is obtained by , although at the expense of qos degradations .this is due to the fact that in , users can be easily put in outage .in contrast , cso schemes such as provide the desired qos ( as long as ) since cso decisions require associating all users . however , this results in an increment in the average number of active cells with respect to .the schemes labeled as ` mda ' and ` moea ' correspond to the ( infeasible ) dynamic selection of network topologies from the sets obtained through algorithms [ alg : cso : mindist ] and nsga - ii , respectively , which are shown as reference .the performance of the proposed mo cso is indicated by red boxes and labeled as ` mo cso ' .as it can be seen , the proposed scheme provides an excellent tradeoff between the required number of active cells and the obtained qos , especially when where the performance ( qos ) of other cso is compromised .however , the most significant enhancement in the proposed scheme is its feasibility .[ fig : bench_bars ] shows four performance indicators : transitions , handovers , qos , and nac .given that the network topologies are calculated offline , they can be evaluated extensively by means of system level simulations ( under a wide range of coverage criteria and conditions ) to further guarantee their real - time performance , i.e. , the operator can select topologies with more active cells rather than the ones which strictly need to guarantee qos .therefore , the selected network topologies can be applied ( without real - time complexity ) during periods of time in which service demand is described by ; as a result , no transitions or handovers are induced due to cso .hence , feasible yet effective cso performance is achieved . as it was shown earlier ,the proposed framework is generic , flexible , and no assumption are made in regards to , for instance , the cellular layout or objective functions ; as a result , the framework is also suitable for small - cell deployments where irregular topologies and heterogeneous demand conditions are expected . to close this section , a complexity overview of the optimization algorithms is provided . according to , the complexity of nsga - ii is , where and correspond to the population size and the number of objective functions , respectively . in our case , and can be set depending on the scale of the problem . however , there is a consensus about the size of the population when using genetic algorithms , such as nsga - ii , and it is considered that during calibration populations of 20 up to 100 individuals can be used .values greater than 100 hardly achieve significant gains and the same global convergence is obtained .regarding algorithm [ alg : cso : mindist ] , it s complexity is , where is the number of cells in the network . in practice , which is a significant reduction in terms of complexity that comes at expense of some performance .in evolutionary algorithms , a termination criterion is usually defined / need .one metric used to measure the level of convergence is the the _ hypervolume _ indicator .it reflects the size of volume dominated by the estimated pareto front . 
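for the two - objective case , the hypervolume indicator mentioned above reduces to an area computation , and the termination rule becomes a check on its relative improvement ; the defaults below mirror the stopping rule stated next , while the reference point and front values are arbitrary examples :

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front with both objectives to be minimized.

    front: nondominated (f1, f2) points; ref: reference point worse than every point."""
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):                 # ascending f1, hence descending f2
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

def converged(hv_history, window=20, rel_tol=1e-5):
    """True when the relative hypervolume gain over the last `window` generations is below rel_tol."""
    if len(hv_history) <= window:
        return False
    old, new = hv_history[-window - 1], hv_history[-1]
    return (new - old) <= rel_tol * max(new, 1e-12)

# (active cells, -capacity): both to be minimized; reference point chosen arbitrarily
print(hypervolume_2d([(10, -5.1), (12, -6.3), (20, -8.9)], ref=(40, 0.0)))   # 238.6
```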
in this work, the seacrh is terminated if the improvement in the hypervolume is smaller than a threshold ( 0.001% ) after a certain number of generations ( in this study , 20 ) .finally , crossover and mutation probabilities are set to and ( one mutation per solution , on average ) , respectively , as indicated in table [ tableevaluationsetting ] .cso is a promising strategy that allows significant energy saving in cellular networks where both radio access network ( capacity supply ) and service demand are heterogeneous . in this article 1 ) cso has been carefully analyzed considering coverage criteria , ici models , and practical aspects , such as network - initiated handovers and on - off / off - on transitions , and 2 ) a novel mo - based cso scheme has been introduced .the proposed solution succeeds in minimizing the number of transitions and handovers caused by the cso operation and it is able to operate without need for heavy computational burden as the core processing is done offline .in addition , a cluster based - operation have been proposed to allow for semi - distributed implementation .the results show that , when compared with previous proposals , the proposed solution provides competitive performance in terms of qos and energy saving while offering clear advantages from the feasibility perspective as it reduces the number of handovers and transitions .the results also highlight the importance of considering coverage criteria ( in downlink and uplink ) and pay attention to the selection of operational parameters , e.g. , the power allocated to ps ( typically used as criterion for coverage ) .a comparative analysis between ici models ( full - load and load - coupling ) indicates that the full - load assumption is a _ safe _ approach in the context of cso as it provides natural protection against deviations from average load values that are 1 ) used as input of the algorithm , and 2 ) inherent of real time operation , i.e. 
, discrete realizations of users .the impact of cso on the power consumption of ue has also been studied .the results indicate that sparse topologies ( few active bss ) have a significant impact on uplink power consumption , and hence , cso is not suitable for scenarios with energy - sensitive devices such as sensor networks .research on topology adaptation has still a long way until its maturity .feasible and effective techniques for traffic pattern recognition to complement cso are still in infancy .it is our strong belief that cso , as a promising approach to _ greener _ networks , is a key piece of a more general set of capabilities that will appear in 5 g networks , also including promizing and disruptive concepts , such as downlink uplink decoupling ( dude ) .dude , where user equipment can transmit and receive to and from different base stations , is indeed , a clear research direction from the perspective of cso , where both uplink and downlink could be considered as _independent _ networks .the authors would like to thank tamer beitelmal from carleton university , and dr .ngoc dao from huawei canada research centre , for their valuable feedback .this work was supported by the academy of finland ( grants 287249 and 284811 ) .mario garca - lozano is funded by the spanish national science council through the project .in order to estimate the average load vector ( ) , algorithm [ alg : iteraproxloads ] is proposed .basically , the estimation of the average load at each cell ( ) is refined through each iteration comprising lines [ alg1_l4 ] to [ alg1_outerloop2 ] . in line[ alg1_load ] , the function load ( ) estimates each , based on ( [ eq : neloadstatistically_2 ] ) , from the values of previous iterations ( where ) and the ones that have been just updated in the current iteration ( this _ fast _ update is done in line [ alg1_innerloop2 ] ) .[ fig : iter_load_alg ] illustrates the motivation and performance of algorithm [ alg : iteraproxloads ] .basically , the use of load coupling provides a more accurate estimation of ici levels in the network as shown in fig .[ fig : sinr_bound ] . note that the use of full icic represents a more _ conservative _ approach .[ fig : iter_load ] shows that algorithm [ alg : iteraproxloads ] only requires few iterations to converge and that this depends on the starting point , but in any case convergence is fast .s. samarakoon , m. bennis , w. saad , and m. latva - aho , `` dynamic clustering and on / off strategies for wireless small cell networks , '' _ ieee transactions on wireless communications _15 , pp . 21642178 , march 2016 .m. u. jada , m. garca - lozano , and j. hmlinen , `` energy saving scheme for multicarrier hspa + under realistic traffic fluctuation , '' _ mobile networks and applications _21 , no . 2 ,pp . 247258 , 2016 .a. antonopoulos , e. kartsakli , a. bousia , l. alonso , and c. verikoukis , `` energy - efficient infrastructure sharing in multi - operator mobile networks , '' _ ieee communications magazine _ , vol .53 , pp . 242249 , may 2015 .t. beitelmal and h. yanikomeroglu , `` a set cover based algorithm for cell switch - off with different cell sorting criteria , '' in _ 2014 ieee int .conference on communications workshops ( icc ) _ , pp . 641646 , june 2014 .d. gonzalez g. , h. yanikomeroglu , m. garcia - lozano , and s. r. boque , `` a novel multiobjective framework for cell switch - off in dense cellular networks , '' in _ 2014 ieee int .conference on communications ( icc ) _ , pp . 26412647 , june 2014 .a. alam , l. 
dooley , and a. poulton , `` traffic - and - interference aware base station switching for green cellular networks , '' in _ 2013 ieee 18th int . workshop on computer aided modeling and design of communication links and networks _ , pp .6367 , sept 2013 .h. klessig , a. fehske , g. fettweis , and j. voigt , `` cell load - aware energy saving management in self - organizing networks , '' in _ 2013 ieee 78th vehicular technology conf .( vtc fall ) _ , pp .16 , sept 2013 .c. meng , x. li , x. lu , t. liang , y. jiang , and w. heng , `` a low complex energy saving access algorithm based on base station sleep mode , '' in _ 2013 ieee / cic international conference on communications in china ( iccc ) _ , pp . 491495 , aug 2013 .c. peng , s .- b .lee , s. lu , h. luo , and h. li , `` traffic - driven power saving in operational 3 g cellular networks , '' in _acm 17th annual international conference on mobile computing and networking ( mobicom 11 ) _ , pp . 121132 , 2011 .s. zhou , j. gong , z. yang , z. niu , and p. yang , `` green mobile access network with dynamic base station energy saving , '' in _acm 15th annual international conference on mobile computing and networking ( mobicom 09 ) _ , pp . 1012 , 2009 .x. zhou , z. zhao , r. li , y. zhou , and h. zhang , `` the predictability of cellular networks traffic , '' in _ 2012 int .symposium on communications and information technologies ( iscit ) _ , pp . 973978 , oct 2012 .d. , m. garcia - lozano , s. ruiz , and d. s. lee , `` optimization of soft frequency reuse for irregular lte macrocellular networks , '' _ ieee trans . on wireless communications _ ,12 , pp . 24102423 , may 2013 .a. simonsson and a. furuskar , `` uplink power control in lte - overview and performance , subtitle : principles and benefits of utilizing rather than compensating for sinr variations , '' in _ieee 68th vehicular technology conference ( vtc 2008-fall ) _ , pp . 15 , 2008 .d. m. rose , j. baumgarten , and t. kurner , `` spatial traffic distributions for cellular networks with time varying usage intensities per land - use class , '' in _ 2014 ieee 80th vehicular technology conference ( vtc2014-fall ) _ , pp .15 , sept 2014 .s. zhou , d. lee , b. leng , x. zhou , h. zhang , and z. niu , `` on the spatial distribution of base stations and its relation to the traffic density in cellular networks , '' _ ieee access _ ,vol . 3 , pp .9981010 , 2015 .
|
cell switch - off ( cso ) is recognized as a promising approach to reduce the energy consumption in next - generation cellular networks . however , cso poses serious challenges not only from the resource allocation perspective but also from the implementation point of view . indeed , cso represents a difficult optimization problem due to its np - complete nature . moreover , there are a number of important practical limitations in the implementation of cso schemes , such as the need for minimizing the real - time complexity and the number of transitions and cso - induced handovers . this article introduces a novel approach to cso based on multiobjective optimization that makes use of the statistical description of the service demand ( known by operators ) . in addition , downlink and uplink coverage criteria are included and a comparative analysis between different models to characterize intercell interference is also presented to shed light on their impact on cso . the framework distinguishes itself from other proposals in two ways : 1 ) the number of transitions as well as handovers are minimized , and 2 ) the computationally - heavy part of the algorithm is executed offline , which makes its implementation feasible . the results show that the proposed scheme achieves substantial energy savings in small cell deployments where service demand is not uniformly distributed , without compromising the quality - of - service ( qos ) or requiring heavy real - time processing . cellular networks , energy efficiency , cell switch - off , cso , multiobjective optimization , pareto efficiency .
|
wireless data traffic over the internet and mobile networks has been growing at an enormous rate due to the explosion of available video content and proliferation of devices with increased display capabilities . according to cisco visual networking index ,the wireless data traffic is expected to reach more than 24.3 exabytes per month by 2019 .requests for massive data transmission over wireless systems , together with the time - varying nature of wireless connections and constraints on the available resources such as channel bandwidth and capacity , imposes significant challenges .one approach that addresses the challenges mentioned above is to bring part of the requested content closer to end users via caching .this is often referred to as the _ push _ method .more specifically , popular contents are delivered and pre - stored in caches during relatively idle periods of wireless networks , which are retrieved later at peak time to mitigate the network congestion problem . to further reduce the total volume of data traffic , broadcast or multicasttransmissions can be incorporated into the push technique . in particular , considering that popular contents are usually requested by a large number of users , we may utilize a shared channel to deliver them via broadcast or multicast networks .the performance of this technique relies heavily on the push strategy due to the gain only comes from the cache .in contrast to pre - storing popular contents directly at caches closer to end users for future retrieval , a recently proposed implementation of the push technique relies on the idea of coded cache .it can offer extra performance gain in terms of decrease in the network traffic through creating coded - multicasting opportunities by jointly coding multiple data streams at different caches . with coded cache , a careful selection of content overlap across cachescan ensure that multiple requests for different contents can be addressed with a single coded stream .however , existing studies on the coded cache - based push method assumed the presence of a shared link , which is error - free and can be accessed by every user .however , in realistic wireless systems , the shared link is capacity - constrained and more importantly , it may be of different channel conditions for various users . in this case, the system performance would be restricted by the user with the worst channel quality . for illustration , consider a simple scenario where the overlapped content comes from different users with different shared channel conditions .additional delay will be induced because it could take the user with poorer channel condition longer to transmit the required data trunk , which leads to inefficient use of wireless channel bandwidth .therefore , an appropriate resource allocation method is needed to take full advantage of scarce bandwidth and power resources . in this paper, we shall consider the problem of optimal resource allocation for coded cache - based push systems with a shared fading channel to address the above drawback .the study will be conducted in the context of a broadcast / multicast network and investigate how the transport mode and the number of users influence its performance .we aim to maximum the throughput with controllable power and bandwidth . 
to the best of our knowledge , literature on the resource allocation optimization for the coded cache - based scheme and its performance in fading channels is still lacking . the coded cache - based push scheme consists of two phases : the placement phase and the delivery phase . in the placement phase , each content is divided into sub - contents and some of them are pushed to users during the relatively idle periods of the wireless network . content requests issued in the delivery phase are satisfied using the coded multicasting data streams . for illustration purposes , consider the following simple example where an error - free shared link is available . * example 1 * ( coded cache scheme in ) . suppose the server has contents a and b and there are users , each having a local cache of bits . each content also has bits . in the placement phase of the coded cache - based push scheme , each user randomly stores bits of contents a and b , where . specifically , content a might be divided into four segments , denoted by , where represents the part of content a stored in the server , and are the parts of content a pushed into users 1 and 2 , respectively , and is the part of content a stored at both users . besides , let us assume that , where denotes the size of a data segment . content b is divided in the same manner as content a. in the delivery phase , suppose user 1 and user 2 request content b and a , respectively . with the conventional caching scheme , the server needs to unicast and to user 1 while unicasting and to user 2 . the total amount of data transmitted is . on the other hand , with the coded cache , the server can satisfy the same requests by transmitting , and over the shared link , where denotes the bit - wise xor operation . the total traffic volume is . the above results on data traffic reduction can be generalized to the scenario of contents and users . in this case , the realization of the coded cache - based push scheme is given in algorithm 1 . it can be seen that , in the placement phase , the local cache of each user is divided into segments with identical size , each of which stores parts of each content . in the delivery phase , the server traverses each subset of all users and transmits a coded stream to address the content requests . this procedure produces a traffic given by . this traffic consists of two parts : the local cache gain , which is produced by the uncoded caching , and the global cache gain , which is provided by the coded cache and the associated multicasting opportunities . ( algorithm 1 comprises a * placement phase * and a * delivery phase * . ) in realistic wireless networks , different users may experience different states of the shared link , which could be described using , e.g. , the signal - to - noise ratio ( snr ) to reflect the combined effects of the channel fading and the local awgn . this would lead to a phenomenon called _ multicast saturation _ , where the channel capacity remains almost unchanged when the number of multicast users becomes sufficiently large and continues to increase . the capacity saturation has been shown to be due to the limitation imposed by the user with the worst channel condition . the above problem would appear in the context of the coded cache - based push method in wireless networks . this can be seen by examining algorithm 1 , where multicasting transmissions to different users are performed in the delivery phase .
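the traffic saving in example 1 can be checked directly in code ; the sketch below uses random bytes for the two exclusively cached parts and a made - up content size with the proportions implied by random placement ( the exact segment sizes assumed in the example are not reproduced above ) :

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

F = 1000                                   # content size (bytes here, for simplicity)
m = 0.5                                    # normalized cache size per user
part = int(F * m * (1 - m))                # expected size of an exclusively cached part

# A1 is the part of content A cached only at user 1; B2 is the part of B cached only at user 2
A1, B2 = os.urandom(part), os.urandom(part)

# user 1 requests B and user 2 requests A: a single coded stream serves both of them
coded = xor(B2, A1)
assert xor(coded, A1) == B2                # user 1 cancels its cached A1 and recovers B2
assert xor(coded, B2) == A1                # user 2 cancels its cached B2 and recovers A1

print(f"uncoded: {2 * part} bytes, coded multicast: {len(coded)} bytes")
```

for this portion of the delivery the single coded stream replaces two unicasts of the same size , which is where the traffic reduction of the example comes from .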
in the following sections , we shall first present the deployment of the coded caching scheme in the wireless network and then proceed to developing resource allocation schemes to combat the impact of different channel states at various users . consider the wireless network shown in fig . . the associated notation is listed in table i. the network in consideration consists of a wireless broadcast or multicast network with fading channels , which are accessible by users in the covered area . each user has a local cache of bits . the awgn at each user has a psd , where . the server of the wireless system has contents and it is connected to the broadcasting system through wired backhaul . each content has the identical size of bits . the coded cache - based push method is adopted and we assume that each user s local cache has been configured in the placement phase using algorithm 1 .
* : server .
* : the number of users in the system .
* : the size of each content .
* : content .
* : psd of user i .
* : the number of contents that can be prefetched by each user .
* : the index of the content requested by user .
* : the indicator that represents the resource allocation for transmission in time slot and subcarrier .
* : the bandwidth of each subcarrier .
* : the duration of each time slot .
* : the number of subcarriers in the system .
* : the bandwidth allocated for transmission in time slot .
* : the power allocated for transmission in time slot .
* : the receiver set in transmission .
* : the signal size of transmission .
[ tab : notation ] in the delivery phase , user sends its request to the base station . the base station collects all users requests and performs the coded delivery of algorithm 1 . it can be found that there are coded transmissions in this phase , of which are multicasting transmissions . the remaining transmissions are unicast transmissions that deliver the unprefetched bits of each requested content . for example , the base station unicasts to user 1 and to user 2 in * example 1 * . thus , we use the broadcasting system to accomplish the first transmissions , and then use the cellular system to unicast the unprefetched part of each user s request . we shall investigate the problem of resource allocation by taking into account the fact that , in the considered wireless network , each transmission may experience different channel fading and the awgn of the receiving users may have different psds ( i.e. , the received snr of different transmissions can vary ) . for this purpose , let us denote the average transmit power and the bandwidth of the base station as and . besides , we adopt the simplified model in , where it is assumed that the available bandwidth is partitioned evenly into subcarriers and the transmission is time - slotted . let and be the bandwidth of each subcarrier and the duration of each time slot such that . to formulate an optimization problem for resource allocation , we define to be a binary variable such that if the time slot and the subcarrier are allocated to the transmission ; otherwise , . the set of feasible solutions to the resource allocation problem can then be expressed as , where the equality constraint comes from the one - to - one correspondence in the sense that any time slot and subcarrier can only be assigned to one transmission . besides , we denote as the duration of the transmission , and as the bandwidth and power allocated for transmission in the time slot . finally , let be the set of users receiving the transmission and be the number of bits in the transmission .
can be evaluated as follows . consider a particular bit of a certain content . the probability that it is pushed into a user s local cache is . consider the user subset that has users and is receiving the transmission . the probability that this bit is stored solely at each of those users is . thus , the average signal size of the transmission is . with the above notations , the generic resource allocation problem for the coded cache - based push scheme for wireless networks can be formulated as * opt : * \[ \begin{aligned} \min\quad & \sum_{k} w_{k}\, t_{k}\\ \text{s.t.}\quad & \big(\text{average capacity of transmission } k\big) \geq \frac{s_{k}}{t_{u}} , \quad \forall k\\ & n_{k}^{m}=\max_{j\in u_k}\{n_{j}\} , \quad \forall k\\ & 0 \leq b_{k}(i) = \sum_{j=1}^{h}x_{ij}^{k}b_{u} \leq b , \quad \forall k , i\\ & \sum_{k=1}^{2^{k}}b_{k}(i)\leq b ,\qquad \sum_{k=1}^{2^{k}}p_{k}(i)\leq p\\ \text{variable}\quad & x_{ij}^{k}\in\mathfrak{f} \end{aligned} \] the objective function is the weighted sum of the transmission times of the transmissions . since the traffic volume under the coded caching scheme for each transmission is fixed , maximizing the system throughput becomes equivalent to minimizing the total transmission time . the weighting factors are introduced to generalize the formulation . constraints ( 5 ) and ( 6 ) represent that , for each transmission and under the limitation from the user with the worst channel condition , the resource allocation scheme needs to allow a sufficient channel capacity in order to guarantee the delivery of that transmission . constraints ( 7 ) and ( 8 ) state that the amounts of allocated bandwidth and transmit power should not exceed their maximum allowable values . the opt falls into a nonlinear integer programming ( nip ) model , which is np - hard in the strong sense . in fact , a practical resource allocation can not be fully optimal and should follow some simpler rules . in the following section , we will relax this model to get a sub - optimal solution and examine the solution to the generic resource allocation problem in ( 4 ) in two modes , namely , the time - division ( td ) mode and the frequency - division ( fd ) mode . in the time - division ( td ) transport mode , the transmissions are performed in sequence but each transmission utilizes the whole available bandwidth . in this case , the total transmission time equals the sum of the numbers of time slots of all transmissions . hence , and the decision space is , where is a binary variable such that if the time slot is allocated to transmission . putting the definition of into ( 5)-(8 ) and replacing the decision space ( 2 ) by ( 10 ) , we obtain the resource allocation problem for the coded cache - based push technique under the td transport mode . to further simplify the optimization problem , we adopt the equal power td technique , where the transmit power and bandwidth are allocated to each transmission for a fraction of the total transmission time . under the assumption that the transmission consumes percent of the total transmission time , the resource allocation problem can then be re - written as * opt - td * . in the * opt - td * model , the objective function is the total transmission time to satisfy all users requests . the principle of the above optimization framework is illustrated in fig . [ fig3](a ) . applying the _ cauchy - schwarz inequality _ , we can obtain the optimal solutions , which are given by . when it comes to the frequency - division ( fd ) mode , since the time slot is fully allocated to each transmission , each transmission operates in a parallel manner .
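to give a feel for the quantities entering opt and opt - td , the sketch below computes the expected coded - segment sizes implied by random placement and the total delivery time when each transmission sequentially gets the full band and power , with a plain shannon rate limited by the worst receiver ; the channel numbers are illustrative and this is not the closed - form solution obtained via the cauchy - schwarz argument above :

```python
import math
from itertools import combinations

def segment_size(F, m, subset_size, num_users):
    """Expected bits of the coded segment intended for a user subset of the given size.

    With random placement, a bit is cached at a given user with probability m = M/N;
    the segment for a subset consists of bits cached at exactly those users and no others."""
    return F * (m ** (subset_size - 1)) * ((1 - m) ** (num_users - subset_size + 1))

def td_total_time(gains, noise_psd, F, m, B, P):
    """Total delivery time when each coded transmission uses the whole band B and power P
    in sequence (equal-power TD); the rate is set by the worst user in the target subset."""
    users = range(len(gains))
    total = 0.0
    for r in range(1, len(gains) + 1):
        for subset in combinations(users, r):
            worst_snr = min(P * gains[u] / (noise_psd[u] * B) for u in subset)
            rate = B * math.log2(1.0 + worst_snr)            # bits per second
            total += segment_size(F, m, r, len(gains)) / rate
    return total

gains = [1.0, 0.25, 0.05]            # illustrative channel power gains of three users
noise = [1e-9, 1e-9, 1e-9]           # noise PSD (W/Hz), identical here for simplicity
print(td_total_time(gains, noise, F=1e8, m=0.5, B=1e7, P=1.0))
```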
in the fd mode , therefore , the total transmission time is determined by the transmission that has the longest time , i.e. , the infinity norm of the per - transmission times , and the decision space is , where is a binary variable such that if the subcarrier is allocated to the transmission . satisfies . putting the definition of into ( 5)-(8 ) and substituting the decision space ( 2 ) using ( 11 ) , the optimization problem for resource allocation in the fd mode is established . similar to the td case , under the assumption that the base station allocates of its total power and of its total bandwidth to the transmission , we can write the resource allocation problem for the coded cache - based push method in the wireless network as * opt - fd * . in the * opt - fd * model , the objective function is the total transmission time to satisfy all requests . fig . [ fig3](b ) shows the principle of the resource allocation problem in the fd mode . the solutions to the * opt - fd * problem can be found via the application of standard nonlinear programming algorithms . in this section , the proposed resource allocation schemes , including td and fd , are simulated in a broadcast network . for comparison , we use the traditional uncoded cache method as the baseline scheme . * baseline scheme : * the system only consists of a cellular network , which unicasts the unprefetched content to each user , and we consider the same resource optimization model to get the maximum system throughput . for simplification , we denote it as and the coded caching scheme as . the simulations are conducted under the following assumptions . the users are uniformly distributed in a cell with a large - scale path loss exponent of 2 . the broadcast radius is km . the number of users is between 2 and 128 ( limited due to the transmissions of the coded cache scheme ) . the channel is a frequency - selective ricean fading channel . the noise variance is . from the previous theoretical analysis , the traffic volume can be reduced due to the prefetching of the local cache , and the coded caching scheme introduces a global cache gain , which shows a huge traffic reducing gain compared to the baseline scheme . however , in the wireless fading channel , one of the fundamental questions is how the system throughput scales as the size of the local cache . does there exist a global cache gain due to the coding ? based on the above resource allocation model , fig . [ fig4 ] and fig . [ fig5 ] show the comparison of the coded caching scheme and the baseline scheme when the cache size increases . we have the following observations : the system throughput increases as the cache size increases under the above two schemes and two kinds of resource allocation strategies , except for the coded caching scheme under time division , and the performance of both schemes under the fd mode is better than under the td mode . we plot the throughput gain of the coded caching scheme compared to the baseline scheme in fig . [ fig5 ] . under the fd mode , the coded caching scheme shows an approximately constant throughput gain of . under the td mode , it first decreases , then increases up to .
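since opt - fd is solved with generic nonlinear programming , one possible way to set up such a solver is sketched below ; the segment sizes and worst - user gains are illustrative placeholders , and the min - max objective is handed directly to a general - purpose routine ( scipy s slsqp ) rather than reformulated , so this is a sketch and not the exact formulation behind the reported figures :

```python
import numpy as np
from scipy.optimize import minimize

# illustrative per-transmission data: bits to deliver and worst-user gain over noise PSD
s = np.array([4e6, 2e6, 1e6])            # segment sizes (bits)
g = np.array([1e8, 2e7, 5e6])            # worst-user channel gain divided by noise PSD (Hz/W)
B_tot, P_tot = 1e7, 2.0                  # total bandwidth (Hz) and transmit power (W)
n = len(s)

def worst_time(z):
    """Longest per-transmission delivery time for shares z = [b_1..b_n, p_1..p_n]."""
    b, p = z[:n], z[n:]
    rate = b * np.log2(1.0 + p * g / np.maximum(b, 1e-9))   # Shannon rate of each transmission
    return np.max(s / np.maximum(rate, 1e-9))

x0 = np.concatenate([np.full(n, B_tot / n), np.full(n, P_tot / n)])
cons = [{"type": "ineq", "fun": lambda z: B_tot - z[:n].sum()},   # bandwidth budget
        {"type": "ineq", "fun": lambda z: P_tot - z[n:].sum()}]   # power budget
bnds = [(1e3, B_tot)] * n + [(1e-6, P_tot)] * n
res = minimize(worst_time, x0, method="SLSQP", bounds=bnds, constraints=cons)
print(res.x[:n], res.x[n:], worst_time(res.x))
```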
for comparison, we also plot the traffic gain of coded cache scheme , which is much larger than the throughput gain .these results show that , in the wireless communication setup , the gain of coded cache is limited .the main reason is that , the system throughput is bounded by both traffic volume and channel rate .although the traffic volume is reduced a lot due to the coded multicast , the multicast capacity is restricted by the user that has the worst channel gain .[ fig : usernum ] as shown in the fig.[fig4 ] , the system throughput of coded caching scheme under time division shows a unique single valley manner . under small cache size ,the system throughput is decreasing , while under large cache size , the system throughput is increasing .the main reason behind this trend is that _ the traffic reduce due to the cache size increasing can not improve the system throughput under td_. here we show a possible explanation . from the theoretical analysis in section v.a ,the optimal solution under td strategy equals to a weighted sum of all and contains a root square form , which will weaken the traffic volume reducing effect of by the extraction of it and weighed summation .then we show the impact of the network resources , such as the system bandwidth and the system power , on the network performance .based on the prior results we get , both coded caching scheme and base scheme show different performances with different cache sizes .thus , we conduct the following simulation under two kinds of cache sizes .fig.[fig6](a ) and fig.[fig6](b ) plots the system throughput versus system power under two kinds of cache sizes .it can be seen that the system throughput increases when the system power increases , and the gap between above schemes under two strategies is constant . and coded caching scheme performs a bit better than conventional scheme in fd modes .fig.[fig6](c ) and fig.[fig6](d ) plots the system throughput versus system bandwidth under two kinds of cache size .it can be seen that the throughput also increases with bandwidth . however , when the cache size is small , the performance of coded caching scheme under td mode is even worse than conventional one .in contrast , when it comes to the fd mode , given a larger cache size , coded caching scheme with resource allocation is much more better than the conventional scheme .from the previous analysis , we show that , in the wireless fading environments , the coded caching scheme almost has a constant gain compared to the baseline scheme , which is caused by the wireless channel fading phenomenon .moreover , we investigate the relationship between system throughput of both schemes and the degree of channel fading .since a multicast system saturates the capacity when the number of users increases , it is necessary to investigate how the system throughput scales as the the number of users . in fig .[ fig : usernum ] , we plot the system throughput versus the number of users with . for comparison , we do not consider the outage. under the coded cache scheme , the system throughput becomes zero when the number of users is larger enough , regardless of any resource allocation strategies .based on the coded cache scheme , most of transmissions will be restricted by the user with worst channel gain . since the worst channel gain will be zero when the number of users is infinity, the most parts of system throughput will be zero . 
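the worst - user bottleneck discussed above is easy to reproduce with a tiny monte carlo experiment ; rayleigh fading is used below instead of the ricean channel of the simulations purely to keep the sketch short , and the snr value is arbitrary :

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_multicast_rate(num_users, snr=10.0, trials=5000):
    """Average spectral efficiency of one multicast stream limited by the worst of
    num_users i.i.d. Rayleigh-faded links (exponential power gains, unit mean)."""
    gains = rng.exponential(1.0, size=(trials, num_users))
    worst = gains.min(axis=1)                        # the weakest user sets the common rate
    return np.log2(1.0 + snr * worst).mean()         # bit/s/Hz

for k in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"K={k:3d}  multicast rate ~ {avg_multicast_rate(k):.3f} bit/s/Hz")
```

the printed rates shrink steadily with the number of users , which is the mechanism behind the vanishing throughput of the coded scheme for large user populations described in the text .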
Under the baseline scheme, the user with the worst channel gain only restricts its own throughput, while the overall system throughput decreases only slightly thanks to efficient bandwidth allocation.

In this paper, we applied the coded caching scheme to a wireless network with fading channels. We investigated the performance of coded caching with resource allocation, taking the transmission mode of the wireless scenario into account, and formulated a sub-optimal resource allocation problem. In a wireless environment where users experience different channel qualities, the system throughput of the coded caching scheme is limited by the worst-case users, and a performance gain appears only for large cache sizes and large system bandwidth. Moreover, with power or bandwidth allocation, the throughput of the coded caching scheme under the FD transmission mode is significantly better than under the TD mode. Further simulations show that its performance degrades as the number of users becomes sufficiently large when user outage is not considered. Recently, Wang proposed group coded delivery (GCD), i.e., dividing the users into groups and operating the coded caching scheme separately within each group, which preserves the traffic-volume performance under heterogeneous user cache sizes. Adopting GCD in our setting can effectively counteract the restriction imposed by the worst-case user, since users can be grouped according to their channel gains and the resource allocation carried out separately for each group. In future work, we will investigate deploying GCD to further improve the performance of coded caching in wireless networks.
|
The rapid growth of data volume and the accompanying congestion problems in wireless networks have become critical issues for content providers. A novel technique, termed coded caching, has been proposed to relieve this burden. By creating coded-multicasting opportunities, the coded caching scheme provides an extra performance gain over the conventional push technique, which simply pre-stores contents at local caches during idle network periods. However, existing works on coded caching assume the availability of an error-free shared channel accessible by every user. This paper considers the more realistic scenario in which each user may experience a different link quality; in this case the system performance is restricted by the user with the worst channel condition. Corresponding resource allocation schemes aimed at overcoming this obstacle are developed. Specifically, we employ the coded caching scheme in the time division and frequency division transmission modes and formulate the corresponding sub-optimal problems, in which power and bandwidth, respectively, are allocated to maximize the system throughput. The simulation results show that the throughput of the technique in the wireless scenario is limited and decreases as the number of users becomes sufficiently large.
|
the goal of this paper is to obtain homogenization results for the dynamics of accelerated frenkel - kontorova type systems with types of particles .the frenkel - kontorova model is a simple physical model used in various fields : mechanics , biology , chemestry _ etc . _the reader is referred to for a general presentation of models and mathematical problems . in this introduction ,we start with the simplest accelerated frenkel - kontorova model where there is only one type of particle ( see eq . ) .we then explain how to deal with types of particles ( see eq . ) .we finally present the general case , namely systems of odes of the following form ( for a fixed ) where denotes the position of the particle at the time . here , is the mass of the particle and is the force acting on the particle , which will be made precise later .remark the presence of the damping term on the left hand side of the equation .if the mass is assumed to be small enough , then this system is monotone .we will make such an assumption and the monotonicity of the system is fundamental in our analysis .we recall that the case of fully overdamped dynamics , i.e. for , has already been treated in ( for only one type of particles ) .several results are related to our analysis .for instance in , homogenization results are obtained for monotone systems of hamilton - jacobi equations .notice that they obtain a system at the limit while we will obtain a single equation .techniques from dynamical systems are also used to study systems of odes ; see for instance and references therein .the classical frenkel - kontorova model describes a chain of classical particles evolving in a one dimensional space , coupled with their neighbours and subjected to a periodic potential . if denotes time and denotes the position of the particle , one of the simplest fk models is given by the following dynamics where denotes the mass of the particle, is a constant driving force which can make the whole `` train of particles '' move and the term describes the force created by a periodic potential whose period is assumed to be . notice that in the previous equation , we set to one physical constants in front of the elastic and the exterior forces ( friction and periodic potential ) . the goal of our work is to describe what is the macroscopic behaviour of the solution of as the number of particles per length unit goes to infinity .as mentioned above , the particular case where is referred to as the fully overdamped one and has been studied in .we would like next to give the flavour of our main results .in order to do so , let us assume that at initial time , particles satisfy for some and some lipschitz continuous function which satisfies the following assumption * * initial gradient bounded from above and below * for some fixed . such an assumption can be interpreted by saying that at initial time , the number of particles per length unit lies in .it is then natural to ask what is the macroscopic behaviour of the solution of as goes to zero , _i.e. _ as the number of particles per length unit goes to infinity .to this end , we define the following function which describes the rescaled positions of the particles where denotes the floor integer part . one of our main results states that the limiting dynamics as goes to of is determined by a first order hamilton - jacobi equation of the form where is a continuous function to be determined . 
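Before stating the result precisely, the rescaling can be made concrete numerically. The sketch below integrates a damped, accelerated chain of the type just described and forms a rescaled profile of the form eps * U_{floor(x/eps)}(t/eps) from a linear initial datum of slope p. The specific force used here (nearest-neighbour coupling, a sin(2*pi*u) periodic term and a constant drive F) and all numerical values are illustrative assumptions rather than the exact system analysed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's): a small mass m0 keeps the system monotone.
m0, F, eps, p = 0.05, 0.6, 0.02, 1.5
n_part = int(1.0 / eps)                     # particles i = 0, ..., n_part - 1

def rhs(t, y):
    u, v = y[:n_part], y[n_part:]           # positions U_i and velocities
    u_ext = np.concatenate([[u[0] - p], u, [u[-1] + p]])   # ghost neighbours of slope p
    coupling = u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2]
    # damped Newton law:  m0 * U_i'' + U_i' = coupling - periodic force + constant drive
    acc = (coupling - np.sin(2.0 * np.pi * u) + F - v) / m0
    return np.concatenate([v, acc])

# Linear initial datum u0(x) = p*x, i.e. U_i(0) = u0(i*eps)/eps = p*i
u_init = p * np.arange(n_part)
y0 = np.concatenate([u_init, np.zeros(n_part)])

T = 2.0                                     # macroscopic time horizon
sol = solve_ivp(rhs, (0.0, T / eps), y0, t_eval=[T / eps], rtol=1e-6, atol=1e-8)
u_eps = eps * sol.y[:n_part, -1]            # rescaled profile at time T, positions i*eps
velocity = np.mean(u_eps - eps * u_init) / T
print("approximate macroscopic velocity for slope p =", p, ":", round(velocity, 4))
```

For each fixed slope p, the averaged displacement per unit macroscopic time approximates the velocity selected by the effective Hamiltonian of the limiting Hamilton-Jacobi equation.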
more precisely , we have the following homogenization result [ th:0 ] there exists a critical value such that for all , m_0^c] ] denotes the function .this section is devoted to the definition of viscosity solutions for systems of equations such as , and . in order to construct hull functionswhen proving theorem [ th:2 ] , we will also need to consider a perturbation of with linear plus bounded initial data .for all these reasons , we define a viscosity solution for a generic equation whose hamiltonian satisfies proper assumptions . before making precise assumptions , definitions and fundamental results we will need later ( such as stability , comparison principle , existence ), we refer the reader to the user s guide of crandall , ishii , lions and the book of barles for an introduction to viscosity solutions and and references therein for results concerning viscosity solutions for systems of weakly coupled partial differential equations . as we mentioned it before ,we consider systems with general non - linearities .precisely , for , we consider the following cauchy problem : for , and , {j , m},\xi_j , \inf_{y'\in{{\mathbb r}}}\left(\xi_j(\tau , y')-py'\right ) + py-\xi_j(\tau , y),(\xi_j)_y ) \end{array } \right . \\\left\ { \begin{array}{l } u_{j+n}(\tau , y)= u_{j}(\tau , y+1)\\ \xi_{j+n}(\tau , y)=\xi_j(\tau , y+1 ) \end{array } \right .\end{array}\right.\ ] ] submitted to the initial conditions the most important example we have in mind is the following one for some constants , and where appears in , , . in view of, it is clear that in the case where effectively depends on the variable , solutions must be such that the infimum of is finite for all time .hence , when do depend on , we will only consider solutions satisfying for some : for all and all , we may assume that holds true for all time for a family of constants .since we have to solve a cauchy problem , we have to assume that the initial datum satisfies the assumption * * ( initial condition ) * + satisfies ( a0 ) ( with ) ; it also satisfies if depends on for some .as far as are concerned , we make the following assumptions . * * ( regularity ) * * * is continuous . * * for all , there exists such that for all , with ] . *the function is a _ sub - solution _ ( resp .a _ super - solution _ ) of on if holds true for in the case where depends on , and and for all , and are upper semi - continuous ( resp . lower semi - continuous ) , and for all and any test function such that attains a local maximum ( resp .a local minimum ) at the point , then we have and for all and any test function such that attains a local maximum ( resp . a local minimum ) at the point , then we have {j , m}(y),\xi_j(\tau , y ) , \inf_{y'\in{{\mathbb r}}}\left(\xi_j(\tau , y')-py'\right)+py-\xi_j(\tau , u),\phi_y(\tau , y))\ ] ] * the function is a _ sub - solution _ ( resp ._ super - solution _ ) of , if is a sub - solution ( resp .super - solution ) on and if it satisfies moreover for all * a function is a _ viscosity solution _ of ( [ eq:22n ] ) ( resp . of , )if is a sub - solution and is a super - solution of ( [ eq:22n ] ) ( resp .of , ) .sub- and super - solutions satisfy the following comparison principle which is a key property of the equation .[ pro:3 ] assume ( a0 ) and that satisfy ( a1)-(a5 ) .let ( resp . ) be a sub - solution ( resp . 
a super - solution ) of ,such that holds true for and in the case where depends on .we also assume that there exists a constant such that for all and \times { { \mathbb r}} ] and , assume ( a0)-(a5 ) .there exists a constant such that and are respectively super and sub - solution of , for all .moreover , we can choose where , and are respectively given in , ( a0 ) and .we prove that is a super - solution of , . in view of ( a0 ) with , we have for all and {j , m}(y),\xi^+_j(\tau , y ) , \inf_{y'\in{{\mathbb r}}}\left(\xi^+_j(\tau , y')-py'\right)+py-\xi_j^+(\tau , y),(\xi^+_j)_y(\tau , y)\bigg)\\ = & g_j\bigg(\tau , [ u^+(\tau,\cdot)-\lfloor u^+_j ( \tau , y)\rfloor]_{j , m}(y),\xi^+_j(\tau , y ) -\lfloor u^+_j ( \tau , y)\rfloor,\\ & \quad \quad\inf_{y'\in{{\mathbb r}}}\left(\xi_{0}(y'+\frac j n)-py'\right)+py -\xi_{0}(y+\frac j n),(\xi_{0})_y(y+\frac j n)\bigg)\\ \le & l_2 c_0+l_0 + l_0 + g_j\bigg(\tau , [ u^+(\tau,\cdot)-u^+_j ( \tau , y)]_{j , m}(y),\xi^+_j(\tau , y ) - u^+_j ( \tau , y),0,(\xi_{0})_y(y+\frac j n)\bigg)\\ \le & l_2 c_0+l_0+l_0 + l_0 k_0 \frac m n + l_0m_0 + g_j\bigg(\tau , 0,\dots , 0 , 0,0 , ( \xi_{0})_y(y+\frac j n)\bigg)\\ \le & l_2 c_0 + 2l_0 + l_0 k_0 \frac m n+l_0m_0+\overline g\end{aligned}\ ] ] where we have used the periodicity assumption ( a4 ) for the second line , assumptions ( a0 ) and ( a1)(ii ) for the third line , the fact that for and assumption ( a0 ) for the forth line and for the last line . when is independent on , we can simply choose .this ends the proof of the lemma . by applying perron s method together with the comparison principle, we immediately get from the existence of barriers the following result assume ( a0)-(a5 ). then there exists a unique solution of , .moreover the functions are continuous for all .we now claim that particles are ordered .[ pro : croissancej ] assume ( a0 ) and that the s satisfy ( a1)-(a6 ) .let be a solution of - such that holds true for if depends on .assume also that the s are lipschitz continuous in space and let denote a common lipschitz constant .then and are non - decreasing with respect to .the idea of the proof is to define .in particular , we have moreover , is a solution of {j , m},\zeta_j , \inf_{y'\in{{\mathbb r}}}\left(\zeta_j(\tau , y')-py'\right)+py-\zeta_j(\tau , y),(\zeta_j)_y),\\ \end{array}\right . \\ \\ \left\{\begin{array}{l } v_{j+n}(\tau , y)= v_{j}(\tau , y+1),\\ \zeta_{j+n}(\tau , y)=\zeta_j(\tau , y+1)\\ \end{array}\right . \\ \\ \left\{\begin{array}{l } v_j(0,y)=u_{0}(y+\frac j n),\\ \zeta_j(0,y)=\xi_{0}(y+\frac j n ) \ , .\end{array}\right . 
\end{array}\right.\] ] now the goal is to obtain and .the arguments are essentially the same as those used in the proof of the comparison principle .the main difference is that is replaced with {\bar j , m}(\bar x),\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x)\nonumber\\ & - g_{\bar j+1}(\bar t,[v(\bar t,\cdot)]_{\bar j , m}(\bar y),\zeta_{\bar j}(\bar t,\bar y ) , \inf(\zeta_{\bar j}(\bar t , y')-p y')+p\bar y-\zeta_{\bar j}(\bar t,\bar y),\bar p)\\ \\\le & g_{\bar j}(\bar t,[u(\bar t,\cdot)]_{\bar j , m}(\bar y),\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x)\nonumber\\ & - g_{\bar j+1}(\bar t,[v(\bar t,\cdot)]_{\bar j , m}(\bar y),\zeta_{\bar j}(\bar t,\bar y ) , \inf(\zeta_{\bar j}(\bar t , y')-p y')+p\bar y-\zeta_{\bar j}(\bar t,\bar y),\bar p ) + l_0 l_u |\bar x-\bar y| \\ & = : \overline \delta g_j \end{aligned}\ ] ] where we have used the lipschitz continuity of and assumption ( a1 ) . to obtain the desired contradiction, we have to estimate the right hand side of this inequality .first , using step 3 of the proof of the comparison principle ( with the same notation ) , we can define such that for , we get from the following estimate using monotonicity assumptions ( a2)-(a3 ) together with ( a1 ) , we get {\bar{j},m},\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x)\\ & & - g_{\bar j+1}(\bar t,[v(\bar t,\bar y)+ ( \cdot+1)\delta ] _{ \bar j , m},\zeta_{\bar j}(\bar t,\bar y ) , \inf(\zeta_{\bar j}(\bar t , y')-p y')+p\bar y-\zeta_{\bar j}(\bar t,\bar y),\bar p ) \\ & & + l_0(2m+1)\delta + l_0 l_u |\bar x-\bar y|\ , .\end{aligned}\ ] ] now we are going to use assumption ( a6 ) .remark first that we have for all and for , yields thus ( a6 ) implies that {\bar j , m}(\bar y),\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x ) \\\le g_{\bar j+1}(\bar t,[v(\bar t,\bar y)+ ( \cdot+1)\delta ] _ { \bar j , m},\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x ) \ , .\end{gathered}\ ] ] hence {\bar j , m},\xi_{\bar j}(\bar t,\bar x ) , \inf(\xi_{\bar j}(\bar t , y')-p y')+p\bar x-\xi_{\bar j}(\bar t,\bar x),\bar p+2\a\bar x ) \\ - g_{\bar j+1}(\bar t,[v(\bar t,\bar y)+ ( \cdot+1)\delta ] _ { \bar j , m},\zeta_{\bar j}(\bar t,\bar y ) , \inf(\zeta_{\bar j}(\bar t , y')-p y')+p\bar y-\zeta_{\bar j}(\bar t,\bar y),\bar p)\\ + l_0(2m+1)(\xi_{\bar j } ( \bar t , \bar x)-\zeta_{\bar j}(\bar t,\bar y ) ) + 2 ( m+1)l_0 l_u |\bar x-\bar y| + l_0 ( 2m+1 ) \alpha \max_{k\in\{\bar j - m,\dots , \bar j+m\}}(2|l_k \bar x|+l_k^2)\ , .\end{gathered}\ ] ] now , to obtain the desired contradiction , it suffices to follow the computation from ; in particular , choose in. then we obtain which is absurd for and small enough ( since as )this section is devoted to the proof of the main homogenization result ( theorem [ th:3n ] ) .the proof relies on the existence of hull functions ( theorem [ th:2 ] ) and qualitative properties of the effective hamiltonian ( theorem [ th:4 ] ) . as a matter of fact, we will use the existence of lipschitz continuous sub- and super - hull functions ( see proposition [ pro:139 ] ) .all these results are proved in the next sections .we start with some preliminary results . 
through a change of variables ,the following result is a straightforward corollary of lemma [ lem:1 ] and the comparison principle .[ lem:2 ] assume ( a0)-(a5 ) .then there is a constant , such that for all , the solution of , satisfies for all and we also have the following preliminary lemma .[ lem:3 ] assume ( a0)-(a5 ) .then the solution of , satisfies for all , , and and in particular we obtain that functions and are non - decreasing in .we prove the bound from below ( the proof is similar for the bound from above ) .we first remark that ( a0 ) implies that the initial condition satisfies for all and from ( a4 ) , we know that for , the equation is invariant by addition of integers to solutions .after rescaling it , equation is invariant by addition of constants of the form , .for this reason the solution of associated with initial data is .similarly the equation is invariant by space translations .therefore the solution with initial data is . finally , from and the comparison principle ( proposition [ pro:3 ] ) , we get which proves the bound from below .this ends the proof of the lemma .we now turn to the proof of theorem [ th:3n ] .we only have to prove the result for all . indeed ,using the fact that and , we will get the complete result . for all ,we introduce the following half - relaxed limits these functions are well defined thanks to lemma [ lem:2 ] .we then define we get from lemmata [ lem:2 ] and [ lem:3 ] that both functions satisfy for all , ( recall that as ) we are going to prove that is a sub - solution of .similarly , we can prove that is a super - solution of the same equation .therefore , from the comparison principle for , we get that .and then , which shows the expected convergence of the full sequence and towards for all .we now prove in several steps that is a sub - solution of .we classically argue by contradiction : we assume that there exists and a test function such that let denote .from , we get combining theorems [ th:2 ] and [ th:4 ] , we get the existence of a hull function associated with such that indeed , we know from these results that the effective hamiltonian is non - decreasing in , continuous and goes to as .we now apply the perturbed test function method introduced by evans in terms here of hull functions instead of correctors .precisely , let us consider the following twisted perturbed test functions for here the test functions are twisted in the same way as in .we then define the family of perturbed test functions by using the following relation in order to get a contradiction , we first assume that the functions and are and continuous in uniformly in . in view of the third line of , we see that this implies that and are uniformly continuous in ( uniformly in ) . for simplicity , and since we will construct approximate hull functions with such a ( lipschitz ) regularity , we even assume that and are globally lipschitz continuous in ( uniformly in ) .we will next see how to treat the general case .* case 1 : and are and globally lipschitz continuous in * * step 1.1 : is a super - solution of ( [ eq:6n ] ) in a neighbourhood of * when and are , it is sufficient to check directly the super - solution property of for .we begin by the equation satisfied by .we have , with and , where we have used the equation satisfied by to get the second line and the non - negativity of , the fact that and the fact that is , to get the last line on for small enough .we now turn to the equation satisfied by . 
with the same notation, we have {i , m}(x)\right ) -\frac{\alpha_0}\eps ( \phi^{\varepsilon}_i-\psi^{\varepsilon}_i)\\ = & ( g_i)_\tau(\tau , z ) + \phi_t ( t , x ) ( g_i)_z(\tau , z ) - 2 f_i\left(\tau,\left[\frac{\phi^\eps(t,\cdot)}{\eps}\right]_{i , m}(x)\right ) - \alpha_0 ( h_i(\tau , z)-g_i(\tau , z))\nonumber\\ = & ( \phi_t(t , x)-\lambda)\ ( g_i)_z ( \tau , z ) + 2 \overline l + 2\left ( f_i\left(\tau , \left[h(\tau,\cdot)\right]_{i , m}(z)\right ) - f_i\left(\tau,\left[\frac{\phi^\eps(t,\cdot)}{\eps}\right]_{i , m}(x)\right)\right)\nonumber\\ \ge & ( \phi_t(t , x)-\lambda)\ ( g_i)_z ( \tau , z ) + 2 \overline l -2l_{f}\left|\left[h(\tau,\cdot)\right]_{i , m}(z ) - \left[\frac{\phi^\eps(t,\cdot)}{\eps}\right]_{i , m}(x)\right|_\infty\nonumber\end{aligned}\ ] ] where we have used that equation is satisfied by to get the third line and ( a1 ) to get the fourth one ; here , denotes the largest lipschitz constants of the s ( for ) with respect to .let us next estimate , for and , , then , by definition of , we have if , let us define such that .we then have where only depends on the modulus of continuity of on ( for small enough such that with uniformly bounded and then ) . hence ,if are lipschitz continuous with respect to uniformly in and , we conclude that we can choose small enough so that {i , m}(z ) - \left[\frac{\phi^\eps(t,\cdot)}{\eps}\right]_{i , m}(x ) \right|_\infty \ge 0\ , .\ ] ] combining and , we obtain {i , m}(x)\right)+ \frac{\alpha_0}{{\varepsilon } } ( \phi^{\varepsilon}_i-\psi^{\varepsilon}_i ) \ge & \left(\phi_t(t , x)-\lambda \right ) \ ( g_i)_z ( \tau , z)\\ \ge & \left(\frac \theta 2+\phi_t(t , x)-\phi_t({\overline{t}},{\overline{x}})\right ) \( g_i)_z ( \tau , z ) \\ = & \left(\frac \theta 2 + o_r ( 1)\right ) \( g_i)_z ( \tau , z ) \ge 0 \ , .\end{aligned}\ ] ] we used the non - negativity of , the fact that and again the fact that is , to get the result on for small enough .therefore , when the and are and lipschitz continuous on uniformly in and , is a viscosity super - solution of ( [ eq:6n ] ) on .* step 1.2 : getting the contradiction * by construction ( see remark [ rem : osc - hull ] ) , we have and as for all , and therefore from the fact that on ( see ( [ eq:31 ] ) ) , we get for small enough with the integer in the same way , we have therefore , for , we can apply the comparison principle on bounded sets to get passing to the limit as goes to zero , we get which implies that this gives a contradiction with in ( [ eq:31 ] ) . therefore is a sub - solution of ( [ eq:3 ] ) on and we get that and converges locally uniformly to for .this ends the proof of the theorem .* case 2 : general case for * in the general case , we can not check by a direct computation that is a super - solution on .the difficulty is due to the fact that the and the may not be lipschitz continuous in the variable .this kind of difficulties were overcome in by using lipschitz super - hull functions , _i.e. _ functions satisfying , except that the function is only a super - solution of the equation appearing in the first line .indeed , it is clear from the previous computations that it is enough to conclude . in ,such regular super - hull functions ( as a matter of fact , regular super - correctors ) were built as exact solutions of an approximate hamilton - jacobi equation .moreover this lipschitz continuous hull function is a super - solution for the exact hamiltonian with a slightly bigger . 
herewe conclude using a similar result , namely proposition [ pro:139 ] .notice that in proposition [ pro:139 ] and are only lipschitz continuous and not .this is not a restriction , because the result of step 1.1 can be checked in the viscosity sense using test function ( see for further details ) .comparing with , notice that we do not have to introduce an additional dimension because here ( see ) .this ends the proof of the theorem .in this section , we first study the ergodicity of the equation by studying the associated cauchy problem ( subsection [ subsec : ergo ] ) .we then construct hull functions ( subsection [ subsec : hull ] ) . in this subsection ,we study the cauchy problem associated with with with , and with initial data .we prove that there exists a real number ( called the `` slope in time '' or `` rotation number '' ) such that the solution stays at a finite distance of the linear function .we also estimate this distance and give qualitative properties of the solution .we begin by a regularity result concerning the solution of .[ pro:130 ] assume ( a1)-(a5 ) and .let , and be the solution of , with defined by and .assume that holds true for .then satisfies we first show that and are non - decreasing with respect to .since the equation is invariant by translations in and using the fact that for all , we have we deduce from the comparison principle that which shows that and are non - decreasing in .we now explain how to get the lipschitz estimate .we would like to prove that where as soon as for any .we argue by contradiction by assuming that for such an .we next exhibit a contradiction .the supremum defining is attained since satisfies and can be explicitly computed .[ [ case-1 . ] ] case 1 .+ + + + + + + assume that the supremum is attained for the function at , , .since we have by assumption , this implies that , .hence we can obtain the two following viscosity inequalities ( by doubling the time variable and passing to the limit ) with .subtracting these inequalities , we obtain we thus get which is a contradiction in case 1 . [[ case-2 . ] ] case 2 .+ + + + + + + assume next that the supremum is attained for the function . by using the same notation and by arguing similarly , we obtain the following inequality where is the heaviside function and where we have used .we now use * the fact that the supremum is attained for the function * the fact that implies that ( remember that we already proved that is non - decreasing with respect to ) * assumption ( a1 ) ; in the following , still denotes de largest lipschitz constants of the s with respect to ; * the fact that in order to get from the previous inequality the following one using the same computation as the one of the proof of proposition [ pro:3 ] step 3 , we get where is a constant . since and , we finally deduce that for small enough , it is now sufficient to use once again that and the fact that in order to get the desired contradiction in case 2 .the proof is now complete .we now claim that particles are ordered .[ pro : croissancej - delta ] assume ( a0 ) , ( a1)-(a6 ) and let , and be the solution of , with defined by .assume that holds true for if .then and are non - decreasing with respect to . if , the results is a straightforward consequence of propositions [ pro : croissancej ] and [ pro:130 ] . if , the result is obtained by stability of viscosity solution ( i.e. and as ) . 
[ pro:11 ]let and .assume ( a0)-(a6 ) and let be a solution of , with defined in and with initial data with some .then there exists such that for all , and ( where is chosen equal to zero for ) .moreover we have for all , , in order to prove proposition [ pro:11 ] , we will need the following classical lemma from ergodic theory ( see for instance ) .[ lem : ergo ] consider a continuous function which is sub - additive , that is to say : for all , then has a limit as and we now turn to the proof of proposition [ pro:11 ] .we perform the proof in three steps .we first recall that the fact that and are non - decreasing in and follows from propositions [ pro:130 ] and [ pro : croissancej - delta ] .* step 1 : control of the space oscillations . *we are going to prove the following estimate . for all , all and all , therefore from the comparison principle and from the integer periodicity of the hamiltonian ( see ( a3 ) ), we get that since is non - decreasing in , we deduce that for all ] .we have {j , m}(y))=&f_j(\tau,[u(\tau,\cdot ) -\lfloor u_j(\tau , y ) \rfloor]_{j , m}(y))\nonumber\\ \le & l_f+f_j(\tau,[u(\tau,\cdot)- u_j(\tau , y ) ] _ { j , m}(y))\nonumber\\ \le & l_f+l_f \sup_{k\in \{0,\dots , m\ } } ( u_{j+k}(\tau , y)- u_j(\tau , y ) ) + \sup_\tau f(\tau , 0,\dots 0)\end{aligned}\ ] ] where we have used the periodicity assumption ( a4 ) for the first line , the lipschitz regularity of for the second and third ones , and the fact that is non - decreasing with respect to for the third line .moreover for all , we have that where we have used the periodicity of for the first line , the monotonicity in of for the second one and the control of the oscillation for the third one .we then deduce that {j , m}(y))\le l_f ( 2 + p ( m+n ) ) + \sup_\tau f(\tau , 0,\dots 0).\ ] ] combining this inequality with and , we deduce that we now define for all .classical arguments from viscosity solution theory show that we then deduce that using the same arguments with super - solution for , we get the desired result .* step 3 : control of the time oscillations .* we now explain how to control the time oscillations . the proof is inspired of .let us introduce the following continuous functions defined for and and in particular , these functions satisfy .the goal is to prove that and have a common limit as .we would like to apply lemma [ lem : ergo ] . in view of the definition of and , we see that and are sub - additive .analogously , and are also sub - additive .hence , if we can prove that these quantities are finite , we will know that they converge .we will then have to prove that the limits of and are the same .* step 3.1 : first control on the time oscillations * we first prove that are finite . for all , where and is defined in .consider .using the control of the space oscillations , we get that where recalling ( see lemma [ lem:1 ] ) that is a sub - solution and using the comparison principle on the time interval , we deduce that we now want to estimate from below .let us assume that the infimum in is reached for the index . then since .we then deduce that where we have used for the second line , the fact that is non - decreasing in for the third line , the periodicity of for the fourth line and for the last one . 
in the same way , we get that injecting this in , we get that and in the same way , we also get and taking , we finally get .* step 3.2 : refined control on the time oscillations * + we now estimate in order to prove that they have the same limit .[ lem : lambda+-lambda- ] for all , where . by definition of ,for all , there exists and such that consider .we choose such that and we set and using , we get that using the comparison principle , we then deduce that we now want to estimate from above .let us assume that the maximum in is reached for the index .we then have for all where we have used for the first line , the fact that is non - decreasing in for the second line , the periodicity of for the third line and for the last one . in the same way , we also get injecting this in , we get and taking and using ( with and ) and ( with and ) , we get in the same way , we get using also , and the fact that and are non - decreasing in , we finally get the comparison of and makes appear the additional constant , and the comparison between and ( and similarly between and ) creates an additional constant . indeed , we have this explains the value of the new constant .this implies that since this is true for all , the proof of the lemma is complete . *step 3.3 : conclusion * + we now can conclude that are equal .if denotes the common limit , we also have , by lemma [ lem : ergo ] , that for every , moreover , by lemma [ lem : lambda+-lambda- ] , we have and so we finally deduce ( using a similar argument for ) that combining this estimate and , we get with and this finally implies with . in this subsection , we construct hull functions for a general hamiltonian . as we shall see , this is a straightforward consequence of the construction of time - space periodic solutions of ; see proposition [ pro:122 ] and corollary [ pro:122bis ] below .we will then prove that the time slope obtained in proposition [ pro:11 ] is unique and that the map is continuous ; see proposition [ pro:129 ] below .given , we consider the equation in {j , m},\xi_j,\inf_{y'\in{{\mathbb r}}}\left(\xi_j(\tau , y')-py'\right)+py-\xi_j(\tau , y),(\xi_j)_y)\\ \end{array}\right . \\\\ \left\{\begin{array}{l } u_{j+n}(\tau , y)=u_j(\tau , y+1)\\ \xi_{j+n}(\tau , y)=\xi_j(\tau , y+1 ) \ , , \end{array}\right . \end{array}\right.\ ] ] where is given in for .then we have the following result [ pro:122]*(existence of time - space periodic solutions of ) * + let , and .assume ( a1)-(a6 ) .then there exist functions solving on and a real number satisfying for all , moreover satisfies for eventually , when the hamiltonians are independent on , we can choose and independent on . by considering for all and for all , immediately get the following corollary [ pro:122bis ] * ( existence of hull functions ) * + assume ( a1)-(a6 ) .there exists a hull function in the sense of definition [ defi:1n ] satisfying and we now turn to the proof of proposition [ pro:122 ] .the proof is performed in three steps . in the first one ,we construct sub- and super - solutions of in with good translation invariance properties ( see the first two lines of ) .we next apply perron s method in order to get a ( possibly discontinuous ) solution satisfying the same properties .finally , in step 3 , we prove that if the functions do not depend on , then we can construct a solution in such a way that it does not depend on either . 
* step 1 : global sub- and super - solution * by proposition [ pro:11 ] , we know that the solution of , with initial data satisfies on we first construct a sub - solution and a super - solution of for ( and not only ) that also satisfy the first two lines of , _ i.e. _ satisfy for all , to do so , we consider for two sequences of functions ( indexed by , ) and consider we first remark that thanks to , all these semi - limits are finite. we also remark that for all , is a sub - solution of .a similar remark can be done for the super - solutions . now a way to construct sub - solution ( resp .a super - solution ) of satisfying is to consider and notice that , and satisfy moreover on .therefore we have in particular * step 2 : existence by perron s method * applying perron s method we see that the lowest- super - solution lying above is a ( possibly discontinuous ) solution of on and satisfies we next prove that satisfies . for ,let us consider by construction the family is a super - solution of and is again above the sub - solution .therefore from the definition of , we deduce that which implies that and satisfy , _i.e _ the first two equalities of .similarly , we can consider , for which is again super - solution above the sub - solution .therefore which implies that and are non - decreasing in , _i.e. _ the third line of is satisfied .let us now prove that and are non - decreasing in .we consider , for the fact that this is a super - solution uses assuption ( a6 ) .indeed , let us assume that the infimum for is reached for the index and that the infimum for is reached for the index .then , formally , on one hand we have where we have used the fact that . on the other hand , we have {j+k_\xi}(y ) , \xi_{j+k_\xi}^\infty(\tau , y ) , \inf_{y'}(\xi_{j+k_\xi}^\infty(\tau , y')-py')+py -\xi_{j+k_\xi}^\infty(\tau , y ) , ( \xi_{j+k_\xi}^\infty)_y)\\ \ge & g_{j+k_\xi}(\tau , [ \check u^\infty(\tau,\cdot)]_{j+k_\xi}(y ) , \check \xi_{j}^\infty(\tau , y ) , \inf_{y'}(\check \xi_{j}^\infty(\tau , y')-py')+py -\check \xi_{j}^\infty(\tau , y ) , ( \check \xi_{j}^\infty)_y)\\ \ge & g_{j+k_\xi-1}(\tau , [ \check u^\infty(\tau,\cdot)]_{j+k_\xi-1}(y ) , \check \xi_{j}^\infty(\tau , y ) , \inf_{y'}(\check \xi_{j}^\infty(\tau , y')-py')+py -\check \xi_{j}^\infty(\tau , y ) , ( \check \xi_{j}^\infty)_y)\\ \ge & \dots\\ \ge & g_{j}(\tau , [ \check u^\infty(\tau,\cdot)]_{j}(y ) , \check \xi_{j}^\infty(\tau , y ) , \inf_{y'}(\check \xi_{j}^\infty(\tau , y')-py')+py -\check\xi_{j}^\infty(\tau , y ) , ( \check \xi_{j}^\infty)_y)\end{aligned}\ ] ] where we have used the fact that and joint to the monotonicity assumption of in the variable and for the first inequality and assumtion ( a6 ) joint to the fact that is non - decreasing in ( by construction ) for the other inequalities .we then conclude that is again super - solution above the sub - solution .therefore which implies that and are non - decreasing in , _i.e. 
_ the forth line of is satisfied .finally , the function still satisfies and also satisfies .* step 3 : further properties when the are independent on * when the do not depend on , we can apply steps 1 and 2 with in , and replaced with .this implies that the hull function does not depend on .this ends the proof of the proposition .[ pro:129 ] consider and assume ( a1)-(a6 ) .then * there exists a unique real number such that there exists a solution of on such that there exists such that for all , and the are defined in and ; moreover , we can choose with given in ; * if is seen as a function of ( ) , then this function is continuous .before to prove this proposition , let us give the proof of theorem [ th:2 ]. just apply proposition [ pro:129 ] with .the proof follows classical arguments .however , we give it for the reader s convenience . the proof is divided in two steps .* step 1 : uniqueness of * given some , assume that there exist with their corresponding hull functions .then define for , which are both solutions of equation ( [ eq:22n ] ) on . by corollary [ pro:122bis] , we know that and satisfy. then we have with which implies ( from the comparison principle ) for all using the fact that and , we deduce that for and we have which implies by ( [ eq:125bis ] ) because this is true for any , we deduce that the reverse inequality is obtained exchanging and .we finally deduce that , which proves the uniqueness of the real , that we call .* step 2 : continuity of the map * let us consider a sequence such that .let and be the corresponding hull functions . from corollary [ pro:122bis ] , we can choose these hull functions such that for where we recall that is defined in ( [ eq : c4 ] ) . remark that both and depends on , but can be bounded for in a neighbourhood of .we deduce in particular that there exists a constant such that let us consider a limit of , and let us define this family of functions is such that the family is a sub - solution of on . on the other hand , if denotes the hull function associated with and , then is a solution of ( [ eq:121 ] ) on . finally , as in step 1 , we conclude that similarly , considering we can show that therefore and this proves that ; the continuity of the map follows and this ends the proof of the proposition .when proving the convergence theorem [ th:3n ] , we explained that , on the one hand , it is necessary to deal with hull functions that are uniformly continuous in ( uniformly in and ) in order to apply evans perturbed test function method ; on the other hand , given some , we also know some hamiltonian , with effective hamiltonian , such that every corresponding hull function is necessarily discontinuous in ( see ) . recall that a hull function solves in particular {j , m})+ \alpha_0 ( h_j -g_j)\\ \end{array}\right.\ ] ] with and we overcome this difficulty as in ( see also ) .we build approximate hamiltonians with corresponding effective hamiltonians , and corresponding hull functions , such that we will show that it is enough to choose for with ( in fact , we will consider ) .we have the following variant of corollary [ pro:122bis ] . [ pro:135 ]+ assume ( a1)-(a3 ) . given , and ,then there exists a family of lipschitz continuous functions satisfying for and there exists such that {j , m } ) + \alpha_0 ( h_j -g_j ) \\ & + \delta p \left\{a_0 + \inf_{z'\in{{\mathbb r}}}\left(h_j(\tau , z')-z ' ) + z - h_j(\tau , z)\right)\right\}(h_j)_z \\ \end{array}\right . 
\\ \\ \left\{\begin{array}{rl } h_{j+n}(\tau , z)=&h_j(\tau , z+p ) \\g_{j+n}(\tau , z ) = & g_j ( \tau , z+p ) \end{array}\right . \end{array}\right.\ ] ] and for all moreover there exists a constant defined in such that , moreover , when the do not depend on , we can choose the hull function such that it does not depend on either .the construction follows the one made in proposition [ pro:11 ] and proposition [ pro:122 ] .however , proposition [ pro:122 ] has to be adapted . indeed ,since we want to construct a lipschitz continuous function with a precise lipschitz estimate , we do not want to use perron s method .this is the reason why here we can use a space - time lipschitz estimate of to get enough compacity to pass to the limit .the space lipschitz estimate comes from proposition [ pro:130 ] .the time lipschitz estimate of the s follows from lemma [ lem : u - xi ] and the equation satisfied by .the time lipschitz estimate of the s is obtained in the same way , using the fact that we can bound the right hand side of the equation satisfied by .indeed , one can use the space oscillation estimate of to bound {j , m}(x)) ] with given by as a function of the time integral of .since we attempt to get , we will look for functions which are periodic of period .the basic idea is to use a fixed point argument .first , we `` regularize '' the right hand side of by considering for some given {j , m}(y ) ) ) + \delta \left(1+t_k^1(\inf_{y'}(v(\tau , y'))-v(\tau , y))\right)(t_k^3(v_y+p))\ ] ] where are truncature functions . in particular , uniformly in and so for all , there exists a solution \times [ 0,\frac 1p)) ] .standard parabolic estimates show that \times [ 0,\frac 1p))}\\ \le & c|\f_{k , j}(\tau , v_1)-\f_{k , j}(\tau , v_2)|_{l^q([0,t]\times [ 0,\frac 1p))}\\ \le & c \left(|v_2-v_1|_{l^q([0,t]\times [ 0,\frac 1p ) ) } + |\inf(v_2)-v_2-(\inf(v_1)-v_1)|_{l^q([0,t]\times [ 0,\frac 1p))}+|(v_2-v_1)_y|_{l^q([0,t]\times [ 0,\frac 1p))}\right)\\ \le & c t^\beta|v_2-v_1| _ { w^{2,1;q}([0,t]\times [ 0,\frac 1p))}\end{aligned}\ ] ] for some ( see ) .while we have smooth solutions below the truncature , we can apply the arguments of subsection [ sec : appb ] and get estimates on the gradient of the solution which ensures that the solution is indeed below the truncature .finally , a posteriori , the truncature can be completely removed because of our estimate on the gradient of the solution .
|
We consider systems of ODEs that describe the dynamics of particles. Each particle obeys a Newton law (including a damping term and an acceleration term) in which the force is created by interactions with the other particles and with a periodic potential. The presence of the damping term allows the system to be monotone. Our study takes into account the fact that the particles can be of different types. After a proper hyperbolic rescaling, we show that solutions of these systems of ODEs converge to solutions of macroscopic homogenized Hamilton-Jacobi equations.

*AMS classification:* 35B27, 35F20, 45K05, 47G20, 49L25, 35B10.

*Keywords:* particle system, periodic homogenization, Frenkel-Kontorova models, Hamilton-Jacobi equations, hull function
|
Blind source separation (BSS) is a major area of research in signal and image processing. It aims at recovering source signals from their mixtures without detailed knowledge of the mixing process. Applications of BSS include the analysis and processing of speech, image, and biomedical signals, in particular signal extraction, enhancement, denoising, model reduction and classification problems. The goal of this paper is to study new BSS methods for nearly degenerate data arising from nuclear magnetic resonance (NMR) spectroscopy.

The BSS problem is defined by the matrix model $X = AS$, where $X$ is the mixture matrix, $A$ the mixing matrix and $S$ the source matrix. Rows of $X$ represent the measured mixed signals, and rows of $S$ are the source signals. They are sampled functions of an acquisition variable which may be time, frequency, position, wavenumber, etc., depending on the underlying physical process; hence each row contains the samples of one measurement. The objective of BSS is to solve for $A$ and $S$ given $X$. In the context of NMR spectroscopy, the mixing coefficients are typically not measured; this is where BSS techniques become useful. The problem is also known as nonnegative matrix factorization (NMF). Similar to factorizing a composite number, there are permutation and scaling ambiguities in the solutions of BSS: for any permutation matrix $P$ and invertible diagonal matrix $D$, the pair $(AP^{T}D^{-1}, DPS)$ is equivalent to the solution $(A,S)$, since $(AP^{T}D^{-1})(DPS) = AS = X$. Various BSS methods have been proposed relying on a priori knowledge of the source signals, such as spatio-temporal decorrelation, statistical independence, sparseness, nonnegativity, etc. Recently there has been considerable interest in solving nonnegative BSS problems, which emerge in computed tomography, biomedical image processing, and NMR spectroscopy.

This work originates from analytical chemistry, in particular NMR spectroscopy. Applications include the identification of organic compounds, metabolic fingerprinting, disease diagnosis, and drug design, as chemical mixtures abound in human organs, for example blood, urine, and the metabolites in brain and muscles. Each compound has a unique spectral fingerprint defined by the number, intensity and locations of its NMR peaks. In drug design, structural information must be isolated from spectra that also contain the target molecule, side products, and impurities. The different spectra come from Fourier transforms of NMR measurements of the absorbance of radio-frequency radiation by receptive nuclear spins of the same mixture sample, taken at different time segments while exposed to high magnetic fields. The NMR spectra are nonnegative. Moreover, NMR spectra of different chemical compounds are usually not independent; in particular, when compounds (component molecules) have similar functional groups, their peaks overlap in the composite NMR spectra, making it difficult to identify the compounds involved. ICA-type approaches recover independent source signals and are thus unable to separate NMR source spectra, so new methods need to be invented to handle this class of data. Recently, nonnegative BSS has attracted considerable attention in NMR spectroscopy. For example, Naanaa and Nuzillard (NN) proposed a nonnegative BSS method based on a strict local sparseness assumption of the source signals. The NN assumption (NNA) requires the source signals to be strictly non-overlapping at some locations of the acquisition variable (e.g., frequency).
In other words, each source signal must have a stand-alone peak at a location where all other sources are strictly zero. Such a strict sparseness condition leads to a dramatic mathematical simplification of the general nonnegative matrix factorization problem ([lmm]), which is non-convex. Geometrically speaking, the problem of finding the mixing matrix reduces to the identification of a minimal cone containing the columns of the mixture matrix, and the latter can be achieved by linear programming. In fact, NN's sparseness assumption and the geometric construction of the columns of the mixing matrix were already known in the 1990s in the problem of blind hyperspectral unmixing, where the same mathematical model ([lmm]) is used. The analogue of NN's assumption is called the pixel purity assumption, and the resulting geometric (cone) method is the so-called N-FINDR, now a benchmark in hyperspectral unmixing. NN's method can be viewed as an application of N-FINDR to NMR data. Measured NMR data, however, may not strictly satisfy NN's sparseness conditions, which introduces spurious peaks in the results. Postprocessing methods have been developed to address the resulting errors. Such a study has been performed recently in the case of (over-)determined mixtures, where it was found that larger peaks in the signals are more reliable and can be used to minimize errors due to the lack of strict sparseness.

In this paper, we consider how to separate the data when the NN assumption is not satisfied. We are concerned with the regime where the source signals do not have stand-alone peaks, yet one source signal dominates the others over certain intervals of the acquisition variable; in other words, a dominant interval(s) condition (DI) is required of the source signals. This is a reasonable condition for many NMR spectra. For example, the DI condition holds well in the NMR data which motivated us. The data are produced by the so-called DOSY (diffusion ordered spectroscopy) experiment, in which a physical sample of mixed chemical compounds in a solvent (water) is prepared. DOSY tries to distinguish the chemicals based on the variation in their diffusion rates; however, it fails to separate them if the compounds have similar chemical functional groups (i.e., they have similar diffusion rates). In this application, the diffusion rates of the chemicals serve as the mixing coefficients, which presents an additional mathematical challenge due to the near singularity of the mixing matrix. Separating such degenerate data is intractable for the convex cone methods, and we are thus prompted to develop new approaches.

Examination of the DI condition reveals a great deal about the geometry of the mixtures. In fact, the scatter plot of the columns of $X$ must contain several clusters of points, and these clusters are centered at the columns of $A$. Hence, the problem of finding $A$ boils down to the identification of the clusters, which can be accomplished by data clustering, for example k-means. Although data clustering in general produces a fairly good estimate of the mixing matrix, its output deviates from the true solution due to the presence of noise, the initial guess of the clustering algorithm, and so on. In the case of a nearly singular mixing matrix, a small perturbation can lead to large errors in the source recovery (e.g., spurious peaks).
to overcome this difficulty and improve robustness of the separation , we propose two different methods .one is to find a better estimation of mixing matrix by allowing a constrained perturbation to the clustering output , and it is achieved by a quadratic programming . the intention is to move the estimation closer to the true solution . the other is to seek sparse source signals by exploiting the di condition .an optimization problem is formulated for recovering the source signals .the paper is outlined as follows ; in section 2 , we shall review the essentials of nn approach , then we propose a new condition on the source signals motivated by nmr spectroscopy data . in section 3 ,we introduce the method . in section 4 ,we further illustrate our method with numerical examples including the processing of an experimental dosy nmr data set .section 5 is the conclusion .we shall use the following notations throughout the paper .the notation stands for the -th column of matrix , for the -th column of matrix , the -th column of matrix . while and are the -th rows of matrix and , or the -th source and mixture , respectively .this work was partially supported by nsf - adt grant dms-0911277 and nsf grant dms-0712881 .the authors thank professor a.j .shaka and dr .hasan celik for helpful discussions and their experimental nmr data .in the paper , we shall consider the determined case ( ) , although the results can be easily extended to the over - determined case ( ) .consider the linear model ( [ lmm ] ) where each column in represents data collected at a particular value of the acquisition variable , and each row represents a mixture spectrum . in this section , we shall first discuss the briefs of nn method , then introduce the new source conditions and the method .in , naanaa and nuzillard ( nn ) presented an efficient sparse bss method and its mathematical analysis for nonnegative and partially orthogonal signals such as nmr spectra .consider the ( over)-determined regime where the number of mixtures is no less than that of sources ( ) , and the mixing matrix is full rank . in simple terms ,nn s key sparseness assumption ( referred to as nna below ) on source signals is that each source has a stand - alone peak at some location of the acquisition variable where the other sources are identically zero .more precisely , the source matrix is assumed to satisfy the following condition : for each there exists an such that and eq .( [ lmm ] ) can be rewritten in terms of columns as where denote the column of , and the column of .assumption nna implies that or .( [ lincomb ] ) is rewritten as which says that every column of is a nonnegative linear combination of the columns of .here $ ] is the submatrix of consisting of columns each of which is collinear to a particular column of .it should be noted that are not known and have to be computed .once all the are found , an estimation of the mixing matrix is obtained .the identification of columns is equivalent to identifying a convex cone of a finite collection of vectors .the cone encloses the data columns in matrix , and is the smallest of such cones .such a minimal enclosing convex cone can be found by linear programming methods .mathematically , the following constrained equations are formulated for the identification of , then any column will be a column of if and only if the constrained equation ( [ lpnf ] ) is inconsistent . 
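The cone membership test behind ([lpnf]) can be emulated with a nonnegative least-squares fit: a column of the data matrix that is poorly represented as a nonnegative combination of the remaining columns gets a large residual and is retained as an estimated column of the mixing matrix (up to scaling); the same residual serves as a natural score when noise is present, as discussed next. This is only a sketch of the idea rather than Naanaa-Nuzillard's exact linear program, and on real data one would first prune near-duplicate and low-intensity columns.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_mixing_nn(X, n_sources):
    """Score each column of X by how poorly it is explained as a nonnegative
    combination of the other columns; keep the n_sources highest-scoring columns."""
    m, p = X.shape
    scores = np.zeros(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        _, resid = nnls(others, X[:, j])        # min ||others @ c - X[:, j]||, c >= 0
        scores[j] = resid
    cols = np.argsort(scores)[-n_sources:]      # columns farthest from the cone of the rest
    return X[:, cols], scores

# Usage on synthetic data satisfying the stand-alone-peak (NNA) assumption:
rng = np.random.default_rng(2)
A = rng.random((3, 3)) + 0.1
S = rng.random((3, 60))
S[:, :3] = 5.0 * np.eye(3)                      # give each source one stand-alone peak
A_hat, scores = estimate_mixing_nn(A @ S, n_sources=3)
```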
however ,if noises are present , the following optimization problems are suggested to estimate the mixing matrix a score is associated with each column .a column with a low score is unlikely to be a column of because this column is roughly a nonnegative linear combination of the other columns of . on the other hand, a high score means that the corresponding column is far from being a nonnegative linear combination of other columns .practically , the columns from with highest scores are selected to form , the mixing matrix .the moore - penrose inverse of is then computed and an estimate to is obtained : .nn method proves to be both accurate and efficient if nna condition holds .however , if the condition is not satisfied , errors and artifacts may be introduced because the true mixing matrix is no longer the smallest enclosing convex cone of columns of the data matrix .recently , the authors have developed postprocessing techniques on how to improve nn results with abundance of mixture data , and how to improve mixing matrix estimation with major peak based corrections .the work in actually considered a relaxed nna ( rnna ) condition : for each there exists an such that and where .simply said , each source signal has a dominant peak at acquisition position where the other sources are allowed to be nonzero .nna condition recovers if all .the rnna is more realistic and robust than the ideal nna for real - world nmr data . motivated by the dosy nmr spectra, we propose here a different relaxed nn condition on the source signals .note that the rows of are the source signals , and they are required to satisfy the following condition : for , source signal is required to have dominant interval(s ) over , while is allowed to overlap with other signals at the rest of the acquisition region .more formally , it implies that source matrix satisfies the following condition for each , there is a set such that for each .we shall call this dominant interval condition , or di condition . fig .[ sourcecondition ] is an idealized example of three di source signals .in addition to the di source condition , the mixing matrix is required to be near singular .the motivation is the similar diffusion rates of the chemicals with similar structure .this poses a mathematical challenge to invert a near singular matrix , since a small error in the recovered mixing matrix might lead to a considerable deviation in the source recovery . among the singularly mixed signals ( or degenerate data ) ,in this paper we shall consider the following two types : 1 ) columns of the mixing matrix are parallel ; 2 ) one column of the mixing matrix is a nonnegative linear combination of others .case 1 is motivated by nmr of the chemicals with similar diffusion rate . we shall call this condition parallel column condition , or pcc .case 2 can also be encountered in nmr spectroscopy of chemicals , and we shall call it one column degenerate condition , or ocdc . 
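Under the DI condition, the scaled columns of $X$ concentrate into clusters centered at (scaled) columns of $A$, so the mixing matrix can be estimated by a clustering step such as k-means before any refinement, as detailed in the next subsection. The sketch below is one possible implementation; the intensity threshold, the column normalization and the final scaling convention are assumptions, not prescribed by the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_mixing_by_clustering(X, n_sources, intensity_cut=1e-3, seed=0):
    """Cluster the directions of the columns of X; under the DI condition the
    cluster centers approximate the (normalized) columns of the mixing matrix."""
    norms = np.linalg.norm(X, axis=0)
    keep = norms > intensity_cut * norms.max()        # drop near-zero columns (assumption)
    dirs = X[:, keep] / norms[keep]                   # project kept columns onto the unit sphere
    km = KMeans(n_clusters=n_sources, n_init=10, random_state=seed).fit(dirs.T)
    A_hat = km.cluster_centers_.T                     # one center per estimated column of A
    return A_hat / A_hat.sum(axis=0)                  # fix the scaling ambiguity (one convention)

# The raw source estimate, e.g. S_hat = np.linalg.pinv(A_hat) @ X, is then refined as
# described below, since a nearly singular A_hat turns small estimation errors into
# spurious peaks in the recovered spectra.
```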
please note that both pcc and ocdc should be considered to hold approximately in real - world data .now suppose we have a set of nearly degenerate signals from di sources .we require that compared to the size of dominant interval(s ) in the acquisition region , the source signals overlapping region is much smaller .in fact , this is a reasonable assumption for the nmr data which motivates us .more importantly , this requirement enables the success of the clustering method .next , we shall estimate the columns of mixing matrix by data clustering .the dominant interval(s ) from each of the source signals implies that there is a region where the source dominates others .more precisely , there are columns of such that where dominate , i.e. , .the identification of is equivalent to finding a cluster formed by these s in . as illustrated in the geometry plot of in fig .[ sourcecondition ] , three clusters are formed .many clustering techniques are available for locating these clusters , for example , k - means is one of the simplest unsupervised learning algorithms that solve the well known clustering problem .we shall use k - means analysis in this paper because it is computationally fast , and easy to implement .consider an example of three di source signals with ocdc mixing matrix condition , the three centers are shown in fig .[ sourcecondition ] . for real - world data, we show an example of nmr spectra of quinine , geraniol , and camphor mixture in fig .[ real_data ] .the clusters in the middle implies that ocdc condition hold well for this data . apparently , nn method ( and other convex cone methods ) would fail to separate the source signals due to the degeneracy of the mixing matrix. it might be able to identify two columns of as the two edges , it by no means can locate s degenerate column .for the pcc degenerate case , clustering is also able to deliver a good estimation , even when the data is contaminated by noise .we show the results in fig .[ real_data_pcc ] where the three clusters are very close due to the pcc degeneracy .nn solution would deviate considerably from the true solution .for the data we tested , clustering techniques like k - means works well when the condition number of the mixing matrix is up to .though the solutions of mixing matrix by clustering methods are rather good estimation to the true solution , small deviations from the true ones will introduce large errors in the source recovery ( ) .next we propose two approaches to overcome this difficulty .both approaches need to solve optimization problems .the first one intends to improve the source recovery by seeking a better mixing matrix , while the second approach reduces the spurious peaks by imposing sparsity constraint on the sources .are identified as the three center points ( in red diamond ) attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right ) .nn method identifies two columns of as the points in the blue circle.,title="fig:",width=302,height=283 ] are identified as the three center points ( in red diamond ) attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right ) .nn method identifies two columns of as the points in the blue circle.,title="fig:",width=302,height=283 ] are identified as the three center points ( in red diamond ) attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right).,title="fig:",width=302,height=283 ] are identified as the three center 
points ( in red diamond ) attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right).,title="fig:",width=302,height=283 ] suppose the estimation of the mixing matrix by clustering is . then the source recovery can be obtained . as discussed above , errors in could be introduced even by a small perturbation in to the ground truth .negative spurious peaks are produced in most cases , see the fig .[ eg1_rec ] where the negative peaks on the left plot actually can be viewed as bleed through from another source .clearly , a better estimation of mixing matrix is required to reduce these spurious peaks . instead of looking for a better mixing matrix, we propose to solve the following optimization problem for a better inverse of the matrix , where is the identity matrix .the constraint is used to reduce the negative values introduced in the source recovery .( [ inverse ] ) is a linearly constrained quadratic program and it can be solved by a variety of methods including interior point , gradient projection , active sets , etc . in this paper ,interior point algorithm is used .once the minimizer is obtained , we solve for the sources by .the method proposed above works well for mixing matrix whose condition number is up to .if the mixing matrix is much more ill - conditioned , the problem ( [ lmm ] ) becomes under - determined .it appears that solving the equation exactly for is hopeless even an accurate is provided . however , a meaningful solution is possible if the actual source signals are structurally compressible , meaning that they essentially depend on a low number of degrees of freedom .although the source signals ( rows of ) are not sparse , the columns of possess sparsity due to the dominant intervals condition . hence , we seek the sparsest solution for each column of as here ( 0-norm ) represents the number of nonzeros . because of the non - convexity of the 0-norm , we minimize the -norm : which is a linear program because is non - negative . the fact that data may in general contain noise suggest solving the following unconstrained optimization problem , for which bregman iterative method with a proper projection onto non - negative convex subset can be used to obtain a solution . under certain conditions of matrix , it is known that solution of -minimization ( [ loneu ] ) gives the exact recovery of sufficiently sparse signal , or solution to ( [ lzero ] ) , .though our numerical results support the equivalence of and minimizations , the mixing matrix does not satisfy the existing sufficient conditions .in this section , we report the numerical examples solved by the method . we compute three examples . the data of the first two examples are synthetic , while the third example uses real nmr data .in the first example , two sources are to be separated from two mixtures .the mixtures are constructed from two real nmr source signal by simulating the linear model ( [ lmm ] ) .the two columns of mixing matrix are nearly parallel , and its condition number is about .the true mixing matrix , its estimation via clustering , and the improved estimate by solving ( [ inverse ] ) are ( for ease of comparison , the first rows of are scaled to be same as that of ) clearly is a better estimate .the mixtures are plotted in fig .[ eg1_mix ] , and the results are presented in fig .[ eg1_rec ] . in the second example , three sources are to be separated from three mixtures .the mixing matrix satisfies the ocdc condition , i.e. 
, one of its columns is a nonnegative linear combination of the other two . to test the robustness of the method , we added gaussian noise ( snr = 60 db ) to the data .the mixtures and their geometric structure are plotted in fig .[ eg2_mix ] .first the data clustering was used to obtain an estimation of the mixing matrix , then an optimization problem is solved to retrieve the sources .the results are shown in fig .[ eg2_res ] .it can be seen that the recovered sources agree well with the ground truth . for the third example, we provide a set of real data to test our method .the data is produced by diffusion ordered spectroscopy ( dosy ) which is an nmr spectroscopy technique used by chemists for mixture separation .however , the three compounds used in the experiment ( quinine , geraniol , and camphor ) have similar chemical functional groups ( i.e. there is overlapping in their nmr spectra ) , for which dosy fails to separate them .it is known that each of the three sources has dominant interval(s ) over others in its nmr spectrum .this can also be verified from the three isolated clusters formed in their mixed nmr spectra ( see the geometry of their mixtures in fig . [ real_data_3 ] ) . herewe separate three sources from three mixtures .[ real_data_3 ] plots the mixtures ( rows of ) and their geometry ( columns of ) where three clusters of points can be spotted .then the columns of are identified as the center points of three clusters .the solutions are presented in fig .[ real_result ] , the results are satisfactory comparing with the ground truth . as a comparison , the source signals recovered by nn is shown in fig .[ real_result_nn ] where , here the inverse is moore - penrose ( the least squares sense ) pseudo - inverse which produces some negative ( erroneous ) peaks in .minimization ( right column).,title="fig:",width=302,height=283 ] minimization ( right column).,title="fig:",width=302,height=283 ] are identified as the three center points in blue circles attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right).,title="fig:",width=283,height=283 ] are identified as the three center points in blue circles attracting most points in scatter plots of the columns of x ( left ) , and the three rows of x ( right).,title="fig:",width=302,height=283 ] ( left ) and the ground truth ( right).,title="fig:",width=302,height=264 ] ( left ) and the ground truth ( right).,title="fig:",width=302,height=264 ]this paper presented novel methods to retrieve source signals from the nearly degenerate mixtures .the motivation comes from nmr spectroscopy of chemical compounds with similar diffusion rates . inspired by the nmr structure of these chemicals , we propose a viable source condition which requires dominant interval(s ) from each source signal over the others .this condition is well suited for many real - life signals . 
besides, the nearly degenerate mixtures are assumed to be generated from the following two types of mixing processes : 1 ) all the columns of the mixing matrix are parallel ; 2 ) one column of the mixing matrix is the nonnegative linear combination of others .we first use data clustering to identify the mixing matrix , then we develop two approaches to improve source signals recovery .the first approach minimizes a constrained quadratic program for a better mixing matrix , while the second method seeks the sparsest solution for each column of the source matrix by solving an optimization .numerical results on nmr spectra data show satisfactory performance of our method and offer promise towards understanding and detecting complex chemical spectra .though the methods are motivated by the nmr spectroscopy , the underlying ideas may be generalized to different data sets in other applications .9999999 r. barton , j. nicholson , p. elliot , and e. holmes , _ high - throughput 1h nmr - based metabolic analysis of human serum and urine for large - scale epidemiological studies : validation study _ , int .j. epidemiol .37(2008)(suppl 1)pp .i31i40 .s. choi , a. cichocki , h. park , and s. lee , _ blind source separation and independent component analysis : a review _ , neural inform . process ., 6 ( 2005 ) , pp . 157 .a. cichocki and s. amari , _ adaptive blind signal and image processing : learning algorithms and applications _ ,john wiley and sons , new york , 2005 .p. comon , _ independent component analysis a new concept ?_ , signal processing , 36 ( 1994 ) pp . 287314 .i. koprivaa , i. jeri , and v. smreki , _ extraction of multiple pure component 1h and 13c nmr spectra from two mixtures : novel solution obtained by sparse component analysis - based blind decomposition _ , analytica chimica acta , 653 ( 2009 ) pp .d. d. lee and h. s. seung , _ learning of the parts of objects by non - negative matrix factorization _ ,nature , 401 ( 1999 ) pp . 788791 .j. liu , j. xin , y - y qi , _ a dynamic algorithm for blind separation of convolutive sound mixtures _ ,neurocomputing , 72(2008 ) , pp 521 - 532 .j. liu , j. xin , y - y qi , _ a soft - constrained dynamic iterative method of blind source separation _ , siam j. multiscale modeling simulations , vol . 7 , no .4 , pp 1795 - 1810 , 2009 .j. liu , j. xin , y - y qi , f - g zeng , _ a time domain algorithm for blind separation of convolutive sound mixtures and l-1 constrained minimization of cross correlations _ , comm .math sci , vol . 7 , no . 1 , 2009 , pp 109 - 128 . s. moussaouia , h. hauksdttir , f. schmidt , c. jutten , j. chanussot , d. briee , s. dout , and j.a .benediktsson , _ on the decomposition of mars hyperspectral data by ica and bayesian positive source separation _ , neurocomputing , 71(2008 ) , pp 21942208 .d. nuzillard , s. bourgb and j.m .nuzillard , _ model - free analysis of mixtures by nmr using blind source separation _ , j. magn .reson . 133( 1998 ) pp .. m. plumbley , _conditions for non - negative independent component analysis _ , ieee signal processing letters , 9 ( 2002 ) pp. 177180 .y. sun and j. xin,_unique solvability of under - determined sparse blind source separation of nonnegative and partially overlapped data _ , iasted international conference on signal and image processing , 710 - 017 , august 2325 , 2010 , hawaii , usa . y. sun and j. xin , _ nonnegative sparse blind source separation for nmr spectroscopy by data clustering , model reduction , and minimization _ , preprint .w. wu , m. daszykowski , b. 
walczak, b.c. sweatman, s. connor, j. haselden, d. crowther, r. gill, m. lutz, _peak alignment of urine nmr spectra using fuzzy warping_, j. chem. inf. model., 46 (2006). w. yang, y. wang, q. zhou, and h. tang, _analysis of human urine metabolites using spe and nmr spectroscopy_, sci china ser b-chem, 51 (2008), pp. 218-225.
|
in this paper, we develop a novel blind source separation (bss) method for nonnegative and correlated data, particularly for nearly degenerate data. the motivation lies in nuclear magnetic resonance (nmr) spectroscopy, where multiple mixture nmr spectra are recorded in order to identify chemical compounds with similar structures (degeneracy). there have been a number of successful approaches to bss that exploit the nature of the source signals. for instance, independent component analysis (ica) is used to separate statistically independent (orthogonal) source signals. however, signal orthogonality is not guaranteed in many real-world problems. the bss method developed here deals with nonorthogonal signals. the independence assumption is replaced by a condition which requires dominant interval(s) (di) from each source signal over the others. additionally, the mixing matrix is assumed to be nearly singular. the method first estimates the mixing matrix by exploiting the geometry of the data through clustering. due to the degeneracy of the data, a small deviation in this estimate may introduce errors (in most cases spurious peaks of negative values) in the output. to resolve this challenging problem and improve the robustness of the separation, two techniques are developed. the first finds a better estimate of the mixing matrix by allowing a constrained perturbation of the clustering output, which can be achieved by quadratic programming. the second seeks sparse source signals by exploiting the di condition and solves an optimization problem. we present numerical results on nmr data to show the performance and reliability of the method in applications arising in nmr spectroscopy.
|
the willmore energy of an immersed compact oriented surface with boundary is defined as where is the mean curvature vector on , the geodesic curvature on , and , the induced area and length metrics on , . the willmore energy of surfaces with or without boundary plays an important role in geometry , elastic membranes theory , strings theory , and image processing . among the many concrete optimization problems where the willmore functional appears ,let us mention for instance the modeling of biological membranes , the design of glasses , and the smoothing of meshed surfaces in computer graphics .the willmore energy is the subject of a long - standing research not only due to its relevance to some physical situations but also due to its fundamental property of being conformal invariant , which makes it an interesting substitute to the area functional in conformal geometry .critical points of with respect to interior variations are called willmore surfaces .they are solutions of the euler - lagrange equation whose expression is particularly simple when : , being the gauss curvature .it is known since blaschke and thomsen that stereographic projections of compact minimal surfaces in are always willmore surfaces in . however , pinkall exhibited in an infinite series of compact embedded willmore surfaces that are not stereographic projections of compact embedded minimal surfaces in .yet kusner conjectured that stereographic projections of lawson s -holed tori in should be global minimizers of among all genus surfaces .this conjecture is still open , except of course for the case where the round sphere is known to be the unique global minimizer .the existence of smooth surfaces that minimize the willmore energy spanning a given boundary and a conormal field has been proved by schtzle in . following the notations in , we consider a smooth embedded closed oriented curve together with a smooth unit normal field and we denote as and their possible orientations .we assume that there exist oriented extensions of , , that is , there are compact oriented surfaces with boundary and conormal vector field on .we also assume that there exists a bounded open set such that the set is not empty . the condition on energy ensures that is an embedding .it follows from , corollary 1.2 , that the willmore boundary problem associated with in has a solution , i.e. , there exists a compact , oriented , connected , smooth surface with , on , and there have been many contributions to the numerical simulation of willmore surfaces in space dimension . among them , hsu , kusner and sullivan have tested experimentally in the validity of kusner s conjecture : starting from a triangulated polyhedron in that is close to a lawson s surface of genus , they let it evolve by a discrete willmore flow using brakke s surface evolver and check that the solution obtained after convergence is -stable .recent updates that brakke brought to its program give now the possibility to test the flow with various discrete definitions of the mean curvature .mayer and simonett introduce a finite difference scheme to approximate axisymmetric solutions of the willmore flow .rusu and clarenz et al . use a finite elements approximation of the flow to compute the evolution of surfaces with or without boundary . in both works ,position and mean curvature vector are taken as independent variables , which is also the case of the contribution by verdera et al . 
, where a triangulated surface with a hole in it is restored using the following approach : by the coarea formula , the willmore energy ( actually a generalization to other curvature exponents ) is replaced with the energy of an implicit and smooth representation of the surface , and the mean curvature term is replaced by the divergence of an unknown field that aims to represent the normal field .droske and rumpf propose a finite element approach to the willmore flow but replace the standard flow equation by its level set formulation .the contribution of dziuk is twofold : it provides a finite element approximation to the willmore flow with or without boundary conditions that can handle as well embedded or immersed surfaces ( turning the surface problem into a quasi - planar problem ) , and a consistency result showing the convergence of both the discrete surface and the discrete willmore energy to the continuous surface and its energy when the approximated surface has enough regularity .bobenko and schrder use a difference strategy : they introduce a discrete notion of mean curvature for triangulated surfaces computed from the circles circumscribed to each triangle that shares with the continuous definition a few properties , in particular the invariance with respect to the full mbius group in .this discrete definition is vertex - based and a discrete flow can be derived .based also on several axiomatic constraints but using a finite elements framework , wardetzky et al . introduce an edge - based discrete willmore energy for triangulated surfaces .olischlger and rumpf introduce a two step time discretization of the willmore flow that extends to the willmore case , at least formally , the discrete time approximation of the mean curvature motion due to almgren , taylor , and wang , and luckhaus and sturzenhecker . the strategy consists in using the mean curvature flow to compute an approximation of the mean curvature and plug it in a time discrete approximation of the willmore flow .grzibovskis and heintz , and esedoglu et al . discuss how 4th order flows can be approximated by iterative convolution with suitable kernels and thresholding .while all the previous approaches yield approximations of critical points of the willmore energy , our motivation in this paper is to approximate global minimizers of the energy .this is an obviously nontrivial task due to the high nonlinearity and nonconvexity of the energy . yet , for the simpler area functional , sullivan has shown with a calibration argument that the task of finding minimal surfaces can be turned into a linear problem .even more , when a discrete solution is seeked among surfaces that are union of faces in a cubic grid partition of , he proved that the minimization of the linear program is equivalent to solving a minimum - cost circulation network flow problem , for which efficient codes have been developed by boykov and kolmogorov after ford and fulkerson .sullivan did not provide experiments in his paper but this was done recently by grady , with applications to the segmentation of medical images .the linear formulation that we propose here is based on two key ideas : the concept of surface continuation constraints that has been pioneered by sullivan and grady , and the representation of a triangular surface using pairs of triangles . 
with this representation and a suitable definition of discrete mean curvature ,we are able to turn into a linear formulation the task of minimizing discrete representations of any functional of the form among discrete immersed surfaces with boundary constraints : in the expression of , denotes the space variable , the normal vector field on and the mean curvature vector .the linear problem we obtain involves integer - valued unknowns and does not seem to admit any simple graph - based equivalent .we will therefore discuss whether classical strategies for linear optimization can be used .the paper is organized as follows : in section [ sec:1 ] we discuss both the chosen representation of surfaces and the definition of discrete mean curvature . in section [ sec:2 ]we present a first possible approach yielding a quadratic energy .we present in section [ sec:3 ] our linear formulation and discuss whether it can be tackled by classical linear optimization techniques .the equivalence shown by sullivan between finding minimal surfaces and solving a flow problem holds true for discrete surfaces defined as a connected set of cell faces in a cellular complex discrete representation of the space .we will consider here polyhedral surfaces defined as union of triangles with vertices in ( a finite subset of ) the cubic lattice where is the resolution scale .not all possible triangles are allowed but only those respecting a specified limit on the maximal edge length .we assume that each triangle , as well as each triangle edge , is represented twice , once for each orientation .we let denote the collection of oriented triangles , its cardinality , and the number of oriented triangle edges .the constrained boundary is given as a contiguous oriented set of triangle edges .the orientation of the boundary constrains the spanning surfaces since we will allow only spanning triangles whose orientation is compatible . in this framework, one can represent a triangular mesh as a binary indicator vector where means that the respective triangle is present in the mesh , that it is not .obviously , not all binary indicator vectors can be associated with a triangular surface since the corresponding triangles may not be contiguous . however , as discussed by grady and , in a slightly different setting , by sullivan , it is possible to write in a linear form the constraint that only binary vectors that correspond to surfaces spanning the given boundary are considered .we will see that using the same approach here turns the initial boundary value problem into a quadratic program .another formulation will be necessary to get a linear problem . to define the set of admissible indicator vectors , we first consider a relationship between oriented triangles and oriented edges which is called _ incidence _: a triangle is positive incident to an edge if the edge is one of its borders and the two agree in orientation .it is negative incident if the edge is one of its borders , but in the opposite orientation .otherwise it is not incident to the edge .for example , the triangle in figure [ fig : incidence ] is positive incident to the edge , negative incident to and and not incident to . 
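as a small illustration of this incidence convention (a sketch with our own encoding of oriented triangles and edges as vertex index tuples; the paper does not prescribe a particular data structure), the sign of the incidence can be computed as follows.

    def triangle_edges(tri):
        """the three boundary edges of an oriented triangle (a, b, c),
        taken with the orientation induced by the triangle."""
        a, b, c = tri
        return [(a, b), (b, c), (c, a)]

    def incidence(tri, edge):
        """+1 if `edge` is a boundary edge of `tri` with the same orientation,
        -1 if it appears with the opposite orientation, 0 otherwise."""
        if edge in triangle_edges(tri):
            return 1
        if (edge[1], edge[0]) in triangle_edges(tri):
            return -1
        return 0

    # example: the triangle (0, 1, 2) is positive incident to (0, 1),
    # negative incident to (1, 0) and (2, 1), and not incident to (3, 4).
    assert incidence((0, 1, 2), (0, 1)) == 1
    assert incidence((0, 1, 2), (1, 0)) == -1
    assert incidence((0, 1, 2), (3, 4)) == 0

the incidence matrix and the boundary indicator vector introduced next are assembled from exactly these signs.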
being defined as above the set of oriented triangles and their oriented edges , we introduce the matrix whose element gives account of the incidence between triangle and edge .more precisely the knowledge of which edges are present in the set of prescribed boundary segments is expressed as a vector with with these notations set up we can now describe the equation system defining that a vector encodes an oriented triangular mesh with the pre - specified oriented boundary .this system has one equation for each edge .if the edge is not contained in the given boundary , this equation expresses that , among all triangles indicated by that contain the edge , there are as many triangles with same orientation as the edge as triangles with opposite orientation .if the edge is contained in the boundary with coherent orientation , there must be one more positive incident triangle than negative incident .if it is contained with opposite orientation , there is one less positive than negative incident . altogether the constraint for edge can be expressed as the linear equation and the entire system as so far , we did not incorporate the conormal constraint .actually not all conormal constraints are possible , exactly like not all discrete curves can be spanned in our framework but only union of edges of dictionary triangles , i.e. the collection of triangles defined in the previous section that determine the possible surfaces . for the conormal constraint ,only the conormal vectors that are tangent to dictionary triangles sharing an edge with the boundary curve are allowed. then the conormal constraint can be easily plugged into our formulation by simply imposing the corresponding triangles to be part of the surface , see figure [ fig : tribound ] , and by defining accordingly a new boundary indicator vector . denoting as the collection of those additional triangles , the complete constraint reads we discuss in the next sectionhow discrete mean curvature can be evaluated in this framework .the various definitions of discrete mean curvature that have been proposed in the literature obviously depend on the chosen discrete representations of surfaces .presenting and discussing all possible definitions is out of the scope of the present paper .the important thing to know is that there is no fully consistent definition : the pointwise convergence of mean curvature can not be guaranteed in general but only in specific situations . among the many possible definitions, we will use the edge - based one proposed by polthier for it suits with our framework . recalling that , in the smooth case but also for generalized surfaces like varifolds , the first variation of the area can be written in terms of the mean curvature, the definition due to polthier of the mean curvature vector at an interior edge of a simplicial surface reads where is the edge - length , is the dihedral angle between the two triangles adjacent to , and is the angle bisecting unit normal vector , i.e. , the unit vector collinear to the half sum of the two unit vectors normal to the adjacent triangles ( see figure [ fig : local - triangle ] ) . 
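a minimal numerical sketch of this edge-based quantity is given below. since the displayed formula did not survive the extraction of this text, the constant factor used here (a factor 2 in front of the edge length) is our reading of polthier's definition and should be checked against the original reference; the geometric ingredients, the dihedral angle and the angle-bisecting unit normal, are exactly the ones described above.

    import numpy as np

    def edge_mean_curvature(p, q, a, b):
        """edge-based mean curvature vector for the interior edge (p, q) shared
        by the triangles (p, q, a) and (q, p, b), computed as
            H(e) = 2 * |e| * cos(theta_e / 2) * N_e,
        with theta_e the dihedral angle and N_e the angle-bisecting unit normal.
        the constant factor is an assumption and should be checked against
        polthier's paper."""
        p, q, a, b = (np.asarray(v, dtype=float) for v in (p, q, a, b))
        n1 = np.cross(q - p, a - p)                 # normal of triangle (p, q, a)
        n2 = np.cross(p - q, b - q)                 # normal of triangle (q, p, b)
        n1 /= np.linalg.norm(n1)
        n2 /= np.linalg.norm(n2)
        ne = n1 + n2                                # angle-bisecting direction
        ne /= np.linalg.norm(ne)
        # dihedral angle between the two triangle planes
        theta = np.pi - np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
        return 2.0 * np.linalg.norm(q - p) * np.cos(theta / 2.0) * ne

for two coplanar adjacent triangles the dihedral angle equals pi, so the vector vanishes, as expected for a flat edge.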
remark that this formula is a discrete counterpart of the continuous depending on the principal curvatures , which is used in many papers for simplicity as definition of mean curvature .when the correct continuous definition is used , the formulas above and hereafter should be adapted .the justification of formula by polthier is as follows : it is exactly the gradient at any point of the area of the two triangles and adjacent to , and this gradientdoes not depend on the exact position of .indeed , one can subdivide , in four triangles , having as a vertex and such that and .the area of each triangle is half the product of the opposite edge s length and the height .therefore , if is the positively oriented edge opposite to in the triangle and , the rotations in the planes of , by , the area gradients of , at are , , , .the sum is the total area gradient of at and equals , which coincides with the formula above . as discussed by wardetsky et al .using the galerkin theory of approximation , this discrete mean curvature is an integrated quantity : it scales as when each space dimension is rescaled by .a pointwise discrete mean curvature rescaling as is given by ( see ) where denotes the total area of the two triangles adjacent to .the factor comes from the fact that , when the mean curvatures are summed up over all edges , the area of each triangle is counted three times , once for each edge .then a discrete counterpart of the energy is given by in particular , the edge - based total squared mean curvature is we are aiming at casting the optimization problem in a form that can be handled by standard linear optimization software . having in mind the framework described above where a discrete surface spanning the prescribed discrete boundary is given as a collection of oriented triangles satisfying equation and chosen among a pre - specified collection of triangles , a somewhat natural direction at first glance seems to be solving a _quadratic program_. like in section [ sec : mesh ] , let us indeed denote as the collection of binary variables associated to the `` dictionary '' of triangles and define * the common edge to two adjacent triangles and ; * the corresponding dihedral angle ; * the angle bisecting unit normal ; * the total area of both triangles .then a continuous energy of the form can be discretized as with \tilde\varphi(t_i , n_i)&\mbox{if }\\[1 mm ] 0&\mbox{otherwise}\end{array}\right. ] is very hard to solve : terms of the form with are indefinite , so ( unless has a dominant diagonal ) the objective function is a non - convex one . moreover ,a solution to the relaxed problem would not be of practical use : already for the 2d - problem of optimizing curvature energies over curves in the plane , the respective quadratic program favors fractional solutions .the relaxation would therefore not be useful for solving the integer program .however , in this case amini et al . showed that one can solve a linear program instead .this inspired us for the major contribution of this work : to cast the problem as an integer linear program .the key idea of the proposed integer linear program is to consider additional indicator vectors . aside from the indicatorvariables for basic triangles , one now also considers entries corresponding to _ pairs _ of adjacent triangles . such a pair is called _ quadrangle _ in the following. 
we will denote the augmented vector where run over all indices of adjacent triangles .the cost function can be easily written in a linear form with this augmented vector , i.e. it reads with ( see the notations of the previous section ) the major problem to overcome is how to set up a system of constraints that guarantees consistency of the augmented vector : the indicator variable for the pair of triangles and should be if and only if both the variables and are .otherwise it should be .in addition , one again wants to optimize only over indicator vectors that correspond to a triangular mesh . to encode this in a linear constraint system ,a couple of changes are necessary .first of all , we will now have a constraint for each pair of triangle and adjacent edge .secondly , edges are no longer oriented .still , the set of pre - specified indices implies that the orientation of the border is fixed - we still require that for each edge of the boundary an adjacent ( oriented ) triangle is fixed to constrain the conormal information . to encode the constraint systemwe introduce a modified notion of incidence .we are no longer interested in incidence of triangles and edges .instead we now consider the incidence of both triangles and quadrangles to pairs of triangles and ( adjacent ) edges . for convenience, we define that triangles are positive incident to a pair of edge and triangle , whereas all quadrangles are negative incident .we propose an incidence matrix where lines correspond to pairs ( triangle , edge ) and columns to either triangles or quadrangles .the entries of this incidence matrix are either the incidence of a pair ( triangle , edge ) with a triangle , defined as or the incidence of a pair ( triangle , edge ) with a quadrangle , defined as the columns of this incidence matrix are of two types : either with only 0 s and exactly three ( a column corresponding to a triangle , whose three edges are found at lines , , ) , or with only 0 s and exactly two s ( a column corresponding to a quadrangle that matches with lines and ) . again, both the conormal constraints and the boundary edges can be imposed by imposing additional triangles indexed by a collection of indices .the general constraint has the form where the right - hand side depends whether the edge is shared by two triangles of the surface ( and even several quadrangles in case of self - intersection ) , or belongs to the new boundary indicated by the additional triangles .if is an inner edge , then the sum must be zero due to our definition of , otherwise there is an adjacent triangle , but no adjacent quadrangle , so the right - hand side should be : to sum up , we get the following integer linear program : where is the total number of entries in , namely all triangles plus all pairs of adjacent triangles .it is worth noticing that such formulation allows triangle surfaces with self - intersection .solving integer linear programs is an np - complete problem , see e.g. .this implies that , to the noticeable exception of a few particular problems , no efficient solutions are known .as a consequence one often resorts to solving the corresponding linear programming ( lp ) relaxation , i.e. one drops the integrality constraints . 
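to fix ideas, here is a compact sketch of how the augmented program above could be assembled and handed to a generic mixed-integer solver (scipy's milp interface to highs). the coefficients below encode one reading of the consistency constraints described in the text, namely that for every pair (triangle, edge) the triangle variable enters with +1, every quadrangle using that triangle across that edge enters with -1, and the right-hand side is 1 exactly for the imposed boundary pairs; the paper's exact incidence formulas are not reproduced here, and the variable names are our own.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    def solve_surface_program(n_tri, quadrangles, tri_cost, quad_cost,
                              boundary_pairs, relax=False):
        """quadrangles   : list of (i, j, e), adjacent triangles i, j sharing edge e
           tri_cost      : length n_tri cost vector, quad_cost : per-pair curvature costs
           boundary_pairs: set of (t, e) pairs for the imposed boundary triangles
           relax=True drops integrality, i.e. solves the lp relaxation."""
        boundary_pairs = set(boundary_pairs)
        n_quad = len(quadrangles)
        n_var = n_tri + n_quad
        w = np.concatenate([np.asarray(tri_cost, dtype=float),
                            np.asarray(quad_cost, dtype=float)])
        # one equality row per pair (triangle, edge) that actually occurs
        keys = {(i, e) for (i, j, e) in quadrangles}
        keys |= {(j, e) for (i, j, e) in quadrangles}
        keys |= boundary_pairs
        keys = list(keys)
        row = {k: r for r, k in enumerate(keys)}
        A = np.zeros((len(keys), n_var))
        for qidx, (i, j, e) in enumerate(quadrangles):
            A[row[(i, e)], i] = 1.0                  # triangle: positive incidence
            A[row[(i, e)], n_tri + qidx] = -1.0      # quadrangle: negative incidence
            A[row[(j, e)], j] = 1.0
            A[row[(j, e)], n_tri + qidx] = -1.0
        for (t, e) in boundary_pairs:
            A[row[(t, e)], t] = 1.0                  # imposed boundary triangles
        b = np.array([1.0 if k in boundary_pairs else 0.0 for k in keys])
        integrality = np.zeros(n_var) if relax else np.ones(n_var)
        return milp(c=w, constraints=LinearConstraint(A, b, b),
                    bounds=Bounds(0.0, 1.0), integrality=integrality)

setting relax=True drops the integrality constraints and yields the lp relaxation considered next.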
in our casethis means to solve the problem : or , equivalently , by suitably augmenting and in order to incorporate the second constraint , : there are various algorithms for solving this problem , the most classical being the simplex algorithm and several interior point algorithms .let us now discuss the conditions under which these relaxed solutions are also solutions of the original integer linear program .recalling the basics of lp - relaxation , the set of admissible solutions is a polyhedron , i.e. a finite intersection of half - spaces in .a classical result states that minimizing solutions for the linear objective functions can be seeked among the extremal points of only , i.e. its vertices . denoting the integral envelope of , that is the convex envelope of , another classical result states that has integral vertices only ( i.e. vertices with integral coordinates ) if and only if since , according to theorem 19.3 in , a _ sufficient _ condition for having is the property of being totally unimodular , i.e. any square submatrix has determinant either , or . under this condition ,any extremal point of that is a solution of }\langle w,\hat x\rangle\ ] ] has integral coordinates therefore is a solution of the original integer linear program theorem 19.3 in mentions an interesting characterization of total unimodularity due to paul camion : a matrix is totally unimodular if , and only if , the sum of the entries of every eulerian square submatrix ( i.e. with even rows and columns ) is divisible by four . unfortunately , we can prove that , as soon as the triangle space is rich enough , the incidence matrix does not satisfy camion s criterion , therefore is not totally unimodular , and neither are the matrices for richer triangles spaces . as a consequence , there are choices of the triangle space for which the polyhedron may have not only integral vertices , or more precisely one can not guarantee this property thanks to total unimodularity .this is summarized in the following theorem . the incidence matrix associated with any triangle space where each triangle has a large enough number of adjacent neighbors is not totally unimodular .we show in figure [ fig : counterex ] a configuration and , in table [ tab ] , an associated square submatrix of the incidence matrix .the sum of entries over each line and the sum over each column are even , though the total sum of the matrix entries is not divisible by four . by a result of camion ,the incidence matrix is not totally unimodular which yields the conclusion according to [thm 19.3 ] .clearly , any triangle space for which this configuration can occur is also associated to an incidence matrix that is not totally unimodular .it is worth noticing that the previous theorem does not imply that the extremal points of the polyhedron are necessarily not all integral .it only states that this can not be guaranteed as usual by the criterion of total unimodularity .we will discuss in the next section what additional informations about integrality can be obtained from a few experiments that we have done using classical solvers for addressing the relaxed linear problem ..a square incidence matrix associated with the configuration in figure [ fig : counterex ] .it is eulerian , i.e. the sum along each line and the sum along each column are even , but the total sum is not divisible by four . according to camion , the matrix is not totally unimodular . 
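camion's characterization quoted above can at least be checked mechanically on very small matrices. the brute-force sketch below (exponential in the matrix size, so only usable on toy examples far smaller than the incidence matrices arising from a realistic triangle dictionary) looks for an eulerian square submatrix whose entry sum is not divisible by four; if one exists it is returned, otherwise None.

    import numpy as np
    from itertools import combinations

    def camion_violation(M):
        """search for an eulerian square submatrix of M (every row and column
        sum even) whose total entry sum is not divisible by four; such a
        submatrix certifies that M is not totally unimodular."""
        M = np.asarray(M)
        m, n = M.shape
        for k in range(1, min(m, n) + 1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    S = M[np.ix_(rows, cols)]
                    if (S.sum(axis=0) % 2 == 0).all() and \
                       (S.sum(axis=1) % 2 == 0).all() and S.sum() % 4 != 0:
                        return rows, cols
        return None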
our results above indicate that, necessarily, integer linear solvers should be used. these commonly start by solving the linear programming relaxation, then derive further valid inequalities (called _cuts_) and/or apply a branch-and-bound scheme. due to the small number of fractional values that we have observed in our experiments, it is quite likely that the derivation of only a few cuts would give integral solutions. however, we did not test this so far because of the running times of this approach: in cases where we get fractional solutions, the dual simplex method often needs as long as two weeks and up to gb of memory! from experience with other linear programming problems we consider it likely that the interior point methods implemented in commercial solvers would be much faster here (we expect less than a day). at the same time, we expect the memory consumption to be considerably higher, so the method would most probably be unusable in practice. we strongly believe that a specific integer linear solver should be developed rather than using general implementations. it is well known that, for a few problems like the knapsack problem [chapter 24.6], their specific structure gives rise to ad-hoc efficient approaches. recalling that our incidence matrix is very sparse and well structured (the nonzero entries of each column are either exactly two or exactly three), we strongly believe that an efficient integer solver can be developed and that our approach can become amenable to higher-resolution results in the near future. we have shown that the minimization under boundary constraints of mean curvature based energies over surfaces, and in particular the willmore energy, can be cast as an integer linear program. unfortunately, this integer program is not equivalent to its relaxation, so the classical lp algorithms offer no guarantee that the integer optimal solution will be found. this implies that pure integer linear algorithms must be used, which are in general much more involved. we believe however that the particular structure of the problem paves the way to a dedicated algorithm that would provide high-resolution _global_ minimizers of the willmore boundary problem and generalizations. this is the purpose of future research. y. boykov and v. kolmogorov. an experimental comparison of min-cut/max-flow algorithms for energy minimization in computer vision. in m. figueiredo, j. zerubia, and a. k. jain, editors, _int. workshop on energy minimization methods in computer vision and pattern recognition (emmcvpr)_, volume 2134 of _lncs_, pages 359-374. springer verlag, 2001.
|
we consider the problem of finding (possibly non-connected) discrete surfaces spanning a finite set of discrete boundary curves in three-dimensional space and minimizing (globally) a discrete energy involving mean curvature. although we consider a fairly general class of energies, our main focus is on the willmore energy, i.e. the total squared mean curvature. most works in the literature have been devoted to the approximation of a surface evolving by the willmore flow and, in particular, to the approximation of the so-called willmore surfaces, i.e., the critical points of the willmore energy. our purpose is to address the delicate task of approximating _global_ minimizers of the energy under boundary constraints. the main contribution of this work is to translate the nonlinear boundary value problem into an integer linear program, using a natural formulation involving pairs of elementary triangles chosen in a pre-specified dictionary and allowing self-intersection. the reason for such a strategy is the well-known existence of algorithms that can compute _global minimizers_ of a large class of linear optimization problems, however at a significant computational and memory cost. the case of integer linear programming is particularly delicate, and the usual strategy consists in relaxing the integrality constraint to the interval [0,1], which is easier to handle. our work focuses essentially on the connection between the integer linear program and its relaxation. we prove that: * one can not guarantee the total unimodularity of the constraint matrix, which is a sufficient condition for the global solution of the relaxed linear program to be always integral, and therefore to be a solution of the integer program as well; * furthermore, there is actually experimental evidence that, in some cases, solving the relaxed problem yields a fractional solution. these facts prove that the problem can not be tackled with classical linear programming solvers, but only with pure integer linear solvers. nevertheless, due to the very specific structure of the constraint matrix here, we strongly believe that it should be possible in the future to design ad-hoc integer solvers that yield high-definition approximations to solutions of several boundary value problems involving mean curvature, in particular the willmore boundary value problem.
|
hyperchaotic attractors are characterized by at least two positive les and are considered to be much more complex in terms of topological structure and dynamics compared to low dimensional chaotic attractors . in the last one decade or so , hyperchaotic systems have attracted increasing attention from various scientific and engineering communities due to a large number of practical applications .these include secure communication and cryptography , synchronistion studies using electro - optic devices and as a model for chemical reaction chains . in all these applications, the complexity of the underlying attractor has a major role to play .though the concept of hyperchaos was introduced many years ago by rssler , a systematic understanding of the topological and fractal structure of the attractors generated from the hyperchaotic systems is lacking till date .studies in this direction have been very few except a series of papers by kapitaniak et al . in which the authors have discussed many aspects of the structure and transition to hyperchaos using a system of unidirectionally coupled oscillators as a model .hyperchaotic attractors are , in general , higher dimensional with the fractal dimension and trajectories diverging in at least two directions as the system evolves in time .hence the detection of hyperchaos is generally done using the les with the transition to hyperchaos marked by the crossing of the second largest le above zero .one of our aims in this paper is to try and get a more quantitative information regarding the structure of the hyperchaotic attractor in terms of the spectra of dimensions and use this information to detect the transition to hyperchaos .recently , we have done a detailed dimensional analysis of several standard hyperchaotic models and have established some results which are common to all these systems . for example , we have shown that the topological structure of the underlying attractor changes suddenly as the system makes a transition from chaos to hyperchaos and the attractor becomes a network of local clusters combined together to form the global structure .further , the attractor develops fractal structure at two seperate range of scales , with different fractal dimensions corresponding to intra cluster scaling and inter - cluster scaling . herewe first show that the hyperchaotic attractor is a multifractal corresponding to the two length scales mentioned above with distinct spectra .in other words , the overall fractal structure of a hyperchaotic attractor can be characterized by two superposed spectrums .it is well known that , unlike ideal fractals , real world systems and limited point sets exhibit self similarity only over a finite range of scales .thus in the present case , statistical self similarity and hence the multifractal behavior changes between two finite range of scales .multifractality is commonly related to a probability measure that can have different fractal dimensions on different parts of the support of this measure .many authors have discussed the standard multifractal approach in detail and we briefly summarise the main results below for a point set ( such as , an attractor generated by a chaotic system ) .let the attractor be partitioned into dimensional cubes of side , with being the number of cubes required to cover the attractor . 
if is the probability that the trajectory passes through the cube , then , where is the number of points in the cube and the total number of points on the attractor .we now assume that satisfies a scaling relation where is the scaling index for the cube .we now ask how many cubes have the same scaling index or have scaling index within and ( if is assumed to vary continuously ) .let this number , say , scales with as where is a characteristic exponent. obviously , behaves as a dimension and can be interpreted as the fractal dimension for the set of points with scaling index .this also implies that the attractor can be characterized by a spectrum of dimensions normally denoted by ( where can , in principle , vary from to ) , that can be related to through a legendre transformation . the plot of as a function of gives a one hump curve with maximum corresponding to , the simple box counting dimension of the attractor .note that , in the above arguement , the scaling exponent measures how fast the number of points within a box decreases as is reduced .it therefore measures _ the strength of a singularity _ for . for a realistic attractor , with limited number of data points ,the limit is not accessible and hence one chooses a suitable scaling region for to compute and .correlation dimension computed using modified box counting scheme for three different hyperchaotic time series .the top panel shows the scaling region for computing for ( a ) the chen hyperchaotic flow ( b ) the m - g system and ( c ) the ikeda system .each curve is for an embedding dimension that varies from 1 to 10 .the bottom panel shows the variation of with for the three cases . ]this is where a hyperchaotic attractor becomes different from an ordinary chaotic attractor , as per our numerical results .we find that , to characterize the multifractal structure of a hyperchaotic attractor , two seperate scaling regions are to be considered corresponding to two length scales , say , and .the former represents the covering of trajectory points within clusters by boxes at very small length scales and represents how fast the points in these boxes decrease with .similarly , the latter represents the covering of the clusters within the whole attractor and the corresponding represents how fast the cluster of points decrease as is reduced . in both cases ,the scaling is different at different parts of the attractor , leading to multifractality .we present the numerical results in the next section .it should be noted that such _multiscales _ exhibited by multifractals have recently become an interesting area of research and have been discussed in various contexts .for example , the importance of multiscale multifractal analysis ( mma ) has been demonstrated in the study of human heart rate variability time series , where the multifractal properties of the measured signal depends on the time scale of fluctuations or the frequency band .also , multiscale multifractal intermittent turbulence in space plasmas has been investigated in the time series of velocities of solar wind plasma .but the most interesting example of a real world system that shows multifractal structure at distinct length scales is the distribution of galaxies in the observable universe , which we discuss in detail in v. 
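a compact box-counting sketch of this formalism is given below (our own implementation, not the modified box-counting scheme used later in the paper, and with the range of box sizes, i.e. the scaling region, supplied by the user). applying such a fit over two different ranges of box sizes is precisely how the two scaling regions discussed in the following sections give rise to two superposed spectra.

    import numpy as np

    def dq_spectrum(points, eps_list, q_list):
        """box-counting estimate of the generalized dimensions D_q and of the
        f(alpha) curve for a point set (rows of `points`).  q = 1 would need
        the usual information-dimension limit and is assumed to be absent
        from q_list."""
        q = np.asarray(q_list, dtype=float)
        log_eps = np.log(eps_list)
        log_Z = np.empty((q.size, len(eps_list)))
        for k, eps in enumerate(eps_list):
            _, counts = np.unique(np.floor(points / eps), axis=0,
                                  return_counts=True)
            p = counts / counts.sum()                 # box probabilities p_i
            log_Z[:, k] = np.log([(p ** qq).sum() for qq in q])
        # tau(q): slope of log sum_i p_i**q versus log eps over the chosen scaling region
        tau = np.array([np.polyfit(log_eps, row, 1)[0] for row in log_Z])
        D = tau / (q - 1.0)                           # generalized dimensions D_q
        alpha = np.gradient(tau, q)                   # alpha = d tau / d q
        f_alpha = q * alpha - tau                     # legendre transform
        return D, alpha, f_alpha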
top panel shows the values ( points ) and its best fit curve ( continuous line ) of the m - g attractor for ( in the chaotic phase ) , calculated from a time series consisting of data points with .the lower panel shows the spectrum computed from the best fit curve . ]the specific information regarding the structure of the hyperchaotic attractor provides the possibility of generating hyperchaos by coupling two chaotic attractors , a result already shown in the literature . herewe present a general scheme for this to get both hyperchaos and high dimensional chaos by varying a control parameter .finally , we generate the dual multifractal structure both synthetically and as models of real world distribution .our paper is organized as follows : in the next section , we discuss the details of numerical computations of the multifractal spectrum to show how the structure of a hyperchaotic attractor varies from that of an ordinary chaotic attractor .this result is utilized for the generation of hyperchaos by coupling two chaotic attractors , whose details are given in iii . in iv and v , we present two examples to show that the dual multifractal structure is possible in structures other than hyperchaos . while the first one is a cantor set generated using a specific scheme , the second one is an example from the real world which confirms that such a structure really exists in nature .the paper is concluded in we now present our main results by computing the generalized dimensions and the spectrum of the hyperchaotic attractor from time series .we use the time series from three standard hyperchaotic systems for computation , namely , the chen hyperchaotic flow , the mackey - glass ( m - g ) time delayed system and the ikeda time delayed system . for the hyperchaotic flow ,we fix the parameters as studied in detail in to generate the hyperchaotic time series . for m - g and ikeda systems, we use the time delay as the control parameter with the other parameters fixed as for m - g and for ikeda respectively .we have studied the transition to hyperchaos in these two time delayed systems in detail and here we choose for m - g and for ikeda for generating the hyperchaotic time series . before going into the computation of the multifractal spectrum, we discuss very briefly our results on obtained using the modified box counting scheme , where the scaling region for computing is fixed algorithmically . in fig .[ f.1 ] ( top panel ) , we show the scaling region computed by the scheme for the above three systems in the hyperchaotic phase . here , the weighted box counting sum is plotted against box size .each curve corresponds to an embedding dimension , which varies from to .two scaling regions ( denoted i and ii ) are evident in all cases , for .the values computed from the two scaling regions as a function of are also shown in fig .[ f.1 ] ( bottom panel ) .it has been shown numerically that the dual scaling regions are a consequence of the attractor breaking up into a large number of clusters at the transition to hyperchaos .thus , the region i represents scaling within the clusters with high value of ( say , ) and region ii corresponds to inter - cluster scaling with comparatively lower value .we now show that the hyperchaotic attractor not only has two values , but the entire spectrum of and the associated spectrum corresponding to the two different range of scales . 
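for time series data the same quantities are estimated after a delay embedding; a plain grassberger-procaccia style sketch of the correlation sum and of the resulting correlation dimension estimate is shown below. the scheme used in the paper additionally selects the scaling region (and, for hyperchaotic data, the two regions) algorithmically, and the system parameters quoted in the original text are not reproduced here.

    import numpy as np

    def delay_embed(x, m, delay=1):
        """delay-coordinate embedding of a scalar time series into R^m."""
        n = len(x) - (m - 1) * delay
        return np.column_stack([x[i * delay: i * delay + n] for i in range(m)])

    def correlation_dimension(x, m, r_values, delay=1, n_pairs=200_000, seed=0):
        """embed the series in dimension m, estimate the correlation sum C(r)
        from a random sample of point pairs and fit log C(r) against log r.
        r_values must lie in the range where C(r) > 0; choosing that range
        automatically is what the scheme described in the text adds."""
        rng = np.random.default_rng(seed)
        Y = delay_embed(np.asarray(x, dtype=float), m, delay)
        i = rng.integers(0, len(Y), size=n_pairs)
        j = rng.integers(0, len(Y), size=n_pairs)
        d = np.linalg.norm(Y[i] - Y[j], axis=1)
        d = d[i != j]
        C = np.array([(d < r).mean() for r in r_values])
        ok = C > 0
        slope = np.polyfit(np.log(np.asarray(r_values, dtype=float)[ok]),
                           np.log(C[ok]), 1)[0]
        return slope, C

in the hyperchaotic phase the same estimate, applied over the two ranges of length scales separately, yields the two distinct values already noted.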
in other words ,the nature of statistical self similarity changes ( as the range of scales varies ) from intra cluster to inter - cluster .top panel shows the values ( points ) and the best fit curve ( line ) for the two scaling regions of the chen hyperchaotic flow with .the lower panel shows the corresponding spectra computed from the best fit curve . ]same as the previous figure for m - g and ikeda systems . ] to compute the and the spectrum from the time series , we use the automated algorithmic scheme proposed by us recently .a brief discussion of the scheme is given below which is based on the grassberger - procaccia ( g - p ) algorithm .as the first step , the spectrum of generalised dimensions is computed from the time series using the equation where are the generalised correlation sum .this is done by choosing the scaling region algorithmically as discussed earlier .we then use an entirely different algorithmic approach for the computation of the smooth profile of the spectrum .the function is a single valued function between and and also has to satisfy several other conditions , such as , it has a single maximum and .a simple function that can satisfy all the necessary conditions is where , , , and are a set of parameters characterizing a particular curve .it can be shown that only four of these parameters are independent and any general curve can be fixed by four independent parameters . moreover , by imposing the conditions on the curve , it can also be shown that .the scheme first takes and as input parameters from the computed values and choosing an initial value for in the range ] . from this , a smooth versus curve can be obtained by inverting using the legendre transformation equations , which is then fitted to the spectrum derived from the time series .the parameter values are changed continuously until the curve matches with the spectrum from the time series and the statistically best fit curve is chosen . from this , the final curve can be evaluated . to illustrate the scheme , we first compute the values and the associated spectrum for the m - g attractor in the chaotic phase for with embedding dimension .the results are shown in fig .the values ( points)and the best fit curve ( continuous line ) are shown in the upper panel while the associated spectrum computed from the best fit curve is shown in the lower panel .we now apply the scheme to the hyperchaotic attractors and compute the and the spectrum for the two scaling regions seperately , by fixing and for scaling region i ( ii ). the embedding dimension used is and the number of data points in the time series is in all cases .[ f.3 ] shows the results of computations of the chen hyperchaotic flow while fig .[ f.4 ] shows that of the two time delayed systems in the hyperchaotic phase . 
in both cases ,the curves corresponding to the two scaling regions are given in the upper panel and the associated spectra are given in the lower panel .our results indicate that the fractal structure of a hyperchaotic attractor is much more complex compared to that of an ordinary chaotic attractor with a superposition of multifractals .the result that a hyperchaotic attractor is a superposition of two multifractals provides us with a possibility for constructing a hyperchaotic system by coupling two chaotic systems .there are already a few papers in the literature where the authors propose the construction of a hyperchaotic system either by coupling two chaotic systems or by introducing a time delayed transformation .it has also been suggested that in some cases , even the coupling is not required ; just an amalgam of two chaotic attractors can sometimes become hyperchaotic , which strongly supports our results .this method is applied in the synchronization studies by multiplexing two chaotic signals . herewe consider the prospect of generating hyperchaos by coupling any two regular chaotic systems and study under what conditions and coupling schemes one can achieve this . the coupling of two chaotic systems is more generally employed in the synchronization studies rather than hyperchaos generation where the coupling strength has to be sufficiently high . herewe consider two individual chaotic systems evolving at two different time scales coupled together .the general scheme that we propose is given by here with representing the intrinsic dynamics of the two systems and denoting the coupling function .the parameters and represent the coupling strengths between the two systems and the parameters and are the two time scale parameters indicating that the two systems are evolving at two different time scales . without loss of generality , we can take and . the two individual systems and can be any two low dimensional chaotic systems , identical or different .we have done a detailed numerical analysis taking some standard low dimensional chaotic systems , such as , lorenz , rssler and ueda as individual systems with and identical as well as different .we use the standard parameters for the individual systems in the chaotic regime .we have tried both diffusive coupling and the linear coupling , the two commonly used coupling schemes . in the former case ,the feedback terms used for coupling are and while in the latter , these are and . in both cases , the 2-way or mutual coupling as well as the 1-way or drive - response couplinghave been tested .the parameter should be sufficiently small in order to avoid the synchronization of the dynamics between the two systems and the ampllitude death .we vary the value of in the range to . the time step of integration should be sufficiently small to capture the small scale properties of the hyperchaotic attractor and hence we fix the value of in our numerical simulations .our control parameter is .we have found that hyperchaos can be generated in all the different coupling schemes mentioned above for a range of values of depending on the individual systems , nature of coupling and strength of coupling . to show the above results explicitly , we consider two specific cases . in the first case ,we choose the diffusive coupling of two lorenz systems as in drive - response mode given by with the parameters in the chaotic regime as . 
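a minimal integration sketch of such a drive-response, diffusively coupled lorenz pair is given below. the standard chaotic lorenz parameters, the choice of coupling through the first variable, and the illustrative values of the coupling strength and of the time-scale ratio are our own assumptions, since the corresponding numbers in the text did not survive extraction.

    import numpy as np

    def coupled_lorenz(eps=0.01, tau=0.1, dt=0.001, n_steps=500_000):
        """drive-response, diffusively coupled lorenz pair: the second system
        runs on the slower time scale tau and receives a diffusive feedback
        eps*(x1 - x2) in its first equation.  parameters, coupled variable,
        eps and tau are illustrative assumptions, not the paper's values."""
        sigma, r, b = 10.0, 28.0, 8.0 / 3.0
        u = np.array([1.0, 0.0, 0.0, -1.0, 0.5, 0.5])   # (x1, y1, z1, x2, y2, z2)

        def rhs(u):
            x1, y1, z1, x2, y2, z2 = u
            return np.array([
                sigma * (y1 - x1),
                r * x1 - y1 - x1 * z1,
                x1 * y1 - b * z1,
                tau * (sigma * (y2 - x2) + eps * (x1 - x2)),   # slow response system
                tau * (r * x2 - y2 - x2 * z2),
                tau * (x2 * y2 - b * z2),
            ])

        traj = np.empty((n_steps, 6))
        for k in range(n_steps):                 # fourth-order runge-kutta step
            k1 = rhs(u)
            k2 = rhs(u + 0.5 * dt * k1)
            k3 = rhs(u + 0.5 * dt * k2)
            k4 = rhs(u + dt * k3)
            u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[k] = u
        return traj

after discarding an initial transient, the trajectory of the six-dimensional system can be fed to the dimension estimators sketched earlier to locate the chaos-hyperchaos transition as the time-scale parameter is varied.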
with ,the control parameter is varied and for each , trajectory points are used for computing the scaling region using the modified box counting code after discarding the first points as transients . since we already take the time step of integration to be very small , if , the evolution of the second system turns out to be very slow and the effect of coupling will be a very small perturbation on the first system . in effect , the resulting attractor is found to be chaotic with close to that of a single system . on the other hand , when , the second system evolves very fast and often swamps out the dynamics arising out of the evolution of the first system .the result is again a chaotic attractor , but with dimension higher than the individual systems , resulting in high dimensional chaos . in between , for a range of values of , the resulting attractoris found to display hyperchaotic behavior . in fig .[ f.5 ] , we show the results for two values of , namely , and , the former being hyperchaotic and the latter chaotic as reflected by the change in the scaling regions for the two cases .the coupled lorenz model with two different time scales as we have considered , but with two way coupling , has been used earlier as ocean - atmospheric model in climate studies .the model represents the interactive dynamics of a fast changing atmosphere and slow fluctuating ocean .we have also analysed this model numerically with for a range of values of .our results indicate that the underlying dynamics can be hyperchaotic or high dimensional chaotic depending on the value of the time scale parameter .the second case we show is a mutual diffusive coupling between the standard lorenz and rssler chaotic attractors given by with and . hereagain we find that the resulting attractor is hyperchaotic for a range of intermediate values of and two typical cases , one hyperchaotic and the other high dimensional chaotic , are shown in fig .thus we find that any two regular chaotic attractors can be coupled to generate hyperchaos as well as high dimensional chaos by varying the value of .top panel shows the attractors obtained by drive - response coupling two lorenz systems , with , as in eq .( 6 ) for ( a ) and ( b ) .the corresponding scaling regions for the weighted box counting sum from time series are shown in the lower panels ( c ) and ( d ) respectively .the change in the scaling region is obvious for the two values .while the left one is a four wing hyperchaotic attractor , the one on the right is chaotic . ]top panel is a hyperchaotic attractor arising out of the mutual diffusive coupling between the lorenz and rssler attractor for and the bottom panel is a chaotic attractor for the same coupling with . in both cases , value of . ]here we show that it is possible to generate synthetically a set of points in the unit interval ] is divided into two parts with probabilities and ( where ) and assigned to two fractional length scales and respectively in the first step .this process is repeated to each of the lengths and in the second step . by continuing this process times ( with ) , one gets the 2 - scale cantor set which is a multifractal . construction of 2 - level 2 - scale cantor set .a unit interval is divided into two fractional lengths and with probabilities of measure and respectively .this process is repeated for steps . for the next steps , a different set of parameters resulting set is the 2- level 2 - scale cantor set which is a superposition of two multifractals for an intermediate range of values . 
]scaling region of the 2 - level 2 - scale cantor set constructed in the previous figure .the solid triangles marked i represent the scaling region for large corresponding to the ordinary 2 - scale cantor set with parameters in level i while the solid squares marked ii correspond to small for the level ii cantor set .the solid circles having dual slope represent the scaling region of the resulting set for which is a superposition of the two cantor sets for levels i and ii . ]we now modify this process slightly .the above step is repeated times where is a finite number which is the _ level i _ of construction . in the _ level ii _ , we change the probabilities and to new values and and the fractional length scales from to and continue the procedure for the next steps .the resulting set is the 2 - level 2 - scale cantor set , whose construction is shown in fig .[ f.7 ] . to implement this numerically, we use the set of parameter values : , so that the fractal dimensions of the sets generated by the two levels are widely different .we construct the set using different values of and fixing to be a large value ( ) .we find that if is very small ( say , ) , the scaling region for the weighted box counting sum as a function of has a single slope corresponding to the fractal dimension of the cantor set for level ii and for large ( ) , the cantor set corresponding to level i prevails .however , for an intermediate range of values , the scaling region displays two slopes with a smooth transition from one to the other implying that the resulting set is a superposition of two cantor sets involving both levels i and ii .the above results are shown in fig .[ f.8 ] where the intermediate value of used is .our numerical experiment shows that the 2-level 2-scale cantor set offers a typical geometric construction of a fractal set that displays a multifractal spectrum with two superimposed components .finally , we present an example for the presence of hierarchical multifractal structure from the real world , namely , the distribution of galaxies in the universe . it should be noted that the structure of galaxy distribution is believed to be the result of tiny random density fluctuations occured at the initial stages of the evolution of the universe and hence has nothing to do with hyperchaos or hyperchaotic evolution .our aim is just to show an analogy that such a structure actually exists in the universe . a typical galaxy distribution data generated from the dark matter n - body simulations . ]complete scaling region obtained by applying our algorithmic scheme to compute from the simulated data of galaxy distribution shown in the previous figure .two realisations of the simulated data are used for the computation as shown . ]top panel shows the best fit curves for the two scaling regions of the galaxy simulation data .the lower panel shows the associated spectra for the two scaling regions . ]the standard model of cosmology rests on the assumption that the universe , at very large scales , is homogeneous and isotropic which is known as _ the cosmological principle_. butmost of the galaxy surveys show definitive evidence for fractal structure even with the largest scales probed so far , with the presence of structures at different scales , such as , clusters and super clusters . 
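The 2-level 2-scale construction described above can be explored numerically by sampling points from the limiting measure: each point follows a random sequence of left/right refinements, and the parameter set is switched from level I to level II after the first n1 steps. The length scales and probabilities used below are placeholder values chosen only for illustration, not the ones used in the text.

```python
import numpy as np

def two_level_cantor_points(n_points, n1, n_total,
                            level1=(0.25, 0.35, 0.4),   # (l1, l2, p1) for level I  (placeholders)
                            level2=(0.15, 0.45, 0.7),   # (l1, l2, p1) for level II (placeholders)
                            seed=0):
    """Sample points from a 2-level 2-scale Cantor measure on [0, 1].

    At every refinement step the current interval is split into a left piece of
    fractional length l1 (visited with probability p1) and a right piece of
    fractional length l2 (visited with probability p2 = 1 - p1).  The first n1
    steps use the level-I parameters, the remaining steps the level-II parameters.
    """
    rng = np.random.default_rng(seed)
    pts = np.empty(n_points)
    for i in range(n_points):
        x, width = 0.0, 1.0
        for step in range(n_total):
            l1, l2, p1 = level1 if step < n1 else level2
            if rng.random() < p1:                 # descend into the left piece
                width *= l1
            else:                                 # descend into the right piece
                x += width * (1.0 - l2)
                width *= l2
        pts[i] = x + 0.5 * width                  # midpoint of the final interval
    return pts

points = two_level_cantor_points(n_points=50_000, n1=4, n_total=12)
```

Feeding such a sample to the same weighted box-counting routine should qualitatively reproduce the two-slope scaling region of fig. [ f.8 ].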
herewe analyse an ensemble of galaxy data samples generated from dark matter n - body simulations as a model for galaxy distribution .each data sample consists of data points in the form of a 3 column data contained in a cube of mpc . a typical data is shown in fig . [ f.9 ] .we analyse these data samples using the algorithmic schemes discussed above to compute and the spectrum , fixing the embedding dimension .the scaling region determined by the scheme for computing is shown in fig .[ f.10 ] for two data samples .two scaling regions are evident .the lower part ( scaling region i ) is dominated by scaling within the galaxy clusters while the upper part ( region ii ) corresponds to inter - cluster scaling .we compute the dimensions and the associated spectrum corresponding to the two scaling regionsseperately as discussed in ii and the results are shown in fig . [ f.11 ] .it is clear that , within the scales of analysis , the galaxy distribution has a dual multifractal structure , analogous to that of a hyperchaotic attractor .multifractality exhibited in multiscales has become an important tool for the analysis of complex systems . in this paper , we show numerically that a hyperchaotic attractor is a dual multifractal with distinct spectra over two different range of scales .this is shown explicitly by computing the spectra for two different class of hyperchaotic attractors .hyperchaos is normally characterized by computing the spectra of les and looking at the transition of the second largest le above zero .though this method works for synthetic systems , it becomes much more difficult when the system is represented as a time series .our numerical results indicate that there is also a structural change for the underlying attractor as the system makes a transition to hyperchaos .this structural change can be more easily identified using the spectrum of dimensions , especially for systems analysed using time series .thus , our results also offer a method to identify transition to hyperchaos through correlation dimension analysis of time series data . as another application, we present a general scheme by coupling two chaotic attractors to get chaos , hyperchaos and high dimensional chaos by varying a time scale parameter . just like the dual positive les, a dual multifractal spectrum also appears to be characteristic of every hyperchaotic attractor as per our numerical results .however , such a structure is not unique to hyperchaotic attractors .we explicitly show an example where such a structure actually exists in the real world and another example where it can be synthetically generated from a cantor set . to our knowledge , the concept of a superposition of multifractal set to characterize the structure of a fractal object is novel and has not been discussed in the literature either in the context of real world fractals or those generated from dynamical systems .the results presented here clearly show that the fractal structure of a hyperchaotic attractor is qualitatively different from that of a chaotic attractor and may serve as a first step towards a better understanding of the highly complex structure of hyperchaotic attractors in high dimensional systems .the authors thank b. pandey of iucaa , pune for doing the dark matter n - body simulations .we thank r. e. amritkar for the idea of generating dual multifractal from a cantor set .kph acknowledges the hospitality and computing facilities in iucaa , pune .
|
in the context of chaotic dynamical systems with exponential divergence of nearby trajectories in phase space , hyperchaos is defined as a state where there is divergence or stretching in at least two directions during the evolution of the system . hence the detection and characterization of a hyperchaotic attractor is usually done using the spectrum of lyapunov exponents ( les ) that measure this rate of divergence along each direction . though hyperchaos arise in different dynamical situations and find several practical applications , a proper understanding of the geometric structure of a hyperchaotic attractor still remains an unsolved problem . in this paper , we present strong numerical evidence to suggest that the geometric structure of a hyperchaotic attractor can be characterized using a multifractal spectrum with two superimposed components . in other words , apart from developing an extra positive le , there is also a structural change as a chaotic attractor makes a transition to the hyperchaotic phase , as the attractor changes from a simple multifractal to a dual multifractal . this result supports the claim by many authors that coupling of two regular chaotic systems can generate hyperchaos . based on our results , we present a general scheme to generate both hyperchaotic as well as high dimensional chaotic attractors by coupling two low dimensional chaotic attractors and tuning a time scale parameter . finally , to show the existence of such structures , we present two examples - one synthetically generated set of points in the unit interval and the other representing a distribution in the real world - both displaying dual multifractal spectrum . * the concept of hyperchaos was introduced by rssler to represent higher complexity compared to chaos , with at least two directions of stretching during the evolution of a chaotic system . though the idea initially started as a theoretical curiosity , the interest in the study of hyperchaotic systems increased in the last one decade or so due to a large number of practical applications , many of these involving the structural complexity of hyperchaotic attractors . in this paper , we undertake a detailed multifractal analysis of different classes of hyperchaotic systems and present numerical evidence that a hyperchaotic attractor is not a simple multifractal , but involves dual spectra . we also illustrate that such structures can be generated synthetically as well as present in the real world . *
|
designs for multi - user communications networks and systems have been extensively used during the past decade to implement low - cost , scalable , and limited - message - passing networks . in doing so, transmitter and receiver pairs with local information determine their transmission strategies in an autonomous manner . to deploy such designs , it is essential to know whether they converge to a ( preferably unique ) equilibrium , and evaluate their performances at the emerging equilibrium / equilibria .strategic non - cooperative game theory provides an appropriate framework for analyzing and designing such environments where users ( i.e. , transmitter - receiver pairs ) are rational and self - interested players that aim to maximize their own utilities by choosing their transmission strategies . the notion of nash equilibrium ( ne ) ,at which no user can attain a higher utility by unilaterally changing its strategy , is frequently used to analyze the equilibrium point of non - cooperative games . to derive the conditions for ne s existence and uniqueness , different approaches such as fixed point theory , contraction mapping and _ variational inequalities ( vi ) _ have been widely applied in both wired and wireless communication networks , including applications to flow and congestion control , network routing , and power control in interference channels .however , there are numerous sources of uncertainty in measured parameter values of communication systems and networks such as joining or leaving new users , delays in the feedback channel , estimation errors and channel variations .therefore , obtaining accurate values of users interactions may not be practical , and considering uncertainty and proposing a robust approach are essential in designing reliable communications systems and networks . to make a ne robust against uncertainties ,two distinct approaches have been proposed in the literature : the bayesian approach where the statistics of uncertain parameters are considered and the utility of each user is probabilistically guaranteed , and the worst - case approach where a deterministic closed region , called the uncertainty region , is considered for the distance between the exact and the estimated values of uncertain parameters , and the utility of each user is guaranteed for any realization of uncertainty within the uncertainty region .both of these approaches have been applied to the power allocation problem in spectrum sharing environments and cognitive radio networks to study the conditions for ne s existence and uniqueness , where the uncertain parameters are interference levels and channel gains . however , to incorporate robustness in communications systems and networks , there exist multiple challenges such as : 1 ) how to implement robustness in a wider class of problems in communication systems ?2 ) how to derive the conditions for existence and uniqueness of the robust ne ( rne ) ?3 ) what is the impact of considering uncertainties on the system s performance at its equilibrium compared to that of the case with no uncertainty ?4 ) how to design a distributed algorithm for reaching the robust equilibrium ? in this paper , we aim to answer the above questions using the worst - case robust optimization . in doing so, we consider a general class of games where the impact of users on each other is an additive function of their actions , which causes couplings between users .we refer to this class of game as the additively coupled games ( acgs ) . 
in the acg , we consider that the users observations of such impacts are uncertain due to variations in system parameters and changes in other users strategies . via the worst - case approach, we assume that uncertain observations by each user are bounded in the uncertainty region , and each user aims to maximize its utility for the worst - case condition of error .we refer to an acg that considers uncertainty as a robust acg ( racg ) , and an acg that does not consider uncertainty as the nominal acg ( nacg ) .to study the conditions for existence and uniqueness of the rne , we apply _ vi _ , and show that with bounded and convex uncertainty , the rne always exists . we also show that the rne is a perturbed solution of _ vi _ , and derive the condition for rne s uniqueness based on the condition for ne s uniqueness . furthermore , we compare the performance of the system at the rne with that at the ne in terms of two measures : 1 ) the difference between the users strategies at the rne and the ne , 2 ) the difference between the social utility at the rne and at the ne .when the rne is unique , we derive the upper bound for the difference between social utilities at the rne and at the ne , and show that the social utility at the rne is always less than that at the ne .however , obtaining these two measures is not straightforward when the ne is not unique . in this case, we demonstrate a condition in which the social utility at a rne is higher than that at the corresponding ne .finally , we apply the proximal response map to propose a distributed algorithm for reaching the rne , and derive the conditions for its convergence .the rest of this paper is organized as follows . in sectionii , we summarize the system model of the nacg . in section iii, we introduce the racg and its rne .section iv covers the existence and uniqueness conditions if the rne . in sectionv , we show that when the utility function is logarithmic , the rne can be obtained via affine _ vi _ ( _ avi _ ) , and its uniqueness condition is simplified . in sectionvi , we propose distributed algorithm for reaching rne . in section vii , we discuss the effects of robustness for the case of multiple nash equilibria , followed by section viii , where we provide simulation results to illustrate our analytical developments for the power allocation problem and for the jackson networks .finally , conclusions are drawn in section ix .consider a set of communication resources divided into orthogonal dimensions , e.g. , frequency bands , time slots , and routes , which are shared between a set of users denoted by , where each user consists of a transmitter and a receiver .we assume that users do not cooperate with each other , and formulate the resource allocation problem as a strategic non - cooperative game , where is the set of players ( users ) in the game , is the joint strategy space of the game , and is the strategy space of user in which the strategy of each user is limited in each dimension .the sum of strategies of each user over all dimensions is bounded , i.e. , \,\qquad \text{and } \qquad\,\sum_{k=1}^{k}a_{n}^{k}\leq a_{n}^{\texttt{max}}\}\ ] ] where and is the minimum and the maximum transmission strategy of each user in each dimension and is the bound on the sum of strategies of user over all dimensions , e.g. 
, the maximum transmit power of each user .the function is the utility function of user and depends on the chosen strategy vector of all users ] is the vector of the additive impact of other users on user with the following elements where ] , and represents the system s parameters between user and user in dimension , e.g. , the channel gain between user and user in sub - channel ; ] traffic classes , and the input rate and service rate for class are and , respectively . here, is the strategy of player in dimension .the total rate is subject to the minimum rate constraint , i.e. , .a packet of class completing service at node is routed to node with probability , or exit the network with probability . in this scenario , we denote {nm}=r_{nm}^{k} ] .it can be shown that the user s utility for minimizing queueing delay can be expressed by , where ]. the optimization problem can be rewritten by maximizing subject to the minimum data rate constraint for each user .as stated earlier , users may encounter different sources of uncertainty caused by variations in and/or , which cause variations in the utility function of each user , and prevent users from attaining their expected performance . to deal with such issues , we assume that all uncertainties for a given user can be modeled by variations in the user s observation , i.e. , where ] , and ] denotes the linear norm with order . in communication and network systems , the ellipsoid region ,i.e. , , has been commonly used to model uncertainty .we also use the norm with in our robust game , and denote the uncertainty region by so as to indicate that it is an additive function of the actions of other users and system parameters , i.e. , it is not a fix region .the effect of uncertainty in is highlighted by a new variable in the utility function of each user as in such a way that the objective of the worst - case approach is to find the optimal strategy for each user that optimizes its utility under the worst condition of error in the uncertainty region . in this approach , from * a2 * , the optimization problem of each user can be formulated as where is the achieved utility of user in the worst - case approach .the domain of optimization problem ( [ optrobsut ] ) is defined by which is a function of other users strategy .we represent the racg by where .the solution to ( [ utilityrobsut ] ) for user is a pair that satisfies which is the saddle point of ( [ utilityrobsut ] ) .using the above , the equilibrium of the robust game is defined below . * definition 2 . *the rne of racg corresponds to the strategy profile if and only if for any other strategy profile we have we denote the achieved utility of user at the rne by and the social utility at the rne by .now we derive the characteristics of the rne in the racg from the ne in the nacg . for convenience , in what follows , we omit the arguments and in .to analyze the existence of rne , we encounter two problems .first , by considering uncertainty in the utility of each user , the utility may become non - convex , and analyzing rne may become impossible .second , the strategy space of user changes to which is not a fix set and is a function of the other users actions .therefore , convexity of the optimization problem of each user is not a sufficient condition for the existence of rne , meaning that we need to utilize _ vi _ in the sequel .* lemma 1 . 
*1 ) for the uncertainty region in ( [ iii-1 ] ) , the strategy of each user is a convex , bounded , and closed set .2 ) is a concave and continuous differentiable function of for every , where , and where ] , and is defined as the robust game is .see appendix a. * theorem 1 * : for any set of system parameters and strategy space of users , there always exists an rne for . from part 2 ) in lemma 1 , rne is an instance of the generalized nash equilibrium ( gne ) ( see ( 2 ) in ) , and is the rne iff it is a solution to , where and . since is a convex set and is a concave and continuous differentiable function with respect to , the necessary convexity assumptions for the existence of a solution to hold ( theorem 1 in ) , meaning that a rne always exists . since the closed form solution to ( [ optrobsut ] )can not be obtained , the fixed - point algorithm and the contraction mapping can not be applied as in to derive the conditions for rne s uniqueness . to overcome these difficulties ,we show that the rne can be considered as a perturbed ne of the nacg , and that the condition for rne s uniqueness can be derived without a closed form solution to ( [ optrobsut ] ) .* lemma 2 .* is a perturbed bounded version of mapping .see appendix b. from lemma 1 , is a closed and convex set , and form lemma 2 , is a perturbed bounded mapping .therefore , the rne is a perturbed solution to .consequently , rne s uniqueness condition can be obtained from the perturbed ne s uniqueness condition . * theorem 2 .* when is a matrix , for any bounded value of ] ; 2 ) when ( [ conditionproposition1 ] ) holds , the rne of is unique for any bounded ; 3 ) when ( [ conditionproposition1 ] ) holds , the total utility of each user at the rne is always less than that at the ne , and the upper bound on the strategy space of each user is , where where is the minimum eigenvalue of matrix , ] , where is the solution to following optimization problem with respect to ,\ ] ] where ] , denotes the transmission strategy of user at iteration , and is the observation of user at measured by its receiver and sent to the transmitter . in theorem 4below , we obtain the conditions for convergence of the iterative algorithm .* theorem 4 . * as ,the distributed algorithm in table [ distributedalgorith ] converges to the unique rne from any initial strategy , if is a matrix , and .see appendix f. note that the condition holds for the jackson network . for the power control game ,when is a matrix , interference in the system is very low , and consequently , the signal to interference and noise ratio of each usr is high . for this case , , andthe utility function of each user is which also meets .as we see in theorem 4 , the distributed algorithm converges to the unique ne when is a -matrix irrespective of the size of the uncertainty region , so long as the uncertainty region is closed and convex .* remark 4 .* in lemma 1 , we showed that is concave .also the proximal response map is strictly convex .therefore , the lagrange function can be used to derive the solution to ( [ proximalresponsemapn ] ) for each user at each iteration as where is the lagrange multiplier for user that satisfies the solution to ( [ lagrangeofproximal ] ) with respect to is user solves ( [ solution ] ) to obtain in each iteration . 
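To give a concrete feel for the per-user computation behind the distributed scheme, the sketch below performs one proximal best-response step for a power-control-type user: the measured aggregate interference is replaced by its worst case over a box uncertainty set, and the resulting concave subproblem is solved numerically under the per-dimension bounds and the sum constraint. The logarithmic utility, the box uncertainty, the proximal weight tau and all numerical values are assumptions made for this sketch; they are not the exact closed-form updates derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def robust_proximal_response(a_prev, f_hat, eps, p_max,
                             a_min=0.0, a_max=None, sigma=1.0, tau=1.0):
    """One proximal best-response step for a single user.

    a_prev : the user's strategy at the previous iteration (length-K vector)
    f_hat  : measured additive impact of the other users in each dimension
    eps    : bound of the (box) uncertainty region; for a utility decreasing
             in f, the worst case is taken as f_hat + eps
    The logarithmic utility, the box uncertainty and the quadratic proximal
    term with weight tau are modelling assumptions made for this sketch.
    """
    K = len(a_prev)
    f_worst = f_hat + eps

    def neg_objective(a):
        utility = np.sum(np.log(1.0 + a / (sigma + f_worst)))
        proximal = 0.5 * tau * np.sum((a - a_prev) ** 2)
        return -(utility - proximal)

    bounds = [(a_min, a_max)] * K
    cons = ({'type': 'ineq', 'fun': lambda a: p_max - np.sum(a)},)
    res = minimize(neg_objective, a_prev, method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x

# Toy synchronous run with two users and three dimensions (hypothetical gains).
rng = np.random.default_rng(1)
cross = rng.uniform(0.05, 0.2, size=(2, 3))       # cross gains from the other user
a = np.full((2, 3), 0.1)
for _ in range(50):
    for n in range(2):
        f_hat = cross[n] * a[1 - n]               # additive impact observed by user n
        a[n] = robust_proximal_response(a[n], f_hat, eps=0.05, p_max=1.0)
```

Each user would run such a step locally on its own measurement, which is what keeps the algorithm distributed.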
for ( [ utilitylogconvex ] ) , the proximal map s solution is where .for example , the proximal map s solution to the power control game is {a_{nk}^{\text{min}}}^{a_{nk}^{\text{max}}},\ ] ]so far , based on ne s uniqueness condition , we obtained rne s uniqueness condition .now we study the characteristics of rnes when nacg has multiple nes . in general , doing so is not straightforward since the _ vi _ mapping for nacg is non - monotone and non - smooth , which makes it difficult to study the characteristics of the perturbed nes ( rnes ) . to compare the case of multiple nes with that of a single ne , consider the power control game , when is not a matrix ( e.g. , ) .the mapping is non - monotone for both users . as we see in fig .[ 4 ] , there are multiple local optima on the surface of the utility function that correspond to multiple nes for this game . in this case , at the nominal ne , the convergence points for users 1 and 2 are and , respectively , and .when uncertainty is , the rne converges to and , and .this example points out that introducing uncertainty may increase the social utility at the rne when the nacg has multiple nes , which is in line with simulation results in .this is because rne relates to one local optima on the surface of the utility function , which is different from the local optima at the ne . also , considering uncertainty results in users interfering less with each other at the rne compared to the ne .this observation shows the benefit of implementing racg in communication systems which may increase the social utility as compared to that of nacg .but , obtaining the conditions under which the social utility increases is not easy .this is because utility function of each user is a non - linear function with respect to its uncertainty region and the other users uncertainty regions .so , we focus on a special case where the strategy of each user is a decreasing function of the bound of uncertainty region . in proposition 2 below , we obtain the condition for increasing the social utility of the racg as compared to that of the nacg. * proposition 2 .* when is a semi negative definite matrix and for all users , the social utility at the rne is higher than that at the ne .see appendix g. proposition 2 implies that when the reduction in the social utility due to the decrease in user s strategies is less than their increase in the social utility due to the decrease in other usrs strategies , introducing robustness in the game increases the social utility .note that this is one case in which the social utility at rne is higher than the social utility at the corresponding ne , and there may be other cases as well .* remark 5 .* when the solution of affine _ vi _ is a monotone decreasing function of , the _ avi _ mapping is a semi - negative matrix ( see appendix h ) . for the power control game ,proposition 2 is simplified to this means that when all interference channel gains are sufficiently greater than the direct channels gains , introducing robustness increases the social utility .this is an opportunistic phenomenon in robust games when the game encounters multiple nes . in order to benefit from this and increase the social utility , we propose an opportunistic distributed algorithm in table [ tableopportunesticalgorithm ]obviously , checking the conditions of proposition 2 in a distributive manner is not easy .in addition the social utility may increase under other conditions .therefore , all users play the game without considering the uncertainty . 
if for user , i.e. , the nacg has multiple nes , users assume uncertainty in their observations . when their utilities increase , they expand their uncertainty regions .otherwise , they interrupt the algorithm . in this way, all users make an effort to escape from their local optima in a distributed manner by playing the robust game . to implement this algorithm, we assume that users update their transmit strategy at discrete time slots with duration of .the vectors and are the transmit strategy and the observation of each user at the end of the iteration time . besides , users exchange the values of and at the end of each iteration .we use simulations in the two examples in table i to provide an insight into the performance of for different bounds on uncertainty region as compared to that of . in the following simulations , the value of is normalized to the nominal value of , i.e. , , each uncertainty regionis considered as a linear norm with order 2 , i.e. , an ellipsoidal region , and uncertainty for all users is assumed to be the same , denoted by . for the power control game , we begin by studying the effect of uncertainty on its performance in both robust and non - robust approaches in terms of utility variations at their equilibria .to do so , we consider users and , and the amount of uncertainty is assumed to be at the rne .after convergence to the rne and to the ne , the system parameter varies uniformly from to , which causes variations in the utility of each user at the ne and at the rne .variations in the social utility are shown in fig . [ 5 ] .note that the social utility varies considerably at the ne of the nominal game for both values of ] , ] .we have note that inequality ( [ ps2 ] ) is based on concavity of with respect to .therefore , is a concave function of . based on concavity of , the lagrange dual function of ( [ psi2 ] ) for the uncertainty regionis where is the nonnegative lagrange multiplier that satisfies ( [ iii-1 ] ) , i.e. , the solution to ( [ lagrangedualfunction ] ) for can be obtained by the optimality condition of the optimization problem without the constraint , i.e. , , which is equivalent to considering ( [ solution2 ] ) in ( [ lagrangemultiplier ] ) , the uncertain parameter is , where ] , and is using in the utility function , we have comparing ( [ psi2 ] ) with indicates that the difference between and the utility function of the nominal game is the extra term . from * a2 * , is continuous .therefore is continuous with respect to .the derivative of with respect to is where is a vector whose elements are equal to one .the last term in ( [ derivativepsi3 ] ) contains . from * a3 * , the term exists .therefore , is differentiable with respect to .now , the optimization problem of each user can be rewritten as .therefore , the game can be reformulated as .for the racg , we have , and , where is obtained by ( [ derivativepsi3 ] ) for user .let denote variations in the system s of parameters and other users strategies for user where and are variations of and , respectively . when , we have and vice versa . from * a1 * and * a2 * , the mapping is continuous and differentiable around the uncertain parameters .we use the taylor series of the uncertain parameter and write {\hat{\textbf{z}}_n=0}+ \varepsilon_{n}[\nabla_{\hat{\textbf{z}}_n } \widetilde{\mathcal{f}}_n(\textbf{a})]_{\hat{\textbf{z}}_n=0 } + [ \sum_{i=2}^{\infty } \frac{1}{i ! 
} ( \varepsilon_{n})^i(\nabla^i_{\hat{\textbf{z}}_n } \widetilde{\mathcal{f}}_n)]_{\hat{\textbf{z}}_n=0}\ ] ] for , ( [ tailorseriesofmappingf ] ) is equivalent to {(\varepsilon_n=0 ) } \\ \label{tailorseriesmappingf2 } & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\ ! -\frac{\varepsilon_n}{2}[\nabla_{\textbf{a}_n\boldsymbol{f_n}}^2 u_n(\textbf{a}_n , \textbf{f}_n- \varepsilon_n \boldsymbol{\vartheta}_{n } ) \nabla_{\textbf{z}_n } \boldsymbol{f_n } - \varepsilon_n \nabla_{\textbf{f}_n \textbf{f}_n}^2 u_n(\textbf{a}_n , \textbf{f}_n- \varepsilon_n \boldsymbol{\vartheta}_{n } ) \nabla_{\textbf{z}_n } \boldsymbol{\vartheta}_{n } ] _ { ( \varepsilon_n=0 ) } \\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\ !\label{tailorseriesmappingf3 } - \frac{\varepsilon^2_n}{3!}[\nabla^3_{\partial \textbf{a}_n \boldsymbol{f_n}^2 } u_n(\textbf{a}_n , \textbf{f}_n- \varepsilon_n \boldsymbol{\vartheta}_{n } ) ( \nabla_{\textbf{z}_n } \boldsymbol{f_n } ) ^2 \times \textbf{1}_k^{{\scriptsize{\textnormal{t } } } } + \nabla_{\textbf{a}_n \boldsymbol{f_n}}^2 u_n(\textbf{a}_n , \textbf{f}_n- \varepsilon_n \boldsymbol{\vartheta}_{n } ) \nabla^2_{\textbf{z}_n \textbf{z}_n } \textbf{f}_n \times \textbf{1}_k^{{\scriptsize{\textnormal{t}}}}]_{(\varepsilon_n=0)}\\ & + & o \nonumber \end{aligned}\ ] ] from ( [ utilityrobust2 ] ) , the first term in ( [ tailorseriesmappingf ] ) is equal to . since is a linear function of system parameters , the last term in ( [ tailorseriesmappingf3 ] ) is equal to zero . from ( [ mappingg ] ) , we have -\frac{\varepsilon^2_n}{3!}[\nabla_{\textbf{a}_n \boldsymbol{f_n}^2}^3 v_n(\textbf{a}_n , \textbf{f}_n ) \times ( \nabla_{\textbf{z}_n}\boldsymbol{f_n})^2\times \textbf{1}_k^{{\scriptsize{\textnormal{t } } } } ] + o , \end{aligned}\ ] ] from * a1 * , all the derivatives of are bounded .therefore , the last terms in ( [ perturbedtailorseries ] ) are bounded , and is the perturbed bounded version of .\1 ) consider the bounded perturbation of mappings and caused by variations in system parameters as .since the strategy space of all users in each dimension is bounded as in ( [ an ] ) , and the uncertainty region is bounded and convex , this region is also bounded , i.e. , . any solution tothe worst - case robust optimization in ( [ utilityrobsut ] ) corresponds to a realization of , where , and depends on obtained by ( [ utilityrobsutsaddlepoint ] ) for each user , and always .when is continuous and strictly monotone on the closed convex set , meaning that is a matrix , the solution to , denoted by , is a monotone and single - valued mapping on its domain ( exercise 2.9.17 in ) , i.e. , thus , when , we have , which is single valued on , i.e. , a unique solution for all .this completes the proof of the uniqueness of rne under the property of .2 ) recall that when is a matrix , is strictly monotone , and the utility is strictly convex .since is convex in , and is a continuous mapping on , the solution to is always a compact and convex set ( corollary 2.6.4 in ) .also , since is the optimum value of this convex set for , i.e. , , any point in this set is less than , which is a solution to .note that belongs to this set . since is a matrix and strictly monotone , we have which is also valid for . as such , the utility at the rne is less than that at the ne .3 ) since is strongly monotone , there is a unique solution denoted by , which can be considered as the worst - case robust solution to for . now, both and must satisfy where and is the all zero vector . 
by rearranging [ inequality1 ] ,we get since is the co - coercive function of ( proposition 2.3.11 in ) , the left hand side of ( [ inequality3 ] ) is always less than .using schwartz inequality for the right hand side , we have since and correspond to and , respectively , ( [ upperbound of variations ] ) can be obtained .4 ) since the difference between utility functions of each user at rne and at ne is equal to first term of the taylor series of with respect to all variations in the strategies of user and other users , we have which is equivalent to when is sufficiently small , the derivative of the strategy of each user is approximately equal to by expanding ( [ theoream42 ] ) for all users in the game , we have by replacing ( [ upperbound of variations ] ) into ( [ diffrencebetweenustilituesappecndix ] ) , the approximation ( [ diffrencebetweenustilitues ] ) is obtained .\1 ) from ( [ utilitylogconvex ] ) , the best - response of the nacg is {a_{nk}^{\text{min}}}^{a_{nk}^{\text{max } } } , \ ] ] where the lagrange multiplier for each user is so chosen to satisfy the sum constraint .therefore , the best response of this problem can be written as an _ avi _ , denoted by , where is obtained from ( [ avilog ] ) .2 ) for this case , the game has a unique ne when is strongly monotone or when is positive definite ( sections 2.3 and 2.4 in ) . by some rearrangements, we have where {nm}=\frac{x^{k}_{nm}}{x^{k}_{nn}} ] , we have all for all are positive definite matrices ( proposition 1 in ) . by rearranging ( [ proofpro14 ] ) ,we obtain ( [ conditionproposition1 ] ) .therefore , when ( [ proofpro14 ] ) holds , _ avi _ has a unique solution and consequently , the ne is unique .\1 ) from lemma 1 , the map of racg is the perturbed map of nacg .since the map of nacg is linear for the utility function ( [ utilitylogconvex ] ) , the perturbed map is where and are the elements of and , respectively .now , ( [ mappingm3 ] ) can be rewritten as therefore , the map at the rne is ( [ avilog ] ) .2 ) since is bounded in ] , $ ] , and is the column gradient vector .the last two terms in ( [ utilityrne ] ) are always positive , because of * a3 * and * a2*. the first term in ( [ utilityrne ] ) is always negative because is increasing according to and .the second term in ( [ utilityrne ] ) is always positive because is a decreasing function of and .therefore , the social utility increases when the negative terms of ( [ utilityrne ] ) are less than the positive terms . by some rearrangement and matrix manipulation , the condition for negative semi - definiteness of can be obtained .\1 ) consider , where is a closed convex set , is the monotone map related to , , and is the vector with bounded positive values . let be the solution to .when is strongly monotone , is monotone ( corollary 2.9.17 in [ 31 ] ) .when is a monotone and decreasing function , we have subtracting ( [ h1 ] ) from ( [ h2 ] ) , we get the above inequality leads to . because of the affinity in _ avi _ , ( [ h3 ] ) is where is a negative vector and .since is a convex and closed region , we have which is equivalent to the semi - negative matrix definition .2 ) from above , when the strategy of each user in the power control game in ( [ propostion1avi ] ) is a decreasing function , is a semi - negative matrix . therefore , the social utility increases when the strategy of each user is reducedobviously , is semi negative when is semi - positive , which leads to ( [ conditionproposition1forpowercontrolgame ] ) .f. meshkati , a. j. goldsmith , h. v. 
Poor, and S. C. Schwartz, "A game-theoretic approach to energy-efficient modulation in CDMA networks with delay QoS constraints," _IEEE Journal on Selected Areas in Communications_, vol. 25, no. 6, pp. 1069-1078, 2007.
G. Scutari, D. P. Palomar, and S. Barbarossa, "Optimal linear precoding strategies for wideband noncooperative systems based on game theory - part I: Nash equilibria," _IEEE Transactions on Signal Processing_, vol. 56, no. 3, pp. 1230-1249, March 2008.
G. Scutari, D. P. Palomar, and S. Barbarossa, "Optimal linear precoding strategies for wideband noncooperative systems based on game theory - part II: Algorithms," _IEEE Transactions on Signal Processing_, vol. 56, no. 3, pp. 1250-1267, March 2008.
E. A. Gharavol, Y.-C. Liang, and K. Mouthaan, "Robust downlink beamforming in multiuser MISO cognitive radio networks with imperfect channel-state information," _IEEE Transactions on Vehicular Technology_, vol. 59, no. 6, pp. 2852-2860, July 2010.
G. Scutari, D. P. Palomar, F. Facchinei, and J.-S. Pang, "Convex optimization, game theory, and variational inequality theory in multiuser communication systems," _IEEE Signal Processing Magazine_, vol. 27, no. 3, pp. 35-49, May 2010.
F. Giannessi, A. Maugeri, and P. M. Pardalos, Eds., _Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models (Nonconvex Optimization and Its Applications)_, 1st ed. Springer, January 2002.
M. Fukushima and G.-H. Lin, "Smoothing methods for mathematical programs with equilibrium constraints," _Proceedings of the 12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure_, July 2004, pp.
|
we study the robust nash equilibrium ( rne ) for a class of games in communications systems and networks where the impact of users on each other is an additive function of their strategies . each user measures this impact , which may be corrupted by uncertainty in feedback delays , estimation errors , movements of users , etc . to study the outcome of the game in which such uncertainties are encountered , we utilize the worst - case robust optimization theory . the existence and uniqueness conditions of rne are derived using finite - dimensions variational inequalities . to describe the effect of uncertainty on the performance of the system , we use two criteria measured at the rne and at the equilibrium of the game without uncertainty . the first is the difference between the respective social utility of users and , the second is the differences between the strategies of users at their respective equilibria . these differences are obtained for the case of a unique ne and multiple nes . to reach the rne , we propose a distributed algorithm based on the proximal response map and derive the conditions for its convergence . simulations of the power control game in interference channels , and jackson networks validate our analysis . resource allocation , robust game theory , variational inequality , worst - case robust optimization .
|
_ ( approximate ) nonnegative matrix factorization _( nmf ) is the problem of approximating a given nonnegative matrix by the product of two low - rank nonnegative matrices : given a matrix , one has to compute two low - rank matrices such that this problem was first introduced in 1994 by paatero and tapper , and more recently received a considerable interest after the publication of two papers by lee and seung .it is now well established that nmf is useful in the framework of compression and interpretation of nonnegative data ; it has for example been applied in analysis of image databases , text mining , interpretation of spectra , computational biology and many other applications ( see e.g. and references therein ) . +how can one interpret the outcome of a nmf ?assume each column of matrix represents an element of a data set : expression ( [ approx ] ) can be equivalently written as where each element is decomposed into a nonnegative linear combination ( with weights ) of nonnegative basis elements ( , the columns of ) .nonnegativity of allows interpretation of the basis elements in the same way as the original nonnegative elements in , which is crucial in applications where the nonnegativity property is a requirement ( e.g. where elements are images described by pixel intensities or texts represented by vectors of word counts ). moreover , nonnegativity of the weight matrix corresponds to an essentially additive reconstruction which leads to a _ part - based representation _ : basis elements will represent similar parts of the columns of .sparsity is another important consideration : finding sparse factors improves compression and leads to a better part - based representation of the data .we start this paper with a brief introduction to the nmf problem : section [ nmf ] recalls existing complexity results , introduces two well - known classes of methods : multiplicative updates and hierarchical alternating least squares and proposes a simple modification to guarantee their convergence .the central problem studied in this paper , nonnegative factorization ( nf ) , is a generalization of nmf where the matrix to be approximated with the product of two low - rank nonnegative matrices is not necessarily nonnegative .nf is introduced in section [ secnf ] , where it is shown to be np - hard for any given factorization rank , using a reduction to the problem of finding a maximum edge biclique .stationary points of the nf problem used in that reduction are also studied .this section ends with a generalization of the nmf multiplicative updates rules to the nf problem and a proof of their convergence .this allows us to shed new light on the standard multiplicative updates for nmf : a new interpretation is given in section [ intermu ] , which explains the relatively poor performance of these methods and hints at possible improvements .finally , section [ mbfa ] introduces a new type of biclique finding algorithm that relies on the application of multiplicative updates to the equivalent nf problem considered earlier .this algorithm only requires a number of operations proportional to the number of edges of the graph per iteration , and is shown to perform well when compared to existing methods .given a matrix and an integer , the _ nmf optimization problem _ using the frobenius norm is defined as denotes the set of real matrices of dimension ; the set of nonnegative matrices i.e. with every entry nonnegative , and the zero matrix of appropriate dimensions . 
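As a concrete point of reference for the alternating schemes discussed below, a minimal alternating nonnegative least squares routine for this problem can be written as follows; it solves each convex subproblem exactly with NNLS, in the spirit of the alternating approaches mentioned in the next paragraph. The random initialization and the fixed number of sweeps are illustrative choices.

```python
import numpy as np
from scipy.optimize import nnls

def alternating_nnls_nmf(M, r, n_iter=50, seed=0):
    """Minimal alternating NNLS for M ~ V W with V >= 0, W >= 0.

    Each subproblem is solved exactly, column by column (for W) and row by
    row (for V), using scipy's NNLS solver.  This is only a reference sketch,
    not one of the algorithms analysed in detail in the paper.
    """
    m, n = M.shape
    rng = np.random.default_rng(seed)
    V = rng.random((m, r))
    W = rng.random((r, n))
    for _ in range(n_iter):
        for j in range(n):                 # min_w ||V w - M[:, j]||, w >= 0
            W[:, j], _ = nnls(V, M[:, j])
        for i in range(m):                 # min_v ||W^T v - M[i, :]||, v >= 0
            V[i, :], _ = nnls(W.T, M[i, :])
    return V, W, np.linalg.norm(M - V @ W, 'fro')
```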
+ a wide range of algorithms have been proposed to find approximate solutions for this problem ( see e.g. ) .most of them use the fact that although problem is not convex , its objective function is convex separately in each of the two factors and ( which implies that finding the optimal factor corresponding to a fixed factor reduces to a convex optimization problem , and vice - versa ) , and try to find good approximate solutions by using alternating minimization schemes .for instance , nonnegative least squares ( nnls ) algorithms can be used to minimize ( exactly ) the cost function alternatively over factors and ( see e.g. ) .actually , there exist other partitions of the variables that preserve convexity of the alternating minimization subproblems : since the cost function can be rewritten as , it is clearly convex as long as variables do not include simultaneously an element of a column of and an element of the corresponding row of ( i.e. and for the same index ) .therefore , given a subset of indexes , is clearly convex for both the following subsets of variables and its complement however , the convexity is lost as soon as one column of ( ) and the corresponding row of ( ) are optimized simultaneously , so that the corresponding minimization subproblem can no longer be efficiently solved up to global optimality .vavasis studies in the algorithmic complexity of the nmf optimization problem ; more specifically , he proves that the following problem , called _ _ exact nonnegative matrix factorization _ _ , which is the minimum value of for which there exists and such that ( see ) .] , is np - hard : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( exact nmf ) given a nonnegative matrix of rank , find , if possible , two nonnegative factors and of rank such that . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the nmf optimization problem is therefore also np - hard , since when the rank is equal to the rank of the matrix , any optimal solution to the nmf optimization problem can be used to answer the exact nmf problem ( the answer being positive if and only if the optimal objective value of the nmf optimization problem is equal to zero ) .the np - hardness proof for exact nmf relies on its equivalence with a np - hard problem in polyhedral combinatorics , and requires both the dimensions of matrix and its rank to increase to obtain np - hardness . in contrast , in the special cases when rank is equal to or , the exact nmf problem can always be answered in the affirmative : 1 . when , it is obvious that for any nonnegative rank - one matrix there is nonnegative factors and such that . 
moreover ,the nmf optimization problem with can be solved in polynomial time : the perron - frobenius theorem implies that the dominant left and right singular vectors of a nonnegative matrix are nonnegative , while the eckart - young theorem states that the outer product of these dominant singular vectors is the best rank - one approximation of ; these vectors can be computed in polynomial - time using for example the singular value decomposition .when nonnegative matrix has rank , thomas has shown that exact nmf is also always possible ( see also ) .the fact that any rank - two nonnegative matrix can be exactly factorized as the product of two rank - two nonnegative matrices can be explained geometrically as follows : viewing columns of as points in , the fact that has rank implies that the set of its columns belongs to a two - dimensional subspace .furthermore , because these columns are nonnegative , they belong to a two - dimensional pointed cone , see figure [ rank2 ] .since such a cone is always spanned by two extremes vectors , this implies that all columns of can be represented exactly as nonnegative linear combinations of two nonnegative vectors , and therefore the exact nmf is always possible is that a -dimensional cone is not necessarily spanned by a set of vectors when . ] .+ ): and .,width=302 ] + moreover , these two extreme columns can easily be computed in polynomial time ( using for example the fact that they define an angle of maximum amplitude among all pairs of columns ) .hence , when the optimal rank - two approximation of matrix is nonnegative , the nmf optimization problem with can be solved in polynomial time . however , this optimal rank - two approximation is not always nonnegative , so that the complexity of the nmf optimization in the case is not known .furthermore , to the best of our knowledge , the complexity of the exact nmf problem and the nmf optimization problem are still unknown for any fixed rank or greater than . in their seminal paper , lee and seungpropose multiplicative update rules that aim at minimizing the frobenius norm between and . to understand the origin of these rules , consider the karush - kuhn - tucker first - order optimality conditions for where is the hadamard ( component - wise ) product between two matrices , and injecting ( [ gradf ] ) in ( [ mix ] ) , we obtain from these equalities , lee and seung derive the following simple multiplicative update rules ( where }{[.]} ] is stationary points of . + for , one can also check that the singular values of are disjoint and that the second pair of singular vectors is positive .since it is a positive stationary point of the unconstrained problem , it is also a stationary point of .as goes to infinity , it must get closer to a biclique of ( theorem [ sdinf ] ) .moreover is symmetric so that the right and left singular vectors are equal to each other .figure [ contsd ] shows the evolution associated with a positive singular value are continuously deformed with respect to .] 
of this positive singular vector of with respect to .it converges to and then the product of the left and right singular vector converges to ..,width=264 ] in this section , the mu of lee and seung presented in section [ lsalgo ] to find approximate solutions of are generalized to .other than providing a way of computing approximate solutions of , this result will also help us to understand why the updates of lee and seung are not very efficient in practice .+ the karush - kuhn - tucker optimality conditions of the problem are the same as for ( see section [ lsalgo ] ) .of course , any real matrix can be written as the difference of two nonnegative matrices : with .this can be used to generalize the algorithm of lee and seung .in fact , and become and using the same idea as in section [ lsalgo ] , we get the following multiplicative update rules : [ nfmultth ] for and with , the cost function is nonincreasing under the following update rules : }{[v w w^t+n w^t]}\ , , \qquad w \leftarrow w \circ \frac{[v^t p]}{[v^t v w+v^t n]}\,.\ ] ] we only treat the proof for since the problem is perfectly symmetric. the cost function can be split into independent components related to each row of the error matrix , each depending on a specific row of , and , which we call respectively , and .hence , we can treat each row of v separately , and we only have to show that the function is nonincreasing under the following update }{[v_0ww^t + n w^t ] } , \ ; \forall v_0 > 0.\ ] ] is a quadratic function so that with and . let be a quadratic model of around : with }{[v_0 ] } \big) ] is psd ( see also ) . since }{[v_0 ] } \big)$ ] is a diagonal nonnegative matrix for and , is also psd ._ ( 2 ) _ the global minimum of is given by : }{[v_0ww^t + n w^t ] } \\ & = & v_0 \circ \frac{[pw^t]}{[v_0ww^t + n w^t]}.\end{aligned}\ ] ] as with standard multiplicative updates , convergence can be guaranteed with a simple modification : [ nfmulteps ] for every constant and for with , is nonincreasing under }{[v w w^t + nw^t]}\big ) , \ ; w \leftarrow \max\big(\epsilon , w \circ\frac{[v^t p]}{[v^t v w + v^tn ] } \big)\,\ ] ] for any . moreover , every limit point of this algorithm is a stationary point of the optimization problem .we use exactly the same notation as in the proof of theorem [ nfmult ] , so that remains valid . by definition, is a diagonal matrix implying that _ is the sum of independent quadratic terms _ , each depending on a single entry of .therefore , }{[v_0ww^t+nw^t]}\big),\ ] ] and the monotonicity is proved .+ let be a limit point of a sequence generated by .the monotonicity implies that converges to since the cost function is bounded from below .moreover , where which is well - defined since . one can easily check that the stationarity conditions of for are finally , by , we have either and , or and , .the same can be done for by symmetry . in order to implement the updates, one has to choose the matrices and .it is clear that such that , there exists a matrix such that the two components and can be written and .when goes to infinity , the above updates do not change the matrices and , which seems to indicate that smaller values of are preferable .indeed , in the case , one can prove that is an optimal choice : [ r1opt ] s.t . , and : }{[v w w^t+m_- w^t ] } \quad \textrm { and } \quad v_2 = v \circ \frac{[p w^t]}{[v w w^t+n w^t]}.\ ] ] the second inequality of is a consequence of theorem [ nfmult ] .for the first one , we treat the inequality separately for each entry of i.e. 
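A direct implementation of these generalized multiplicative updates, including the epsilon-safeguard that guarantees every limit point is a stationary point, is sketched below. The split M = P - N into positive and negative parts, the random initialization and the iteration count are choices made for the sketch; with N = 0 the updates reduce to the standard rules for a nonnegative matrix.

```python
import numpy as np

def nf_multiplicative_updates(M, r, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative updates for nonnegative factorization M ~ V W, M real.

    M is split as M = P - N with P, N >= 0 (here its positive and negative
    parts), and V, W are kept >= eps at every iteration, which is the simple
    modification under which convergence to stationarity is guaranteed.
    """
    P = np.maximum(M, 0.0)
    N = np.maximum(-M, 0.0)
    m, n = M.shape
    rng = np.random.default_rng(seed)
    V = rng.random((m, r)) + eps
    W = rng.random((r, n)) + eps
    for _ in range(n_iter):
        V = np.maximum(eps, V * (P @ W.T) / (V @ W @ W.T + N @ W.T))
        W = np.maximum(eps, W * (V.T @ P) / (V.T @ V @ W + V.T @ N))
    return V, W

# Example: a matrix with mixed signs, approximated at rank 2.
M = np.array([[1.0, -0.3, 0.8],
              [0.2,  0.9, -0.5],
              [-0.1, 0.4,  0.7]])
V, W = nf_multiplicative_updates(M, r=2)
err = np.linalg.norm(M - V @ W, 'fro')
```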
we prove that let define as the optimal solution of the unconstrained problem i.e. and , , as noting that , we have suppose .therefore , moreover , since is a better solution than ( theorem [ nfmult ] ) .finally , the case is similar .unfortunately , this result does not hold for .this is even true for nonnegative matrices , i.e. one can improve the effect of a standard lee and seung multiplicative update by using a well - chosen matrix .+ with the following matrices we have where ( resp . ) is updated following using and ( resp . and ) . however , in practice , it seems that the choice of a proper matrix is nontrivial and can not accelerate significantly the speed of convergence .in this section , we use theorem [ nfmultth ] to interpret the multiplicative rules for and show why the hals algorithm performs much better in practice .the aim of the mu is to improve a current solution by optimizing alternatively ( fixed ) , and vice - versa . in order to prove the monotonicity of the mu , was shown to be nonincreasing under an update of a single row of ( resp .column of ) since the objective function can be split into ( resp . ) independent quadratic terms , each depending on the entries of a row of ( resp .column of ) ; cf .proof of theorem [ nfmultth ] . however , there is no guarantee , a priori , that the algorithm is also nonincreasing with respect to an individual update of a column of ( resp .row of ) .in fact , each entry of a column of ( resp .row of ) depends on the other entries of the same row ( resp .column ) in the cost function .the next theorem states that this property actually holds .[ corls ] for , is nonincreasing under }{[vww_{k:}^t ] } , \qquad w_{k : } \leftarrow w_{k : } \circ \frac{[v_{:k}^t m]}{[v_{:k}^t vw ] } , \quad \forall k,\ ] ] i.e. under the update of any column of or any row of using the mu .this is a consequence of theorem [ nfmultth ] using and .+ in fact , corollary [ corls ] sheds light on a very interesting fact : the multiplicative updates are also trying to optimize alternatively the columns of and the rows of using a specific cyclic order : first , the columns of and then the rows of .we can now point out two ways of improving the mu : 1 . when updating a column of ( resp .a row of ) , the columns ( resp . rows ) already updated are not taken into account : the algorithm uses their old values ; 2 .the multiplicative updates are not optimal : and ( cf .theorem [ r1opt ] ) .moreover , there is actually a closed - form solution for these subproblems ( cf .hals algorithm , section [ sechals ] ) .therefore , using theorem [ r1opt ] , we have the following new improved updates [ nfr1 ] for , is nonincreasing under }{[v_{:k } w_{k : } w_{k:}^t+(r_{k})_- w_{k:}^t ] } , \quad w_{k : } \leftarrow w_{k : } \circ \frac{[v_{:k}^t ( r_{k})_+]}{[v_{:k}^t v_{:k } w_{k:}+v_{:k}^t ( r_{k})_- ] } , \ ; \forall k,\ ] ] with .moreover , the updates perform better than the updates , but worse than the updates ( [ gilvcol1]-[gilvcol ] ) which are optimal .this is a consequence of theorem [ nfmultth ] and [ r1opt ] using and .+ in fact , .figure [ cbcl ] shows an example of the behavior of the different algorithms : the original mu ( section [ lsalgo ] ) , the improved version ( corollary [ nfr1 ] ) and the _ optimal _ hals method ( section [ sechals ] ) .the test was carried out on a commonly used data set for nmf : the cbcl face database , mit center for biological and computation learning . + available at :- . 
] ; 2429 faces ( columns ) consisting each of pixels ( rows ) for which we set and we used the same _ scaled _ ( see remark [ scaled ] below ) random initialization and the same cyclic order ( same as the mu i.e. first the columns of then the rows of ) for the three algorithms .-[gilvcol ] ) applied to the cbcl face database.,width=264 ] we observe that the mu converges significantly less rapidly than the two other algorithms .there do not seem to be good reasons to use either the mu or the method of corollary [ nfr1 ] since there is a closed - form solution ( [ gilvcol1]-[gilvcol ] ) for the corresponding subproblems . finally , the hals algorithm has the same computational complexity and performs provably much better than the popular multiplicative updates of lee and seung .of course , because of the np - hardness of and the existence of numerous locally optimal solutions , it is not possible to give a theoretical guarantee that hals will converge to a better solution than the mu : although its iterations are _ locally _ more efficient , they could still end up at a worse local optimum .[ power ] for , one can check that the three algorithms above are equivalent for .moreover , they correspond to the power method which converges to the optimal rank - one solution , given that it is initialized with a vector which is not perpendicular to the singular vector corresponding to the maximum singular value . [scaled ] we say that is scaled if the optimal solution to the problem is equal to 1 . obviously , any stationary point is scaled ; the next theorem is an extension of a result of ho et al . .[ scth ] the following statements are equivalent * is scaled ; * is on the boundary of , the ball centered at of radius ; * ( and then ) .the solution of can be written in the following closed form where is the scalar product associated with the frobenius norm . since , so that ( 1 ) and ( 2 ) are equivalent .for the equivalence of ( 1 ) and ( 3 ) , we have if and only if .theorem [ scth ] can be used as follows : when you compute the error of the current solution , you can scale it without further computational cost .in fact , note that the third term of can be computed in operations since where .this is especially interesting for sparse matrices since only a small number of the entries of ( which could be dense ) need to be computed to evaluate the second term of .in this section , an algorithm for the maximum edge biclique problem whose main iteration requires operations is presented .it is based on the multiplicative updates for nonnegative factorization and the strong relation between these two problems ( theorems [ thp ] , [ th3v ] and [ sdinf ] ) .we compare the results with other algorithms with iterates requiring operations using the dimacs database and random graphs . for sufficiently large , stationary points of are close to bicliques of ( theorem [ sdinf ] ). moreover , the two problems have the same cost function .one could then think of applying an algorithm that finds stationary points of in order to localize a large biclique of the graph generated by .this is the idea of algorithm [ mbfa ] using the multiplicative updates with a priori , it is not clear what value should take .following the spirit of homotopy methods , we chose to start the algorithm with a small value of and then to increase it until the algorithm converges to a biclique of . 
, , , , .}{[v^{}||w^{}||_2 ^ 2+d ( \mathbf{1}_{m } ||w^{}||_1 - m_b{w^{}}^t ) ] } \label{a } \\ w & \leftarrow & w^ { } \circ \frac{[{v}^tm_b]}{[||v^{}||_2 ^ 2 w^{}+d ( \mathbf{1}_{n } ||v^{}||_1 - { v^{}}^tm_b ) ] } \label{b } \\d \ ; & = & \ , \alpha d \nonumber \end{aligned}\ ] ] we observed that initial value of should not be chosen too large : otherwise , the algorithm often converges to the trivial solution : the empty biclique .in fact , in that case , the denominators in and will be large , even during the initial steps of the algorithm , and the solution is then forced to converge to zero . moreover , since the denominators in and depend on the graph density , the denser the graph is , the greater can be chosen and vice versa . on the other hand , since our algorithm is equivalent to the power method for ( cf .remark [ power ] ) , if is chosen too small , it will converge to the same solution : the one initialized with the best rank - one approximation of . for the stopping criterion, one could , for example , wait until the rounding of coincides with a feasible solution of .+ we briefly present here two other algorithms to find maximal bicliques using operations per iteration . in , the generalized motzkin - strauss formalism for cliquesis extended to bicliques by defining the optimization problem where , and .+ nonincreasing multiplicative updates for this problem are then provided : this algorithm does not necessarily converge to a biclique : if and are not sufficiently small , it may converge to a dense bipartite subgraph ( a bicluster ) .in fact , for , it converges to an optimal rank - one solution of the unconstrained problem as our algorithm does for . in , it is suggested to use and around 1.05 .finally , will favor one side of the biclique .we will use .the simplest heuristic one can imagine is to add , at each step , a vertex which is connected to most vertices in the other side of the bipartite graph .each time a vertex is selected , the next choices are restricted in order to get a biclique eventually : the vertices which are not connected to the one you have just chosen are deleted .the procedure is repeated on the remaining graph until you get a biclique .one can check that this produces a maximal biclique .we first present some results for graphs from the dimacs graph dataset: . ] .we extracted bicliques in those ( not bipartite ) graphs using the preceding algorithms .we performed 100 runs , 200 iterations each , for the two algorithms with the same initializations .we tried to choose appropriate parameters and for the dimacs graphs and and for the random graphs were tested and all gave worse results .small changes to the parameters of the mult .algorithm led to similar results , so that it seems less sensitive to the choice of its parameters than the m .- s . algorithm . 
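to make the generalized multiplicative updates of theorem [ nfmultth ] and the column-wise hals updates they are compared against concrete, here is a minimal numpy sketch. the split m = p - n with p = max( m , 0 ) and n = max( -m , 0 ) is the one shown to be optimal for the rank-one case in theorem [ r1opt ], and the lower bound at epsilon follows the modified updates of theorem [ nfmulteps ]; the extra epsilon in the denominators, the random initialisation and the iteration counts are assumptions of this sketch rather than part of the original algorithms.

....
import numpy as np

def nf_mu(M, r, n_iter=500, eps=1e-9, seed=0):
    # generalized lee-seung updates for min ||M - V W||_F^2 with V, W >= eps,
    # using the split M = P - N, P = max(M, 0), N = max(-M, 0)
    rng = np.random.default_rng(seed)
    m, n = M.shape
    P, N = np.maximum(M, 0.0), np.maximum(-M, 0.0)
    V, W = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(n_iter):
        V = np.maximum(eps, V * (P @ W.T) / (V @ (W @ W.T) + N @ W.T + eps))
        W = np.maximum(eps, W * (V.T @ P) / ((V.T @ V) @ W + V.T @ N + eps))
    return V, W

def nf_hals(M, r, n_iter=500, eps=1e-9, seed=0):
    # column-wise updates: each column of V and row of W is replaced by the
    # closed-form minimizer of its own subproblem (clipped at eps)
    rng = np.random.default_rng(seed)
    m, n = M.shape
    V, W = rng.random((m, r)) + eps, rng.random((r, n)) + eps
    for _ in range(n_iter):
        for k in range(r):
            Rk = M - V @ W + np.outer(V[:, k], W[k, :])   # residual without term k
            V[:, k] = np.maximum(eps, Rk @ W[k, :] / (W[k, :] @ W[k, :] + eps))
            W[k, :] = np.maximum(eps, V[:, k] @ Rk / (V[:, k] @ V[:, k] + eps))
    return V, W

# small comparison on a random signed matrix (a stand-in for the cbcl data)
M = np.random.default_rng(1).normal(size=(60, 40))
for solver in (nf_mu, nf_hals):
    V, W = solver(M, r=5, n_iter=200)
    print(solver.__name__, "residual:", round(float(np.linalg.norm(M - V @ W)), 3))
....

on data of this kind the column-wise updates typically reach a given residual in far fewer iterations than the multiplicative updates, which is the behaviour reported for the cbcl comparison above.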
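algorithm [ mbfa ] can be sketched in numpy as follows: the rank-one generalized updates are applied with p = m_b ( the biadjacency matrix ) and n = d ( 1 - m_b ), and the penalty d is increased geometrically in an outer loop. the starting value of d, the growth factor alpha, the 0.5 rounding threshold and the stopping test below are assumptions; the text only recommends a small initial d ( larger for denser graphs ) and growing it until the rounded ( v , w ) is a feasible biclique.

....
import numpy as np

def biclique_mu(Mb, d0=0.1, alpha=1.1, n_outer=60, n_inner=20, eps=1e-9, seed=0):
    # Mb is the m x n biadjacency matrix (1 = edge); rank-one generalized MU
    # with P = Mb and N = d * (1 - Mb), d increased geometrically (homotopy)
    rng = np.random.default_rng(seed)
    m, n = Mb.shape
    v, w = rng.random(m) + eps, rng.random(n) + eps
    rows, cols = np.zeros(m, bool), np.zeros(n, bool)
    d = d0
    for _ in range(n_outer):
        for _ in range(n_inner):
            v = v * (Mb @ w) / (v * (w @ w) + d * (w.sum() - Mb @ w) + eps)
            w = w * (v @ Mb) / ((v @ v) * w + d * (v.sum() - v @ Mb) + eps)
        d *= alpha
        rows, cols = v > 0.5 * v.max(), w > 0.5 * w.max()   # rounding rule (assumed)
        if rows.any() and cols.any() and Mb[np.ix_(rows, cols)].all():
            break                                           # feasible biclique found
    return rows, cols

# usage on a small random bipartite graph with edge density 0.3
Mb = (np.random.default_rng(2).random((30, 25)) < 0.3).astype(float)
rows, cols = biclique_mu(Mb)
print(int(rows.sum()), "x", int(cols.sum()), "biclique,",
      int(Mb[np.ix_(rows, cols)].sum()), "edges")
....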
] for both algorithms .table [ tabledimacs ] displays the cardinality of the biclique extracted by the different algorithms .table [ tablerand ] shows the results for random graphs : we have generated randomly 100 graphs with 100 vertices for different densities ( the probability of an edge to belong to the graph is equal to the density ) .the average numbers of edges in the solutions for the different algorithms are displayed for each density .we kept the same configuration as for the dimacs graphs ( same initializations , 100 runs for each graph , 200 iterations ) .it seems that the multiplicative updates generates , in general , better solutions , especially when dealing with dense graphs .the algorithm based on the motzkin - strauss formalism seems less efficient and is more sensitive to the choice of its parameters . & ( m ) & & ( ) & ( , ) + .solutions for dimacs data : number of edges in the bicliques . [cols="^ " , ]we have introduced nonnegative factorization ( nf ) , a new variant of nonnegative matrix factorization ( nmf ) , and proved its np - hardness for any fixed rank by reduction to the maximum edge biclique problem .the multiplicative updates for nmf can be generalized to nf and provide a new interpretation of the algorithm of lee and seung , which explains why it does not perform well in practice .we also developed an heuristic algorithm for the biclique problem whose iterations require operations , based on theoretical results about stationary points of a specific rank - one nonnegative factorization problem and the use of multiplicative updates .to conclude , we point out that none of the algorithms presented in this paper is guaranteed to converge to a globally optimal solution ( and , to the best of our knowledge , such an algorithm has not been proposed yet ) ; this is in all likelihood due to the np - hardness of the nmf and nf problems .indeed , only convergence to a stationary point has been proved for the algorithms of sections [ nmf ] and [ secnf ] , a property which , while desirable , provides no guarantee about the quality of the solution obtained ( for example , nothing prevents these methods from converging to a stationary but rank - deficient solution , which in most cases could be further improved ) .finally , no convergence proof for the biclique finding algorithm introduced in section [ mbfa ] is provided ( convergence results from the preceding sections no longer hold because of the dynamic updates of parameter ) ; however , this heuristic seems to give very satisfactory results in practice . + * acknowledgment .* we thank pr .paul van dooren and pr .laurence wolsey for helpful discussions and advice . ,_ nonnegativity constraints in numerical analysis_. paper presented at the symposium on the birth of numerical analysis , leuven belgium . to appear in the conference proceedings , to be published by world scientific press , a. bultheel and r. cools , eds
|
nonnegative matrix factorization ( nmf ) is a data analysis technique which allows compression and interpretation of nonnegative data . nmf became widely studied after the publication of the seminal paper by lee and seung ( learning the parts of objects by nonnegative matrix factorization , nature , 1999 , vol . 401 , pp . 788791 ) , which introduced an algorithm based on multiplicative updates ( mu ) . more recently , another class of methods called hierarchical alternating least squares ( hals ) was introduced that seems to be much more efficient in practice . in this paper , we consider the problem of approximating a not necessarily nonnegative matrix with the product of two nonnegative matrices , which we refer to as nonnegative factorization ( nf ) ; this is the subproblem that hals methods implicitly try to solve at each iteration . we prove that nf is np - hard for any fixed factorization rank , using a reduction to the maximum edge biclique problem . we also generalize the multiplicative updates to nf , which allows us to shed some light on the differences between the mu and hals algorithms for nmf and give an explanation for the better performance of hals . finally , we link stationary points of nf with feasible solutions of the biclique problem to obtain a new type of biclique finding algorithm ( based on mu ) whose iterations have an algorithmic complexity proportional to the number of edges in the graph , and show that it performs better than comparable existing methods . * keywords : * nonnegative matrix factorization , nonnegative factorization , complexity , multiplicative updates , hierarchical alternating least squares , maximum edge biclique .
|
support vector machine ( svm ) was , at first , introduced by vladimir for a binary classification tasks in order to construct , in the input space , the decision functions based on the theory of structural risk minimization , ( and ) .afterwards , svm has been extended to support either the multi - class classification and regression tasks .svm consists of constructing one or several hyperplanes in order to separate the data into the different classes .nevertheless , an optimal hyperplane must be found in order to separate accurately the data into two classes .+ defined the optimal hyperplane as the decision function with maximal margin .indeed , the margin can be defined as the shortest distance from the separating hyperplane and the closest vectors to the couple of classes .the application of svm to the automatic speech recognition ( asr ) problem has shown a competitive performance and accurate recognition rates . in the sound system of a language , a phoneme is considered as the smallest distinctive unit which is able to communicate a possible meaning .thus , the success of the phoneme recognition task is important to the development of language systems. nevertheless , during the signal acquisition process , the speech signal may be affected by the speaker characteristics such as his gender , accent , and style of speech . also , there are other external factors which can admittedly have an impact on the speech recognition such as the noise coming from a microphone or the variation in the vocal tract shape . + the standard formulation of svm may not determine accurately the identity of the tested phoneme . indeed , the speech signal is accompanied by all sorts of unpleasant variations during the acquisition .those variations affect badly the recognition rates since the recognition mechanism may not be taken into account those changes in the phoneme data .for example , in the real - application problems , the english pronunciation differences and the differences in accents may lead to increase significantly the error rate of any learning algorithm since all phoneme data are handled identically .thus , the standard svm may find an optimal hyperplane without considering the influences of the differences accompanied by the speech signals .thus , the identified optimal hyperplane can lead to loss of accuracies .+ in this paper , we propose a novel approach in order to incorporate a belief function into the standard svm algorithm which involves integrating confidence degree of each phoneme data . 
to fulfill this new formulation ,we have , beforehand , compute the geometric distance between the centers of each possible class of the tested phoneme .indeed , the benefit of hybrid approaches relies in their support to the decision - making and their ability to confirm the robustness of the recognition system , .the experimental results with all phoneme datasets issued from the timit database show that the b - svm outperforms the standard svm and produces a better recognition rates .the rest of this paper is organized as follows : section [ 1 ] presents an overview of the method support vector machines ( svm ) .section [ 2 ] presents the steps of the phoneme processing and the problems which accompanying the speech processing .section [ bsvm ] presents the new formulation b - svm algorithm ; section [ sys ] describes the hierarchical phoneme recognition system ; section [ res ] presents the experimental results and a comparison between the standard svm and b - svm in a multi - class phoneme recognition problem .the final section is the conclusion .the support vector machines ( svm ) is a learning algorithm for pattern recognition and regression problems whose approaches the classification problem as an approximate implementation of the structural risk minimization(srm ) induction principle .+ svm approximates the solution to the minimization problem of srm through a quadratic programming optimization . it aims to maximize the margin which is the distance from a separating hyperplane to the closest positive or negative sample between classes . +hence the hyperplane that optimally separates the data is the one that minimises : where is a penalty to errors and is a positive slack variable which measures the degree of misclassification .+ subject to the constraints : for the phoneme classification , the decision function of svm is expressed as : the above decision function gives a signed distance from a phoneme x to the hyperplane . however , when the data set is linearly non - separable , solving the parameters of this decision function becomes a quadratic programming problem .the solution to this optimization problem can be cast to the lagrange functional and the use of lagrange multipliers , we obtain the lagrangian of the dual objective function : where is the kernel of data and and the coefficients are the lagrange multipliers and are computed for each phoneme of the data set .they must be maximised with respect to .it must be pointed out that the data with nonzero coefficients are called support vectors .they determine the decision boundary hyperplane of the classifier .+ moreover , applying a kernel trick that maps an input vector into a higher dimensional feature sapce , allows to svm to approximate a non - linear function and . in this paper , we use svm with the radial basis function kernel ( rbf).this kernel choice was made after doing a case study in order to find the suitable kernel with which svm may achieve good generalization performance as well as the parameters to use . 
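as a hedged illustration of the baseline classifier described in this section, the sketch below trains an rbf-kernel svm on stand-in phoneme feature vectors with scikit-learn, whose svc class wraps libsvm and treats multi-class problems with one-against-one voting, matching the setting used later in the paper; the feature dimension, the number of classes and the values of c and gamma are placeholders rather than the ones selected in the authors' case study.

....
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in data: 200 phoneme segments, 39-dimensional feature vectors, 5 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 39))
y = rng.integers(0, 5, size=200)

# rbf-kernel svm; svc wraps libsvm and uses one-vs-one voting for multi-class data
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, gamma="scale",
                        decision_function_shape="ovo"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
....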
based on this principle ,the svm adopts a systematic approach to find a linear function that belongs to a set of functions with lowest vc dimension ( the vapnik chervonenkis dimension measure the capacity of a statistical classification algorithm ) .speech recognition is the process of converting an acoustic signal , captured by a microphone , to a set of words , syllables or phonemes .the speech recognition systems can be used for applications such as mobiles applications , commands , control , data entry , and document preparation .the steps of the speech processing are described in the figure 1 : + the phoneme processing consists , first , on converting the speech captured by a microphone to a sequence of feature vectors .then , a segmentation step is applied consisting on converting the continued speech signal to a set of units such as phonemes .once the train and test data sets are prepared , a classifier is applied to classify the unknown phonemes .however , the phoneme recognition systems can be characterised by many parameters and problems which have the effect of making the task of recognition more difficult .those factors can not be taking into account by the classifier since their accompanying the captured speech .+ in fact , the speech contains disfluencies , or periods of silence , and is much more difficult for the classifier to recognise than speech periods . in the other hand ,the speaker is not able to say phrases in the same or similar manner each time .thus , the phoneme recognition systems learn barely to recognize correctly the phoneme .the speaker s voice quality , such as volume and pitch , and breath control should also be taken into account since they distorted the speech .hence , the physiological elements must be taken into account in order to construct a robust phoneme recognition .+ regrettably , the classifier is not able to take into account all those external factors which are inherent in the signal speech in the recognition process which may lead to a confusion inter - phonemes problem . in this paper, we propose to incorporate a confidence degree which will help the standard classifier svm to find the optimal hyperplane and classify the phoneme into its class .the formulation of the proposed method b - svm is described in three steps ; the first step consists of computing the euclidean distance between the center of the different classes and the phoneme to be classified .the second step is to compute the confidence degree of the membership of the phoneme into the class .then , those confidence degrees are incorporated into svm to help to find the optimal hyperplane .we propose to calculate the geometric distance between and the center of the class where .we consider that there is a possibility to which the phoneme belongs to one of the classes .the geometric distance noted is calculated using euclidian distance .+ the higher value of is assigned to the most distant class from the phoneme and the lower value is associated with the closer class to the phoneme .this step consists on calculating the confidence degree of each phoneme .it tells the possibility that belongs to the class .this proposed algorithm allows the generation of confidence degree for each phoneme : + _ calculate confidence degrees _ .... begin set of phoneme samples with lables ; initialize confidence degree of samples : 1 if in the ith class , 0 otherwise ; : = center of the ith class ; end . .... 
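the pseudocode above does not spell out how the euclidean distances are turned into degrees, so the numpy sketch below is one plausible reading: compute the centre of each class from the crisp labels, measure the distance of every phoneme to every centre, and map smaller distances to larger degrees that sum to one over the classes. the inverse-distance normalisation and the small constant guarding against division by zero are assumptions.

....
import numpy as np

def confidence_degrees(X, y, n_classes):
    # centers[c] is the mean feature vector of the samples labelled c
    centers = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    # d[i, c] = euclidean distance from sample i to the center of class c
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    inv = 1.0 / (d + 1e-12)                       # nearer center -> larger value
    return inv / inv.sum(axis=1, keepdims=True)   # rows sum to one

# tiny usage with stand-in features
rng = np.random.default_rng(0)
Xs = rng.normal(size=(30, 4))
ys = np.arange(30) % 3                            # guarantees every class occurs
degrees = confidence_degrees(Xs, ys, 3)           # shape (30, 3)
....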
in a space where the data sets are not linearly separable and a multi - class classification problem , svm constructs classifiers for the training data set . in order to convert the multi - class problem into multiple binary problems , the approach one - against - one is used . in the proposed b - svm , we incorporate the confidence degree of each phoneme samples into the constraints since the identity is not affected by a scalar multiplication .we normalized the hyperplane to satisfy : in fact , the incorporation of the confidence degree allows to to reduce the restrictions when the phoneme have a high degree into the class . in the other hand , the dual representation of the standard svm allows to maximise the of each phonemethus , with high value of the confidence degree , the subject to can be easily satisfied which allows to consider this one as support vector which be helping to decide on the hyperplane .+ in the proposed b - svm , we optimize this formulation to obtain a new dual representation : in the standard svm , the class of a phoneme is determined by the sign of the decision function . in the proposed b - svm , the new decision function thus becomes : this new formulation will help for the decision making on the sign of phoneme in order to classify into its class .the architecture of our hierarchical phoneme recognition systems is described in the figure 2 : the recognition system proceeds as follows : ( 1 ) conversion from the speech waveform to a spectrogram ( 2 ) transforming the spectogram to a mel - frequency cepstral coefficients ( mfcc ) spectrum using the spectral analysis ( 3 ) segmentation of the phoneme data sets to sub - phoneme data sets ( 4 ) initiating the phoneme recognition at the first level of the system using b - svm to recognize the class of the unknown phoneme ( vowels or consonant ) ( 5 ) and , finally , initiate the phoneme recognition at the second level of the system using b - svm to recognize the identity of the unknown phoneme ( i.e. aa , ae , ih , etc ) .for the proposed recognition system , we have used the mel frequency cepstral coefficients ( mfcc ) feature extractor in order to convert the speech waveform to a set of parametric representation .+ davis and mermelstein were the first who introduced the mfcc concept for automatic speech recognition .the main idea of this algorithm consider that the mfcc are the cepstral coefficients calculated from the mel - frequency warped fourier transform representation of the log magnitude spectrum .the delta and the delta - delta cepstral coefficients are an estimation of the time derivative of the mfccs . 
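before the discussion of the cepstral derivatives continues below, here is a sketch that ties the pieces of this section together: an mfcc + delta + delta-delta front end and a libsvm-style classifier that receives the confidence degree of each phoneme in its labelled class as a per-sample weight. librosa and scikit-learn are used only as convenient stand-ins, the frame averaging is an assumption, and letting the degree scale the penalty c per sample is an analogy to, not a derivation of, the b-svm dual written above.

....
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(path, sr=16000, n_mfcc=13):
    # mfcc + delta + delta-delta, averaged over the frames of one phoneme segment
    # (very short segments may need a smaller delta width than librosa's default)
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.mean(axis=1)                     # one 39-dimensional vector

def train_weighted_svm(X, y, degrees, C=10.0, gamma="scale"):
    # degrees[i, c] is the confidence of sample i in class c (integer labels);
    # the degree of each sample in its own labelled class scales its penalty
    w = degrees[np.arange(len(y)), y]
    clf = SVC(kernel="rbf", C=C, gamma=gamma, decision_function_shape="ovo")
    clf.fit(X, y, sample_weight=w)
    return clf

# usage with stand-in features; degrees would come from the earlier sketch
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 39))
y = rng.integers(0, 5, size=100)
degrees = np.full((100, 5), 0.1)
degrees[np.arange(100), y] = 0.6                  # higher degree in the labelled class
clf = train_weighted_svm(X, y, degrees)
....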
including the temporal cepstral derivative aim to improve the performance of speech recognition system .+ those coefficients have shown a determinant capability to capture the transitional characteristics of the speech signal that can contribute to ameliorate the recognition task .the experiments using svm are done using libsvm toolbox .the table 1 recapitulate our main choice of experiments conditions : .experimental setup [ cols= " < , < " , ] [ tab : ressys3 ] to investigate the accuracy of the proposed method b - svm , we applied the standard svm and b - svm to timit database .it must be pointed out that for the prediction , we used a test samples which were not included in the training stage .we compare the performance of both methods and we note that the performance of b - svm is better than the standard svm for all data sets used .thus , the following results in the table 1 provides a summary through which we note that the proposed b - svm shows a remarkable improvement over standard svm .in our paper , we have proposed a new formulation of svm using the confidence degree for each object .we have , also , built an hierarchical phoneme recognition system .+ the new method b - svm seems to be more effective than the standard svm for all tested data sets . the new formulation of svm succeeded in improving phoneme recognition since the allocation of belief weights for each phoneme have the ability for modeling the similarity between phonemes in order to reduce the confusions inter - phonemes .we compare the performance of both methods and we note that the performance of b - svm is better than the standard svm for all data sets used .foster , i. , kesselman , c. : the grid : blueprint for a new computing infrastructure .morgan kaufmann , san francisco ( 1999 ) vapnik v : the nature of statistical learning theory ._ springer - verlag new york _ , 8(6):188 , ( 1995 ) .garofolo js , lamel lf , fisher wm , fiscus jg , pallett ds , dahlgren nl , zue v : timit acoustic - phonetic continuous speech corpus . in _texas instruments ( ti ) and massachusetts institute of technology ( mit ) _ ( 1993 ) .amami r , ben ayed d , ellouze n : phoneme recognition using support vector machine and different features representations .the 9th international conference distributed computing and artificial intelligence ( dcai).advances in intelligent and soft computing , springer berlin heidelberg , 151:587595 salamanca , spain ( 2012 ) .x. li , l. wang , e. sung : adaboost with svm - based compnent classifers .engineering applications of artificial intelligence , 21:785795 , 2008 .davis sb , mermelstein p : comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences ._ acoust speech signal processing _, 28(4):357366 ( 1980 ) .r. amami , d. ben ayed , n. ellouze : practical selection of svm supervised parameters with different feature representations for vowel recognition , international journal of digital content technology and its applications , vol .418 - 424 , 2013 .borrajo , b. baruque , e. corchado , j. bajo , j.m .corchado : hybrid neural intelligent system to predict business failure in small - to - medium - size enterprises .international journal of neural systems , 21 ( 04 ) , 277 - 296,2011 .ajith abraham : special issue : hybrid approaches for approximate reasoning .journal of intelligent and fuzzy systems 23(2 - 3 ) : pp .41 - 42 , 2012 .
|
the support vector machine ( svm ) method has been widely used in numerous classification tasks . the main idea of this algorithm is based on the principle of the margin maximization to find an hyperplane which separates the data into two different classes.in this paper , svm is applied to phoneme recognition task . however , in many real - world problems , each phoneme in the data set for recognition problems may differ in the degree of significance due to noise , inaccuracies , or abnormal characteristics ; all those problems can lead to the inaccuracies in the prediction phase . unfortunately , the standard formulation of svm does not take into account all those problems and , in particular , the variation in the speech input . + this paper presents a new formulation of svm ( b - svm ) that attributes to each phoneme a confidence degree computed based on its geometric position in the space . then , this degree is used in order to strengthen the class membership of the tested phoneme . hence , we introduce a reformulation of the standard svm that incorporates the degree of belief . experimental performance on timit database shows the effectiveness of the proposed method b - svm on a phoneme recognition problem . , phoneme , belief , timit
|
the sydney morning herald published a cartoon by george molnar after the opening of the parkes telescope in 1961 , in which a character on horseback , looking at the telescope , explains to a mate `` it s the telescope of the future .it can look back millions of years . ''( the cartoon is reproduced in robertson 1993 . ) in the weeks before the parkes 50th symposium i happened to read ( in the sydney morning herald ! )danish philosopher sren kierkegaard s statement life can only be understood backwards , but it must be lived forwards. this paper , based on a presentation at the parkes 50th symposium , attempts to combine these viewpoints to look back over the preceding 5 decades to determine how the telescope of the future has contributed to the development of astronomy by selecting a small number of highlights or incidents from each year , and placing them in the context of other national and international events of note from the time . in most cases , the paper selected for each year is the one which has had the greatest impact , as assessed by the number of subsequent citations amassed .this work has made extensive use of the sao / nasa astrophysics data system ( ads ) , and it is worth repeating their caveat that `` the citation database in the ads is not complete .please keep this in mind when using the ads citation lists . ''it should be noted that ads citation counts are less accurate in the first decades of the observatory s existence , and also that searching for `` parkes '' in the abstract of papers will inevitably miss many papers which appeared in journals such as nature and science ... but will pick up many papers not reporting the results of observations with the parkes telescope(s ) . as a result, this paper does not purport to be a complete listing of the highest impact papers , but does endeavour to illustrate both the nature and breadth of high impact research conducted at the observatory .october 31st saw the official opening of the parkes 210-foot radio - telescope , and commissioning work undertaken .john bolton returned from owens valley to become officer - in - charge ( oic ) of the australian national radio astronomy observatory ( anrao ) .the world s population passed 3 billion , yuri gagarin orbited the earth , the ( first version of the ) berlin wall was constructed , and joseph heller s _ catch 22 _ was published .telescope commissioning ended in 1962 and early observations yielded a number of fundamental results . in `` polarization of 20-cm wavelength radiation from radio sources ''gardner & whiteoak ( 1962 ) noted that their observations of linear polarization `` ... considerably strengthens the hypothesis that the synchrotron mechanism is responsible for the radiation from the nonthermal sources . ''observations of this linear polarization as a function of frequency quickly resulted in the detection of faraday rotation : `` faraday rotation effects associated with the radio source centaurus a '' ( cooper & price 1962 ) and `` polarization in the central component of centaurus a '' ( bracewell et al . 
1962 ) .elsewhere , john glenn orbited the earth , marilyn monroe died , the cuban missile crisis was played out , and rod laver won all four tennis grand slam tournaments in the same calendar year .a series of occultations of 3c273 by the moon enabled the location of this bright radio source to be located with sufficient accuracy for its optical counterpart to be identified : as a result 3c273 and 3c48 became the first recognised quasars : `` investigation of the radio source 3c 273 by the method of lunar occultations '' ( hazard , mackey & shimmins 1963 ) .gardner & whiteoak ( 1963 ) continued their polarisation studies ; `` polarization of radio sources and faraday rotation effects in the galaxy '' , inferring a galactic magnetic field from the measurements of faraday rotation as a function of galactic coordinates . in `` a radio source with a very unusual spectrum , ''bolton , gardner & mackey ( 1963 ) presented the first study of the source which would become the atca s primary flux density calibrator at cm - wavelengths , 1934 .the year started with the bogle & chandler mystery : the discovery on new year s day of the bodies of ( csiro scientist ) gilbert bogle & margaret chandler , and in november us president john f. kennedy was assassinated .the first zone of 408mhz survey was published ( bolton , gardner & mackey 1964 ) , and following the discovery of the oh main lines at 1665 and 1667 mhz , gardner , robinson , bolton & van damme ( 1964 ) reported `` detection of the interstellar oh lines at 1612 and 1720 mc / sec '' .the beatles toured australia , and the summer olympics were held in tokyo , with dawn fraser and betty cuthbert among australian gold medal winners . in 1965the kennedy 60-foot ( 18-m ) antenna became operational .the antenna was built by the company founded by donald snow kennedy in cohasset , massachusetts the company operated from 1947 to 1963 , with an advertisement in the september 1956 _ scientific american _ promising `` down - to - earth solutions to out - of - this - world problems '' , and another 60-foot telescope becoming the george r. agassiz radio telescope of harvard observatory ( bok 1956 ) . in a one - page paper , `` the supernova of a.d .1006 '' , gardner and milne ( 1965 ) solved a 959 year old mystery by identifying the remnant of sn1006 with the polarized , extended radio source 1459 .the tidbinbilla deep space tracking station and was officially opened by prime minister sir robert menzies ; who in the same year committed australian troops to vietnam and reintroduced conscription . 
the `` 21 cm hydrogen - line survey of the large magellanic cloud .distribution and motions of neutral hydrogen '' of mcgee & milton ( 1966 ) was carried out over several years with a 48 channel line receiver with a spectral resolution of 7kms .kellermann s ( 1966 ) discovery of radio emission from uranus which is the atca s primary flux density calibrator in the mm bands meant that with cm and mm calibrators identified it would only take another 25 years for the telescope that would use them to come along !the astronomical society of australia ( asa ) was founded , with harley wood as its first president .the business of finding optical counterparts to radio sources was booming : pks 0106 + 01 , identified by bolton , clarke , savage & vron ( 1965 ) became the most distant object then known , with a redshift of =2.1 being determined by burbidge ( 1966 ) .decimal currency was introduced in australia , replacing pounds and pence .st kilda won the victorian football league ( vfl ) grand final and england won the soccer / football world cup : neither feat has been repeated !the 48-channel spectrometer was also used by kerr & vallak ( 1967 ) in presenting their `` hydrogen - line survey of the milky way i. the galactic center '' .the farthest known object became another parkes source , pks0237 at =2.22 ( arp , bolton & kinman 1967 ) .the byrds released their lp record _younger than yesterday _ , which is notable for the song `` cta 102 '' , written following press reports of speculation that this radio source contained transmissions from an extra - terrestrial civilisation ( see kellermann 2002 for more details ) .the molonglo cross radiotelescope began full operation at 408mhz .the world fair was held in montreal : barnes & jackson ( 2008 ) note that `` expo 67 marks australia s return to international exhibitions after nearly thirty years . planned during the period of economic disengagement from britain ,the pavilion reveals how the register for australia s self - representation had unfolded since 1939 .australia now emphasised its scientific and technical proficiency with large - scale models of the snowy mountains hydroelectric scheme and the parkes radio telescope , as well as evidence of manufacturing capacity through examples of modernist furniture and product design '' ( see figure 1 ) . in december ,prime minister harold holt disappeared while swimming , with john mcewen subsequently becoming prime minister .the 408mhz survey completed its northernmost zone , + 20 + 27 ( shimmins & day 1968 ) with follow - up optical identifications in close pursuit ( bolton , shimmins & merkelijn 1968 ) .the discovery of pulsars was announced and parkes first contribution was to correct the 5th significant figure of the period of the first pulsar : `` measurements on the period of the pulsating radio source at 1919 + 21 '' ( radhakrishnan , komesaroff , cooke 1968 ) .the prague spring saw political liberalisation begin in czechoslovakia in january , and end in august with the invasion by the soviet union .martin luther king was assassinated in april and robert kennedy in june .the summer olympics were held in mexico city , with michael wenden winning the 100 m and 200 m freestyle events .radhakrishnan & cooke ( 1969 ) described `` magnetic poles and the polarization structure of pulsar radiation '' , and parkes observations of the first vela glitch were reported ( though not in those words ) in `` detection of a change of state in the pulsar psr 0833 '' ( radhakrishnan & manchester 1969 ) . 
the first vlbi observations with parkes were undertaken in 1969 , with fringes on the baseline to owens valley being found from observations in april ( kellermann et al .one of the many technical challenges was described : `` time in australia was synchronized with time in owens valley via the nasa tracking stations at tidbinbilla , australia , and goldstone , california .the tracking stations themselves were synchronized to an accuracy of a few microseconds by the transmission of a radar signal from goldstone to tidbinbilla via the moon . ''apollo 11 was launched on july 16 ( ut ) , landed on the moon on july 20 , with the `` small step and giant leap '' relayed via parkes on july 21 ( sarkissian 2001 ) .apollo 12 , which followed in november the same year , was supported by the team pictured in figure 2 .the discovery of recombination lines is described by robinson ( 1994 ) : parkes may have missed the opportunity to have first observed these but was quick to follow - up , with `` a survey of h 109 recombination line emission in galactic hii regions of the southern sky '' ( wilson , mezger , gardner & milne 1982 ) being made with the nrao 6 cm cooled receiver .the metric conversion act is passed , instantly converting the 210-foot telescope into a 64 m telescope !the first stage of resurfacing the dish with perforated aluminium panels started .apollo 13 was launched on april 11 ( ut ) , limping back to earth 6 days later , again with significant , and hastily arranged , parkes contributions ( e.g. , bolton 1994 ; sherwen 2010 ) .the parkes 2700 mhz survey was in full swing , with `` catalogues for the declination zone and for the selected regions '' published by wall , shimmins & merkelijn ( 1971 ) .bolton stepped down as oic in 1971 , with john shimmins taking on the role , however bolton stayed on as `` astronomer at large '' until 1981 .the year also saw first mcdonalds opened in australia , the south sydney rabbitohs win the rugby league grand final , and evonne goolagong win wimbledon .the first five papers ( and 166 pages ) of the astrophysical journal supplement volume 24 reported results from the parkes hydrogen line interferometer , in which the `` signal from a remotely controlled movable 18-m paraboloid is cross - correlated with the signal from the stationary 64-m reflector . ''the papers were all titled `` the parkes survey of 21-cm absorption in discrete - source spectra '' with paper i describing the parkes hydrogen - line interferometer ( radhakrishnan , brooks , goss , murray , & schwarz 1972 ) ; ii .galactic 21-cm observations in the direction of 35 extragalactic sources ( radhakrishnan , murray , lockhart , & whittle 1972 ) ; iii .21-centimeter absorption measurements on 41 galactic sources north of declination ( radhakrishnan , goss , murray & brooks 1968 ) ; iv .21-centimeter absorption measurements on low - latitude sources south of declination ( goss , radhakrishnan , brooks & murray 1968 ) ; and v. note on the statistics of absorbing hi concentrations in the galactic disk ( radhakrishnan & goss 1968 ) . 
the second stage of resurfacing the dish with panels to a diameter of 37 m was carried out .gough whitlam became prime minister ; the summer olympics were overshadowed by the `` munich massacre '' of 11 israeli athletes , coaches and officials .the growing number of discoveries of organic molecules in space led to a collaboration between radiophysics astronomers and monash university chemists , resulting in the discoveries of interstellar methanimine , ch2nh , at 5290mhz ( godfrey , brown , robinson , & sinclair 1973 ) , and thioformaldehyde , ch2s , at 3139mhz ( sinclair , fourikis , ribes , robinson , brown & godfrey 1973 ) .the iau general assembly adopted the jansky as the unit of measurement for spectral flux density .the australian 1 coin was introduced , replacing the 172,000 .dave cooke became oic , the berlin wall came down , and australian pilots went on strike , severely impacting domestic travel . the pkscat90 version of the parkes catalog was released ( wright & otrupcek 1990 ) , containing radio and optical data for the 8264 radio sources in the parkes 2700 mhz survey , covering all the sky south of a declination of + 27 but largely excluding the galactic plane and the magellanic cloud regions .the 4850 mhz receiver that had been on the nrao 300-foot telescope at the time of its collapse was brought to parkes for the parkes - mit - nrao ( pmn ) surveys , carried out in june and november 1990 , covering the sky between dec .the australia telescope national facility was established , and the hubble space telescope was launched .nelson mandela was released after 27 years incarceration . te lintel hekkert et al .( 1991 ) conducted a `` 1612 mhz oh survey of iras point sources .i observations made at dwingeloo , effelsberg and parkes '' .a total of 2703 iras sources were observed , with 738 oh / ir stars being detected , 597 of which were new discoveries .( the first ) ten millisecond pulsars were discovered in the globular cluster 47 tucanae ( manchester et al . 1991 ) .ron ekers planted an apple tree near the ( then ) entrance to the visitors centre .the tree was a direct descendent of the apple tree which is reputed to have stimulated newton s development of a theory of gravitation .the tree has struggled with drought ( and being run over ! ) but is now complemented by additional trees in the garden outside the new vc entrance , with signage telling the story behind the tree s arrival at parkes .the year also saw first light with mopra , the public debut of the world wide web , the launch of the compton gamma - ray observatory , and paul keating become prime minister .johnston et al . ( 1992 ) reported the discovery of `` psr 1259 a binary radio pulsar with a be star companion . ''the 47ms pulsar is in a highly eccentric orbit around its massive companion , with the pulsar eclipsed by the companion star s stellar wind near periastron . the high - frequency ( 1500mhz high frequency for pulsar astronomers ! 
) survey of 800 square degrees of the southern galactic plane that yielded the discovery of psr 1259 was also published ( johnston et al .one - person - in - the - tower operation of the telescope commenced .`` beyond southern skies : radio astronomy and the parkes telescope '' ( robertson 1992 ) was published .the astro - ph server started , and the nasa / ads website was launched .the summer olympics were held in barcelona , with kieren perkins winning the 1,500 m freestyle , the oarsome foursome winning the men s coxless fours , and australia earning the gold medal in the equestrian three - day team event . the first pmn paper , `` the parkes - mit - nrao ( pmn ) surveys .i the 4850 mhz surveys and data reduction '' was published ( griffith & wright 1993 ) .one of the limitations of the survey was recognised after the observations as being due to `` complex , off - axis sidelobes of a radio telescope caused by feed - support legs '' ( hunt & wright 1992 ) which allowed enough radiation from the sun to enter the feed to compromise a small part of the surveyed area .the european union was established , and bill clinton became us president .the second pmn paper : `` ... source catalog for the southern survey '' was published ( wright , griffith , burke & ekers 1994 ) .marcus price became oic , however csiro budget cuts resulted in six staff being made redundant at parkes during the year . ``parkes , thirty years of radio astronomy '' ( goddard & milne 1994 ) was published .the premature deaths of kurt cobain and ayrton senna were mourned by the music and sporting worlds , respectively .`` relativistic motion in a nearby bright x - ray source '' was reported by tingay et al .( 1995 ) , based on target of opportunity vlbi observations of gro j1655 - 40 over four days in august .the sheve ( southern hemisphere vlbi experiment ) array included parkes , tidbinbilla , hobart , mopra and atca .project phoenix used parkes ( as the primary station ) and mopra ( for rapid independent follow - up of candidates ) from february to june ( tarter 1997 ) .observing was shut - down for two months for the new focus cabin installation .the dvd format was announced and ebay was founded .the former has been deployed in archiving telescope data , and the latter has proved useful in sourcing otherwise hard - to - find components for more than one piece of astronomical equipment !this year saw publication of `` the parkes 21 cm multibeam receiver '' ( staveley - smith et al .the paper does not present any results from parkes publications ( so in the strictest sense should not qualify for inclusion ) but is notable for the large number of citations , and for describing what went on to become what is probably the observatory s most productive receiver .the paper concludes : `` documentation of the parkes multibeam receiver , including more details on scientific goals , observing and data - reduction techniques , can be found on the world wide web .the address ( as of july 1996 ) is http://www.atnf.csiro.au/research/multibeam/ multibeam.html . ''the foresight of acknowledging that urls may not be permanent was well - founded , as that address now yields a `` 404 '' , however http://www.atnf.csiro.au/research/multibeam/ multibeam.html does ( at the time of writing ! 
) still exist .the dish spent much of the year tracking galileo , which arrived at jupiter in december 1995 , for nasa .henry parkes image appears on the australian one - dollar coin of 1996 , commemorating the 100th anniversary of his death .( it is worth noting that the original settlement in 1853 was named currajong , which later became bushman s lead , or simply bushman s .it was not until 1873 that the town was renamed parkes , after the then premier of nsw . )john howard became prime minister .the summer olympics were held in atlanta , with back - to - back gold medals for kieren perkins , the oarsome foursome , and the equestrian team !ct , freeman , carignan & quinn ( 1997 ) scanned src j films to find dwarf irregular galaxy candidates in the sculptor and centaurus groups of galaxies , and obtained redshifts with parkes hi and optical h observations to report the `` discovery of numerous dwarf galaxies in the two nearest groups of galaxies '' .galileo tracking continued , and the mb20 receiver was installed for the first time .the halca satellite of the vlbi space observatory programme ( vsop ) was launched , the first harry potter novel published , princess diana died , and the adelaide crows won the afl grand final . the first results from 20 cm multibeam receiver observations included `` tidal disruption of the magellanic clouds by the milky way '' by putman et al .( 1998 ) , which revealed a stream of atomic hydrogen leading the motion of the clouds ( i.e. , on the opposite side of the magellanic stream ) .john reynolds became oic , and the gst ( goods & services tax ) was introduced .stanimirovic , staveley - smith , dickey , sault , & snowden ( 1999 ) combined single - beam parkes 21 cm data from 1996 with an atca mosaic to study `` the large - scale hi structure of the smc . ''the world population topped 6 billion , the euro currency was established , the mars climate orbiter was lost , and it was feared that `` y2k '' would wreak havoc on computers .the `` discovery of two high magnetic field radio pulsars '' with the multibeam receiver was reported by camilo et al .the sydney olympics were held , with the the olympic torch receiving a ride on the dish as the torch relay made its way past parkes .the movie `` the dish '' was the top - grossing movie in australia for the year , and the expanded visitor centre opened just in time to welcome the increased numbers inspired to visit by the movie !this year marked the centenary of federation , with henry parkes role commemorated by his picture appearing on a special 1 coin .blind searches for periodicities in fermi gamma - ray data were resulting in the discovery of gamma - ray pulsars , and camilo et al .( 2009 ) used archival parkes data and green bank telescope observations to report the `` radio detection of lat psrs j1741 and j2032 + 4127 : no longer just gamma - ray pulsars . ''csiro astronomy and space science was formed , destructive bushfires burned across victoria , the emergence of a new h1n1 strain caused a swine flu pandemic , and michael jackson died .it is likely that another fermi pulsar paper incorporating parkes data will end up as the most cited paper from this year , but it is notable that papers from the next generation of radio astronomers `` a radio - loud magnetar in x - ray quiescence '' ( levin et al .2010 ) , and `` 12.2-ghz methanol masers towards 1.2-mm dust clumps '' ( breen et al . 
2010 ) are also having an appreciable scientific impact .s - pass observing ended , and the eyjafjallajokull volcano erupted , significantly disrupting air travel .this year of course saw the parkes 50th celebrations , which included opera at the dish , attended by governor general quentin bryce .parkes telescopes were pictured on the google banner in australia on october 31st .the radioastron satellite was launched , japan was rocked by a major earthquake and tsunami and the world population passed 7 billion .it is clear from this year - by - year review that the parkes observatory has produced high impact science in a wide range of fields , both those anticipated when the telescope was first planned hi , galactic structure , studies of the lmc & smc , snrs , surveys and those unforeseen quasars , masers , planets , molecular lines , radio recombination lines , pulsars , vlbi .bok ( 1957 ) wrote `` one could readily justify the establishment of observatories with large telescopes in the southern hemisphere because only there can one study the magellanic clouds '' and high impact papers from the years 1966 , 1974 , 1984 , 1988 , 1998 and 1999 have confirmed this .almost a third of the papers highlighted for each year have involved hi observations , and about a quarter have concerned pulsars .there is a good mix of galactic and extragalactic , spectral line and continuum , surveys and single objects .there have been notable contributions by the 18 m kennedy antenna , and vlbi observations , with higher frequency observations ( ) constituting about a quarter of the papers .the number of authors on the highlighted paper for each year is plotted in figure 3 , demonstrating the move to larger research teams over time .what factors have contributed to the impact of the parkes observatory ?the continual upgrading of dish surface , front - end receivers , and backend processors is clear , highlighting the importance of the funding brought in by spacecraft tracking contracts , csiro support , and collaborations with the international user community , not to mention the enabling contributions toward the telescope s construction .a large majority of high - impact papers have at least one csiro - affiliated co - author , confirming the belief that local knowledge and experience help to maximise the effective use of facilities .( and on the other hand , the fact there are high - impact papers unaffiliated with csiro makes it clear that the observatory is not a `` closed shop '' and that documentation and user support give all observers the chance to do good science . ) the excellence of support staff , both at the parkes and at radiophysics / atnf / cass in marsfield was referred to by a number of speakers over the week of the parkes 50th symposium , and is undoubtedly an important factor in the productivity of the observatory .the character in the molnar cartoon claimed the telescope could look back millions of years .we now know it can in fact look back billions of years , and the highlights from its first fifty years ensure it can look forward to many more !aaronson , m. , bothun , g. d. , cornell , m. e. , dawe , j. a. , dickens , r. j. , hall , p. j. , sheng , h. m. , huchra , j. p. , lucey , j. r. , mould , j. r. , murray , j. d. schommer , r. a. , wright , a. e.1989 , , 338 , 654 allen , d. a. , hyland , a. r. , longmore , a. j. , caswell , j. l. , goss , w. m. , haynes , r. f. 1977 , , 217 , 108 arp , h. c. , bolton , j. g. , & kinman , t. d. 
1967 , , 147 , 840 and erratum 148 , l165 barnes , d. g. , staveley - smith , l. , de blok , w. j. g. , et al .2001 , , 322 , 486 barnes , c. , & jackson , s. 2008 , `` a significant mirror of progress : modernist design and australian participation at expo 67 and expo 70 '' in `` seize the day : exhibitions , australia and the world '' eds . k. darian - smith , r. gillespie , c. jordan , and e. willis ( melbourne : monash university epress ) .. 20.1-20.19 batchelor , r. a. , caswell , j. l. , haynes , r. f. , wellington , k. j. , goss , w. m. , knowles , s. h. 1980 , australian journal of physics , 33 , 139 bok , b. j. 1956 , , 178 , 232 bolton , j. g. , gardner , f. f. , & mackey , m. b. 1963 , , 199 , 682 bolton , j. g. , gardner , f. f. , & mackey , m. b. 1964 , aust . j. phys ., 17 , 340 bolton , j. g. , clarke , m. e. , sandage , a. , & vron , p. 1965a , , 142 , 1289 bolton , j. g. , shimmins , a. j. , & merkelijn , j. 1968 , australian journal of physics , 21 , 81 bolton , j. g. , savage , a. , & wright , a. e. 1979 , australian journal of physics astrophysical supplement , 46 , 1 bolton , j. g. 1994 , in `` parkes , thirty years of radio astronomy '' ed.s d.e .goddard and d.k .milne , pp .134137 burbidge , e. m. 1966 , , 143 , 612 camilo , f. , kaspi , v. m. , lyne , a. g. , manchester , r. n. , bell , j. f. , damico , n. , mckay , n. p. f. , crawford , f. 2000 , , 541 , 367 camilo , f. , ransom , s. m. , halpern , j. p. , & reynolds , j. 2007 , , 666 , l93 camilo , f. , ray , p. s. , ransom , s. m. , et al .2009 , , 705 , 1 caswell , j. l. , murray , j. d. , roger , r. s. , cole , d. j. , & cooke , d. j. 1975 , , 45 , 239 caswell , j. l. , & lerche , i. 1979 , , 187 , 201 caswell , j. l. , & haynes , r. f. 1987 , , 171 , 261 clark , d. h. , & caswell , j. l. 1976 , , 174 , 267 gardner , f. f. , & whiteoak , j. b. 1963 , , 197 , 1162 gardner , f. f. , robinson , b. j. , bolton , j. g. , & van damme , k. j. 1964 , physical review letters , 13 , 3 gardner , f. f. , & milne , d. k. 1965 , , 70 , 754 gardner , f. f. , & whiteoak , j. b. 1974 , , 247 , 526 ghisellini , g. , ghirlanda , g. , tavecchio , f. , fraternali , f. , & pareschi , g. 2008 , , 390 , l88 griffith , m. r. , & wright , a. e. 1993 , , 105 , 1666 goddard , d. e. , & milne , d. k. 1994 , parkes : thirty years of radio astronomy ( csiro publishing ) greenhill , l. j. , booth , r. s. , ellingsen , s. p. , herrnstein , j. r. , jauncey , d. l. , mcculloch , p. m. , moran , j. m. , norris , r. p. , reynolds , j. e. , tzioumis , a. k. 2003 , , 590 , 162 haisch , b. m. , slee , o. b. , siegman , b. c. , nikoloff , i. , candy , m. , harwood , d. , verveer , a. , quinn , p. j. , wilson , i. , linsky , j. l. 1981 , , 245 , 1009 haslam , c. g. t. , salter , c. j. , stoffel , h. , & wilson , w. e. 1982 , , 47 , 1 haynes , r. f. , caswell , j. l. , & simons , l. w. j. 1978 , australian journal of physics astrophysical supplement , 45 , 1 hazard , c. , mackey , m. b. , & shimmins , a. j. 1963 , , 197 , 1037 hunt , a. , & wright , a. 1992 , , 258 , 217 johnston , s. , manchester , r. n. , lyne , a. g. , bailes , m. , kaspi , v. m. , qiao , g. , damico , n. 1992a , , 387 , l37 johnston , s. , lyne , a. g. , manchester , r. n. , kniffen , d. a. , damico , n. , lim , j. , ashworth , m. 1992b , , 255 , 401 kellermann , k. i. 1966 , icarus , 5 , 478 kellermann , k. i. , jauncey , d. l. , cohen , m. h. , et al .1971 , , 169 , 1 kellermann , k. i. 2002 , pasa , 19 , 77 kerr , f. j. , & vallak , r. 1967 , aust . j. phys. 
astrophys ., 3 , 3 kerr , f. j. , bowers , p. f. , jackson , p. d. , & kerr , m. 1986 , , 66 , 373 levin , l. , bailes , m. , bates , s. , bhat , n. d. r. , burgay , m. , burke - spolaor , s. , damico , n. , johnston , s. , keith , m. , kramer , m. , milia , s. , possenti , a. , rea , n. , stappers , b. , van straten , w. 2010 , , 721 , l33 lyne , a. g. , burgay , m. , kramer , m. , possenti , a. , manchester , r. n. , camilo , f. , mclaughlin , m. a. , lorimer , d. r. , damico , n. , joshi , b. c. , reynolds , j. , freire , p. c. c. 2004 , science , 303 , 1153 manchester , r. n. , damico , n. , & tuohy , i. r. 1985 , , 212 , 975 manchester , r. n. , lyne , a. g. , robinson , c. , bailes , m. , & damico , n. 1991 , , 352 , 219 manchester , r. n. , lyne , a. g. , camilo , f. , bell , j. f. , kaspi , v. m. , damico , n. , mckay , n. p. f. , crawford , f. , stairs , i. h. , possenti , a. , kramer , m. , sheppard , d. c. 2001 , , 328 , 17 manchester , r. n. , hobbs , g. b. , teoh , a. , & hobbs , m. 2005 , , 129 , 1993 mathewson , d. s. , cleary , m. n. , & murray , j. d. 1974 , , 190 , 291 mathewson , d. s. , ford , v. l. , & visvanathan , n. 1988 , , 333 , 617 mcculloch , p. m. , hamilton , p. a. , ables , j. g. , & hunt , a. j. 1983 , , 303 , 307 mcgee , r. x. , & milton , j. a. 1966 , australian journal of physics , 19 , 343 mclaughlin , m. a. , lyne , a. g. , lorimer , d. r. , kramer , m. , faulkner , a. j. , manchester , r. n. , cordes , j. m. , camilo , f. , possenti , a. , stairs , i. h. , hobbs , g. , damico , n. , burgay , m. , obrien , j. t. 2006 , , 439 , 817 peterson , b. a. , savage , a. , jauncey , d. l. , & wright , a. e. 1982 , , 260 , l27 preston , r. a. , morabito , d. d. , williams , j. g. , et al .1985 , , 90 , 1599 radhakrishnan , v. , komesaroff , m. m. , & cooke , d. j. 1968 , , 218 , 229 radhakrishnan , v. , & cooke , d. j. 1969 , , 3 , 225 radhakrishnan , v. , & manchester , r. n. 1969 , , 222 , 228 radhakrishnan , v. , brooks , j. w. , goss , w. m. , murray , j. d. , & schwarz , u. j. 1972 , , 24 , 1 radhakrishnan , v. , murray , j. d. , lockhart , p. , & whittle , r. p. j. 1972 , , 24 , 15 radhakrishnan , v. , goss , w. m. , murray , j. d. , & brooks , j. w. 1972 , , 24 , 49 radhakrishnan , v. , & goss , w. m. 1972 , , 24 , 161 rohlfs , k. , kreitschmann , j. , feitzinger , j. v. , & siegman , b. c. 1984 , , 137 , 343 sarkissian , j. m. 2001 , pasa , 18 , 287 shaver , p. a. , mcgee , r. x. , newton , l. m. , danks , a. c. , & pottasch , s. r. 1983 , , 204 , 53 sherwen , s. 2010 cosmos , april issue , http://www.cosmosmagazine.com/news/3403/apollo-13-australian-story stanimirovic , s. , staveley - smith , l. , dickey , j. m. , sault , r. j. , & snowden , s. l. 1999 , , 302 , 417 staveley - smith , l. , wilson , w. e. , bird , t. s. , disney , m. j. , ekers , r. d. , freeman , k. c. , haynes , r. f. , sinclair , m. w. , vaile , r. a. , webster , r. l. , wright , a. e. 1996 , pasa , 13 , 243 tarter , j. 1997 in astronomical and biochemical origins and the search for life in the universe , eds .cosmovici , s. bowyer , & d. werthimer ( editrice compositori , bologna ) , pp.633 - 643 te lintel hekkert , p. , caswell , j. l. , habing , h. j. , haynes , r. f. , haynes , r. f. , norris , r. p. 1991, , 90 , 327 tingay , s. j. , jauncey , d. l. , preston , r. a. , reynolds , j. e. , meier , d. l. , murphy , d. w. , tzioumis , a. k. , mckay , d. j. , kesteven , m. j. , lovell , j. e. j. , campbell - wilson , d. , ellingsen , s. p. , gough , r. , hunstead , r. w. , jonos , d. l. 
, mcculloch , p. m. , migenes , v. , quick , j. , sinclair , m. w. , smits , d. 1995 , , 374 , 141 turtle , a. j. , campbell - wilson , d. , bunton , j. d. , jauncey , d. l. , kesteven , m. j. , manchester , r. n. , norris , r. p. , storey , m. c. , & reynolds , j. e. 1987 , , 327 , 38 wall , j. v. , shimmins , a. j. , & merkelijn , j. k. 1971 , australian journal of physics astrophysical supplement , 19 , 1 wilson , t. l. , mezger , p. g. , gardner , f. f. , & milne , d. k. 1970 , , 6 , 364 wright , a. , & otrupcek , r. 1990 , pks catalog
|
the scientific output of parkes over its fifty year history is briefly reviewed on a year - by - year basis , and placed in context with other national and international events of the time .
|