synthetic aperture radar ( sar ) is a prominent source of information for many remote sensing applications .the data these devices provides carries information which is mostly absent in conventional sensors which operate in the optical spectrum or in its vicinity .sar sensors are active , in the sense that they carry their own illumination source and , therefore , are able to operate any time . since they operate in the microwaves region of the spectrum , they are mostly sensitive to the roughness and to the dielectric properties of the target .the price to pay for these advantages is that these images are corrupted by a signal - dependent noise , called _ speckle _ , which in the mostly used formats of sar imagery is non - gaussian and enters the signal in a non - additive manner .this noise makes both automatic and visual analysis a hard task , and defies the use of classical features .this paper presents a new feature for sar image analysis called generalized statistical complexity .it was originally proposed and assessed for one - dimensional signals , for which it was shown to be able to detect transition points between different regimes .this feature is the product of an entropy and a stochastic distance between the model which best describes the data and an equilibrium distribution . the statistical nature of speckled data allows to propose a gamma law as the equilibrium distribution , while the model describes the observed data with accuracy . both the entropy and the stochastic distance are derived within the framework of the so - called entropies and divergences , respectively , which stem from studies in information theory .we show that the statistical complexity of sar data , using the shannon entropy and the hellinger distance , stems as a powerful new feature for the analysis of this kind of data .the multiplicative model is one of the most successful frameworks for describing data corrupted by speckle noise .it can be traced back to the work by goodman , where stems from the image formation being , therefore , phenomenological .the multiplicative model for the intensity format states that the observation in every pixel is the outcome of a random variable which is the product of two independent random variables : , the ground truth or backscatter , related to the intrinsic dielectric properties of the target , and , the speckle noise , obeying a unitary mean gamma law .the distribution of the return , , is completely specified by the distributions and obey .the univariate multiplicative model began as a single distribution for the amplitude format , namely the rayleigh law , was extended by yueh et al . to accommodate the law and later improved further by frery et al . to the distribution , that generalizes all the previous probability distributions .gao provides a complete and updated account of the distributions employed in the description of sar data . for the intensity format which we deal with in this article, the multiplicative model reduces to , essentially , two important distributions , namely the gamma and the laws .the gamma distribution is characterized by the density function being the mean , and , denoted .this is an adequate model for homogeneous regions as , for instance , pastures over flat relief .the law has density function where , , denoted .this distribution was proposed as a model for extremely heterogeneous areas , and mejail et al . 
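To make the multiplicative model concrete, the following is a minimal sketch in R (the language used later in the paper) of how speckled intensity data can be simulated under the two distributions above. The parameterizations (unit-mean Gamma speckle with L looks; reciprocal-Gamma backscatter for the $\mathcal{G}^0$ case) follow the standard conventions and are stated here as assumptions, since the original density formulas were garbled in extraction.

```r
# Minimal sketch: simulating the multiplicative model for intensity SAR data.
# Assumptions (standard conventions): speckle Y ~ Gamma(shape = L, rate = L) has
# unit mean; the backscatter X is a constant mu over homogeneous areas (return is
# Gamma distributed), and X = gamma / W with W ~ Gamma(shape = -alpha, rate = 1),
# alpha < 0, over extremely heterogeneous areas (return follows the G0 law).
set.seed(1)

r_speckle <- function(n, looks) rgamma(n, shape = looks, rate = looks)

r_gamma_return <- function(n, mu, looks) mu * r_speckle(n, looks)

r_g0_return <- function(n, alpha, gamma, looks) {
  backscatter <- gamma / rgamma(n, shape = -alpha, rate = 1)  # reciprocal-Gamma backscatter
  backscatter * r_speckle(n, looks)
}

z_pasture <- r_gamma_return(1e4, mu = 1, looks = 3)                  # textureless target
z_urban   <- r_g0_return(1e4, alpha = -1.5, gamma = 0.5, looks = 3)  # rough target
```

The heavier tail of `z_urban` relative to `z_pasture` is the signature of roughness that the complexity measure introduced below is designed to capture.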
demonstrated it can be considered a universal model for speckled data .data obeying the law are referred to as `` fully developed speckle '' , meaning that there is no texture in the wavelength of the illumination ( which is in the order of centimeters ) .the absolute value of the parameter in equation ( [ densgamacons ] ) is , on the contrary , a measure of the number of distinct objects of size of the order of the wavelength with which the scene is illuminated . as ,the distribution becomes the law .the information content of a system is typically evaluated via a probability distribution function ( pdf ) describing the apportionment of some measurable or observable quantity ( i.e. a time series ) .an information measure can primarily be viewed as a quantity that characterizes this given probability distribution .the shannon entropy is often used as a the natural " one . given a discrete probability distribution , with the degrees of freedom , shannon s logarithmic information measure reads = -\sum_{i=1}^{m } p_i \ln ( p_i) ] we are in position to predict with complete certainty which of the possible outcomes , whose probabilities are given by , will actually take place .our knowledge of the underlying process described by the probability distribution is then maximal .in contrast , our knowledge is minimal for a uniform distribution and the uncertainty is maximal , = { \mathrm s}_{\max}$ ] .it is known that an entropic measure does not quantify the degree of structure or patterns present in a process .moreover , it was recently shown that measures of statistical or structural complexity are necessary for a better understanding of chaotic time series because they are able to capture their organizational properties .this kind of information is not revealed by measures of randomness . the extremesperfect order ( like a periodic sequence ) and maximal randomness ( fair coin toss ) possess no complex structure and exhibit zero statistical complexity .there is a wide range of possible degrees of physical structure these extremes that should be quantified by _ statistical complexity measures ._ rosso and coworkers introduced an effective statistical complexity measure ( scm ) that is able to detect essential details of the dynamics and differentiate different degrees of periodicity and chaos .this specific scm , abbreviated as mpr , provides important additional information regarding the peculiarities of the underlying probability distribution , not already detected by the entropy .the statistical complexity measure is defined , following the seminal , intuitive notion advanced by lpez - ruiz et al . , via the product = h[p ] \cdot d[p , p_{ref } ] .\label{complexity}\ ] ] the idea behind the statistical complexity is measuring at the same time the order / disorder of the system ( ) and how far the system is from its equilibrium state ( the so - called disequilibrium ) .the first component can be obtained by means of an entropy , while the second requires computing a stochastic distance between the actual ( observed ) model and a reference one .salicr et al . provide a very convenient conceptual framework for both of these measures .let be a probability density function with parameter vector which characterizes the distribution of the ( possibly multivariate ) random variable .the ( )-entropy relative to is defined by where either is concave and is increasing , or is convex and is decreasing . the differential element sweeps the whole support . 
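As a small illustration of the two ingredients just described, the R sketch below computes the normalized Shannon entropy of a discrete distribution and assembles the product form of the statistical complexity; the function names are ours, not the paper's.

```r
# Normalized Shannon entropy of a discrete probability vector p (S / S_max, with
# S_max = ln M), and the product form C = H[p] * D[p, p_ref] of the statistical
# complexity. Function names are illustrative only.
shannon_entropy <- function(p) {
  M <- length(p)
  p <- p[p > 0]                        # convention: 0 * log(0) = 0
  -sum(p * log(p)) / log(M)
}

statistical_complexity <- function(p, disequilibrium) {
  shannon_entropy(p) * disequilibrium
}

shannon_entropy(rep(1/8, 8))    # uniform distribution: maximal uncertainty, H = 1
shannon_entropy(c(1, 0, 0, 0))  # complete certainty: H = 0
```

Both extremes yield zero complexity, as stated above: either the entropy vanishes (perfect order) or the disequilibrium does (maximal randomness).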
in this workwe only employ the shannon entropy , for which and .consider now the ( possibly multivariate ) random variables and with densities and , respectively , where and are parameter vectors .the densities are assumed to have the same support .the -divergence between and is defined by where is a strictly increasing function with and is a convex function such that and . the differential element sweeps the support . in the followingwe will only employ the hellinger divergence which is also a distance , for which , and .the influence of the choice of a distance when computing statistical complexities is studied in reference .following rosso et al . , we work with the hellinger distance and we define the statistical complexity of coordinate in an intensity sar image as the product where is the shannon entropy observed in under the model , and is the observed hellinger distance between the universal model ( the distribution ) and the reference model of fully developed speckle ( the law ). as previously noted , if an homogeneous area is being analyzed , the and model can be arbitrarily close , and the distance between them tends to zero .the entropy of the model is closely related to the roughness of the target , as will be seen later , that is measured by .computing these observed quantities requires the estimation of the parameters which characterize the distribution ( , the sample mean ) and the law ( and ) , provided the number of looks is known .the former is immediate , while estimating the later by maximum likelihood requires solving a nonlinear optimization problem .the estimation is done using data in a vicinity of .once obtained and , the terms in equation are computed by numerical integration .references discuss venues for estimating the parameters of the law safely .figure [ fig : results ] presents the main results obtained with the proposed measures .figure [ fig : originalimage ] shows the original image which was obtained by the e - sar sensor , an airborne experimental polarimetric sar , over munich , germany .only the intensity hh channel is employed in this study .the image was acquired with three nominal looks .the scene consists mostly of different types of crops ( the dark areas ) , forest and urban areas ( the bright targets ) .[ densga0 ] figure [ fig : entropies ] shows the shannon entropy as shades of gray whose brightness is proportional to the observed value .it is remarkable that this measure is closely related to the roughness of the target , i.e. , the brighter the pixel the more heterogeneous the area .the entropy is also able to discriminate between different types of homogeneous targets , as shown in the various types of dark shades .figure [ fig : distances ] shows the hellinger distance between the universal model and the model for fully developed speckle .as expected , the darkest values are related to areas of low level of roughness , while the brightest spots are the linear strips in the uppermost right corner , since they are man - made structures .the statistical complexity is shown in figure [ fig : complexities ] .it summarizes the evidence provided by the entropy ( figure [ fig : entropies ] ) and by the stochastic distance between models ( figure [ fig : distances ] ) .as it can be seen in the image , the values exhibit more variation than their constituents , allowing a fine discrimination of targets .as such , it stems as a new and relevant feature for sar image analysis .+ the data were read , processed , analyzed and visualized using ` r ` v. 
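A sketch of how the per-pixel quantities can be computed numerically is given below: the Hellinger distance between a fitted $\mathcal{G}^0$ density and the Gamma reference is obtained by numerical integration, as described above. The density expressions use the standard intensity parameterizations from the SAR literature and should be read as assumptions, since the original formulas were lost in extraction; one common convention for the Hellinger distance is used.

```r
# Gamma (fully developed speckle) and G0 intensity densities, and the Hellinger
# distance between them via numerical integration. Parameterizations are the
# standard ones (assumed here): mu is the mean, L the number of looks,
# alpha < -1 the roughness and gamma > 0 the scale of the G0 law.
dgamma_sar <- function(z, mu, L) dgamma(z, shape = L, rate = L / mu)

dg0 <- function(z, alpha, gamma, L) {
  exp(L * log(L) + lgamma(L - alpha) - alpha * log(gamma) + (L - 1) * log(z) -
        lgamma(L) - lgamma(-alpha) - (L - alpha) * log(gamma + L * z))
}

hellinger <- function(f, g) {
  bc <- integrate(function(z) sqrt(f(z) * g(z)), lower = 0, upper = Inf)$value
  1 - bc   # one common convention; its square root is also used as a distance
}

L <- 3; mu <- 1
alpha <- -1.5; gamma <- mu * (-alpha - 1)   # choose gamma so both models share the mean mu
hellinger(function(z) dg0(z, alpha, gamma, L),
          function(z) dgamma_sar(z, mu, L))
```

For a homogeneous area the fitted roughness tends to minus infinity, the two densities become arbitrarily close and the distance (hence the complexity) tends to zero, consistent with the discussion above.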
2.14.0 on a MacBook Pro running Mac OS X v. 10.7.3. This platform is freely available at http://www.r-project.org for a diversity of computational platforms, and its excellent numerical properties have been attested in (Almiron et al., 2009). The statistical complexity of SAR images reveals information which is not available either through the mean (which is the parameter of the Gamma model for homogeneous areas) or through the parameters of the $\mathcal{G}^0$ model for extremely heterogeneous areas. As such, it appears as a promising feature for SAR image analysis. Ongoing studies include the derivation of analytical expressions for the entropy and the Hellinger distance, other stochastic distances, the sample properties of the statistical complexity, and its generalization to other models, including polarimetric SAR.

References:
Allende, H., Frery, A.C., Galbiati, J., Pizarro, L.: M-estimators with asymmetric influence functions: the GA0 distribution case. Journal of Statistical Computation and Simulation 76(11), 941-956 (2006)
Almiron, M., Almeida, E.S., Miranda, M.: The reliability of statistical functions in four software packages freely used in numerical computation. Brazilian Journal of Probability and Statistics, Special Issue on Statistical Image and Signal Processing, 107-119 (2009), http://www.imstat.org/bjps/
Feldman, D.P., McTague, C.S., Crutchfield, J.P.: The organization of intrinsic computation: complexity-entropy diagrams and the diversity of natural information processing. Chaos 18, 043106 (2008), http://dx.doi.org/10.1063/1.2991106
Kowalski, A.M., Martín, M.T., Plastino, A., Rosso, O.A., Casas, M.: Distances in probability space and the statistical complexity setup. Entropy 13(6), 1055-1075 (2011), http://www.mdpi.com/1099-4300/13/6/1055/
Lamberti, P.W., Martín, M.T., Plastino, A., Rosso, O.A.: Intensive entropic non-triviality measure. Physica A: Statistical Mechanics and its Applications 334(1-2), 119-131 (2004), http://www.sciencedirect.com/science/article/pii/S0378437103010963
Mejail, M.E., Frery, A.C., Jacobo-Berlles, J., Bustos, O.H.: Approximation of distributions for SAR images: proposal, evaluation and practical consequences. Latin American Applied Research 31, 83-92 (2001)
Mejail, M.E., Jacobo-Berlles, J., Frery, A.C., Bustos, O.H.: Classification of SAR images using a general and tractable multiplicative model. International Journal of Remote Sensing 24(18), 3565-3582 (2003)
Rosso, O.A., Larrondo, H.A., Martín, M.T., Plastino, A., Fuentes, M.A.: Distinguishing noise from chaos. Physical Review Letters 99, 154102 (2007), http://link.aps.org/doi/10.1103/PhysRevLett.99.154102
A new generalized statistical complexity measure (SCM) was proposed by Rosso et al. in 2010. It is a functional that captures the notions of order/disorder and of distance to an equilibrium distribution. The former is computed by a measure of entropy, while the latter depends on the definition of a stochastic divergence. When the scene is illuminated by coherent radiation, image data are corrupted by speckle noise, as is the case of ultrasound-B, sonar, laser and synthetic aperture radar (SAR) sensors. In the amplitude and intensity formats, this noise is multiplicative and non-Gaussian, thus requiring specialized techniques for image processing and understanding. One of the most successful families of models for describing these images is the multiplicative model, which leads, among other probability distributions, to the $\mathcal{G}^0$ law. This distribution has been validated in the literature as an expressive and tractable model, deserving the "universal" denomination for its ability to describe most types of targets. In order to compute the statistical complexity of a site in an image corrupted by speckle noise, we assume that the equilibrium distribution is that of fully developed speckle, namely the Gamma law in intensity format, which appears in areas with little or no texture. We use the Shannon entropy along with the Hellinger distance to measure the statistical complexity of intensity SAR images, and we show that it is an expressive feature capable of identifying many types of targets.
nowadays , people generate vast amounts of data through the devices they interact with during their daily activities , leaving a rich variety of digital traces .indeed , our mobile phones have been transformed into powerful devices with increased computational and sensing power , capable of capturing any communication activity , including both mediated and face - to - face interactions .user location can be easily monitored and activities ( e.g. , running , walking , standing , traveling on public transit , etc . ) can be inferred from raw accelerometer data captured by our smartphones . even more complex information such as our emotional state or our stress level can be inferred either by processing voice signals captured by means of smartphone s microphones or by combining information , extracted from several sensors , which correlates with our mood .moreover , we keep track of our daily schedule by using digital calendars and we use social media to share our experiences , opinions and emotions with our friends . leveraging this rich variety of human - generated information could provide new insights on a variety of open research questions and issues in several scientific domains such as sociology , psychology , behavioral finance and medicine .for example , several works have demonstrated that online social media could act as crowd sensing platforms ; the aggregated opinions posted in online social media have been used to predict movies revenues , elections results or even stock market prices .social influence effects in social networks have been also investigated in several projects either using observational data or by conducting randomized trials .other works also use mobility traces in order to study social patterns or to model the spreading of contagious diseases .moreover , the use of smartphones is increasingly used to monitor and better understand the causes of health problems such as addictions , obesity , stress and depression .smartphones enable continuous and unobtrusive monitoring of human behavior and , therefore , could allow scientists to conduct large - scale studies using real - life data rather than lab constrained experiments . in this direction , in the authors attempt to explain sleeping disorders reported by individuals , by investigating the correlations between sociability , mood and sleeping quality , based on data captured by mobile phones sensors and surveys .also , in the authors study the links between unhealthy habits , such as poor - quality eating and lack of exercise , and the eating and exercise habits of the user s social network .however , both studies are based on correlation analysis and , consequently , they are not sufficient for deriving valid conclusions about the causal links between the examined variables . for example , an observed correlation between the eating and exercising habits of a social group does not necessarily imply that eating and exercise habits of individuals are influenced by their social group and , therefore , could be modified by changing someone s social group .instead , the observed correlation could be due to the fact that people tend to have social relationships with people with similar habits .the efficient exploitation of human generated data in order to uncover causal links among factors of interest remains an open research issue .some works have proposed the use of randomized trials . 
according to this technique , the causal effects of an event or _ treatment _are examined by exposing a randomly selected subset of participants ( _ treatment group _ ) to this event and comparing the result with the corresponding outcome on a control group ( i.e. , a subset of participants who have not been exposed to the event ) . by randomly assigning participants to treatment and control groupsit is assured that , on average , there will be no systematic difference on the baseline characteristics of the participants between the two groups .baseline characteristics are considered to be any characteristics of the subjects that could be related with the study ( e.g. in a clinical study the age and the previous health status of the subjects could be considered as baseline characteristics ) .while randomized trials represent a reliable way to detect causal relationships , they require the direct intervention of scientists in participants life , which is sometimes unethical or just not feasible .moreover , such experimental studies can not exploit the vast amount of observational data that are produced daily .detecting causal relationships in observational data is challenging since subjects can not be randomly exposed to an event .thus , subjects that are exposed to a treatment may systematically differ from subjects that are not . in order to eliminate any bias due to differences on the baseline characteristics of exposed and unexposed subjects , scientists need to gather and process information about several factors that could influence the result of the study .there are two main methodologies that can be applied to control such factors : _ structural equation modeling _ and _ quasi - experimental designs _ .according to the former , the causal effect is estimated using multivariate regression . in detail ,the variable representing the causal effect of an event or treatment is regressed using as predictors the variable representing the treatment as well as all the baseline characteristics of the subjects of the study that could influence the result .structural equation modeling is based on the assumption that the regression model has been correctly specified .false assumptions about the linearity or non - linearity of the model or failure to correctly specify the regression coefficients may result in misleading conclusions . onthe other hand , methods based on quasi - experimental designs do not require the specification of a model . 
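As a hedged sketch of the regression-based (structural equation) approach just described, the R fragment below regresses the outcome on the treatment together with the baseline covariates; all variable and column names are hypothetical, and, as noted above, the estimate is only as good as the specification of the model.

```r
# Regression adjustment: the outcome is regressed on the (binary) treatment plus
# the baseline characteristics that could influence the result. Data frame and
# column names are hypothetical. A misspecified model yields misleading estimates.
fit <- lm(stress ~ exercised + prev_stress + deadlines + time_home + time_univ +
            extroversion + neuroticism, data = units)
coef(summary(fit))["exercised", ]   # estimated treatment coefficient and its standard error
```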
instead , they attempt to emulate randomized trials by exploiting inherit characteristics of the observational data .this can be achieved by comparing groups of _ treated _ and _ control _ subjects with similar baseline characteristics ( _ matching design _ ) .the purpose of this work is to propose a generic causal inference framework for the analysis of human behavior using digital traces .more specifically , we demonstrate the potential of automatically processing human generated observational digital data in order to conduct causal inference studies based on quasi - experimental techniques .we support our claim by presenting an analysis of the causal effects of daily activities , such as exercising , socializing or working , on stress based on data gathered by smartphones from 48 students that were involved in the studentslife project at dartmouth college for a period of 10 weeks .the main goal of the studentslife project is the study of the mental health , academic performance and behavioral trends of this group of students using mobile phones sensor data . to the best of our knowledge ,this is the first work presenting an observational causality study using digital data gathered by smartphones .information about participants daily social interactions as well as their exercise and work / study schedule is not directly measured ; instead we use raw gps and accelerometer traces in order to infer high - level information which is considered as implicit indicator of the variables of interest .no active participation of the users is required , i.e. , answering to pop - up questionnaires .we automatically assign semantics to locations in order to group them in four categories : home , work / university , socialization venues and gym / sports center . by grouping locations into these four categories and continuously monitoring the spatio - temporal traces of userswe can derive high - level information as follows : * * work / university .* by analyzing the daily time that users spend at their workplace we can infer their working schedule .prolonged sojourn time at work / university could be an indicator of increased workload . *the time that participants spend at home could serve as a rough indicator of their social interactions .prolonged sojourn time at home could imply limited social interactions or social interactions with a restricted number of people .in general , spending time outside home usually involves some social interaction .an estimation of the total daily time that participants spend at any place apart from their home and working environment could serve as a rough indicator of their non - work - related social interactions . ** socialization venues . * by monitoring users visits at socialization venues such as pubs , bars , restaurants etc , we can infer the time that they spend relaxing and socializing outside home during a day . ** gym / sports - center .* indoor workout can be captured by tracking participants visits to gyms or sports centers .outdoor activity can be measured using accelerometer data .our causality analysis is based on rubin s counterfactual framework . according to this framework ,a causal problem is formulated as a counterfactual statement which examines what would have been the outcome if an object has been exposed to an event .since it is impossible to observe for the same object both the result of exposure and non - exposure to an event , causal inference is based on comparing the outcomes on _ equivalent _ treatment and control groups i.e. 
, treatment and control units with similar baseline characteristics . in this subsection, we discuss a methodology for causal inference in observational data .the first step of the analysis is the _ description of the variables _ of the study .a causality study involves the following variables : 1 ._ cause _ or _ treatment _ variable : an independent variable which influences the values of another variable .the treatment variable is usually binary , denoting whether an object of the study has been exposed to a treatment or not .treatment could be also a discrete variable in case that different levels of treatment are considered .effect _ or _ outcome _ variable : a dependent variable which can be manipulated by changing the variable that represents the cause .3 . a set of variables , which describes the baseline characteristics of the objects of the study . in the second step of the analysiswe _ define the units _ of the study .each unit corresponds to a set of attributes , derived by the variables of the study , which describe an object ( e.g. , a person or a thing ) on a specific time period .we can use multiple units describing a single object in different time intervals .thus , a unit that describes an object at time corresponds to a set of values . giventhat , in a causation study , the treatment should precede temporally the effect , i.e. , the value should correspond to the treatment that has been applied to object before time . in the remainder of the paper, the simplified notation will be used to describe a unit . in order to claim that a value of a variable has been caused by a value of a variable there should be an association between the occurrence of these two values and there should be no other plausible explanation of this association . the first part of this requirement can be examined by performing a simple statistical analysis . however , excluding any other explanation of the observed association is a hard problem since both the treatment and the effect variable may be driven by a third variable .variables that correlate with both the outcome and the treatment are called _ confounding variables _ or _confounders_. in figure [ fig : confounding ] we provide a graphical representation of the dependencies between the treatment , outcome and confounding variables .the _ identification of the confounders _ requires a correlation analysis between each variable and the variables and ., outcome and the set of confounding variables .,scaledwidth=35.0% ] an unbiased causality study requires that the assignment of units to treatments is independent of the outcome conditional to the confounding variables . while in experimental studiesthis requirement is satisfied by randomly assigning units to treatments , in observational studies we could eliminate confounding bias by comparing units with similar values on their confounding variables but different treatment value ( _ matching design _ ) .let us consider a binary treatment , a group of _ treated _ units and a group of _ control _ units such as and .let us also consider a set of confounding variables .ideally , each unit will be matched with a unit if , .however , perfect matching is usually not feasible .thus , treated units need to be matched with the _ most similar _ control units .several methods have been proposed to create balanced treated and control pairs . 
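The fragment below is an illustrative sketch of the simplest such method, greedy nearest-neighbour matching on standardized confounders; it is not the procedure used in this study (genetic matching, described later), and all names are hypothetical.

```r
# Greedy 1-to-1 nearest-neighbour matching (with replacement) of treated units to
# control units on standardized confounders, using Euclidean distance.
# Illustrative only; the study itself uses genetic matching via MatchIt.
match_nearest <- function(units, treat_col, confounders) {
  X <- scale(as.matrix(units[, confounders]))   # standardize the confounders
  treated  <- which(units[[treat_col]] == 1)
  controls <- which(units[[treat_col]] == 0)
  pairs <- data.frame(treated = treated, control = NA_integer_)
  for (i in seq_along(treated)) {
    d2 <- colSums((t(X[controls, , drop = FALSE]) - X[treated[i], ])^2)
    pairs$control[i] <- controls[which.min(d2)]
  }
  pairs
}
```

Once such pairs are formed, the balance checks and the treatment-effect estimate described next can be computed on them.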
after applying a matching method scientists need to check whether the treated and control groups are sufficiently balanced by estimating the standardized mean difference between the groups or by applying graphical methods such as quantile - quantile plots , cumulative distribution functions plots , etc .if sufficient balance has not been achieved , the applied matching method needs to be revised . finally ,if any confounding bias has been sufficiently eliminated , the treatment effect can be estimated by comparing the effect variable of the matched treated and control units .let us define as the set of paired treated and control units and the number of pairs .then , the average treatment effect ( ate ) can be estimated as follows : in figure [ fig : flowchart ] we provide a graphical representation of the causal inference methodology .the studentslife dataset contains a rich variety of information that was captured either through smartphone sensors or through pop - up questionnaires . in this studywe use only gps location traces , accelerometer data , a calendar with the deadlines for the modules that students attend during the term and students responses to questionnaires about their stress level .students answer to these questionnaires one or more times per day .we use the location traces of the users to create location clusters .gps traces are provided either through gps or through wifi or cellular networks . for each location cluster , we assign one of the following labels : _ home _ , _ work / university _ , _ gym / sports - center _ , _ socialization venue _ and _ other_. labels are assigned automatically without the need for user intervention ( a detailed description of the clustering and location labeling process is presented at the additional file 1 ) we use information extracted from both accelerometer data and location traces to infer whether participants had any exercise ( either at the gym or outdoors ) . the studentslife dataset does not contain raw accelerometer data .instead it provides an activity classification by continuously sampling and processing accelerometer data .the activities are classified to stationary , walking , running and unknown .we also use the calendar with students deadlines , which is provided by the studentslife dataset , as an additional indicator of students workload .we define as a set of all days that the student has a deadline .we define a variable that represents how many deadlines are close to the day for a user as follows : thus , will be equal to zero if there are no deadlines within the next days , where a constant threshold ; otherwise , it will be inversely proportional to the number of days remaining until the deadline . in our experiments we set the threshold equal to 3. we found that with this value the correlation between the stress level of the participants and the variable is maximized .finally , the studentslife dataset includes responses of the participants to the big five personality test .the big five personality traits describe human personality using five dimensions : _ openness _ , _ conscientiousness _ , _ extroversion _ , _ agreeableness _ , and _ neuroticism_. the personality traits of participants can be used to describe some baseline characteristics of the units and , for this reason , we include them in the study .we apply the causal inference framework that was previously described in order to assess the causal impact of factors like exercising , socializing , working or spending time at home on stress level . 
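Before moving to the variable definitions, the following is one plausible reading of the deadline-proximity indicator introduced above; the exact expression was lost in extraction, so this reconstruction (zero when no deadline falls within the next k days, otherwise a sum of terms inversely proportional to the days remaining, with k = 3 as in the experiments) is an assumption.

```r
# One plausible reconstruction of the deadline-proximity variable (the original
# equation was garbled): zero if no deadline falls within the next k days,
# otherwise a sum of terms inversely proportional to the days remaining until
# each upcoming deadline. k = 3 is the threshold used in the experiments.
deadline_pressure <- function(day, deadlines, k = 3) {
  ahead <- as.numeric(deadlines - day)          # days until each registered deadline
  ahead <- ahead[ahead >= 0 & ahead <= k]
  if (length(ahead) == 0) return(0)
  sum(1 / (ahead + 1))                          # +1 avoids division by zero on the deadline day
}

deadline_pressure(as.Date("2013-04-10"),
                  as.Date(c("2013-04-11", "2013-04-12", "2013-05-01")))  # = 1/2 + 1/3
```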
initially , we define the variables that will be included in the study as follows : 1 . : denotes the total time in seconds that the user spent at home during day until time ; 2 . : denotes the total time in seconds that the user spent at university during day until time ; 3 . : denotes the total time in seconds time that the user spent in any place apart from his / her home or university during day until time ; 4 . : denotes the total time in seconds that the user spent exercising during day before time ( it is estimated using both location traces and accelerometer data ) ; 5 . : denotes the total time in seconds that the user spent at any socialization or entertainment venue during day before time ; 6 . : denotes the stress level of user that was reported on day and time .stress level is reported one or more times per day .thus , in contrast with the above mentioned variables , is not continuously measured ; 7 . : denotes the last stress level that was reported by user the day .this variable remains constant within a day ; 8 . : represents the upcoming deadlines as described in equation [ eq : deadlines ] ; 9 . , , , , : these five variables denote the extroversion , neuroticism , agreeableness , conscientiousness and openness of user based on his big five personality traits score respectively . in this study , we examine the effects of five treatments , denoted by the variables , , , and on the stress level of participants , which is described by the variable .a unit of the study corresponds to a set of attributes derived by the variables of the experiment .all the variables are sampled every 4 hours , thus there are maximum six samples per day for each participant .let a set of sampling times and the element of . then a unit corresponds to the set of variables ( , , , , , , , ) .since the variable is not continuously measured , it is not feasible to sample it for time .instead , we define as the average stress level of unit at day between time and .thus , is estimated as follows : if there are no stress level reports during this time interval , then the unit that corresponds to the set of variables will be discarded . in order to conduct a reliable causation study based on observational data we need to define the confounding variables . while there is a large number of factors that could influence the stress level of participants, the study could be biased only by factors that have a direct influence on both the stress level and the variable that is considered as treatment in the study .thus , in our case we need to specify factors that could influence both the daily activities of participants and their stress level .for example , the workload of students can influence their activities ( e.g. , in periods with increased workload some students may choose to change their workout schedule , etc . 
) and their stress level .since the workload can not be directly measured using only sensor data from smartphones , we use as confounding variables other variables that provide implicit indicators of workload such as the time that students spend at home and university and their deadlines .moreover , participants choice to do an activity may exclude another activity from their schedule and it may also influence their stress level .for example , someone may choose to spend some time in a pub instead of following his / her normal workout schedule .the previous day stress level may also influence both next day s activities and stress level .finally , several studies have demonstrated that stress level fluctuations are affected by personality traits . in general ,more positive and extrovert people tend to be able to handle stress better than people with high neuroticism score .moreover , personality characteristics may correlate with the daily schedule that people follow .for example more extrovert people may spend less time at home and more time in social activities . in order to define the covariates of the study we conduct a correlation analysis on the variables of interest .since the relationship among the variables may not be linear , we apply the kendall rank correlation .the p - values of the kendall correlation are presented in table [ table : kendall_rank_correlation ] ..p - values of kendall correlation under the null - hypothesis that the examined variables are independent . [ cols="^,^,^,^,^,^,^",options="header " , ]in this work , we presented a framework for detecting causal links on human behavior using mobile phones sensor data .we have studied the causal effects of several factors , such as working , exercising and socializing , on stress level of 48 students using data captured by smartphones sensors .our results suggest that exercising and spending time outside home or university have a strongly positive causal effect on participants stress level .we have also demonstrated that the time participants stay at university has a positive causal impact on their stress level only when it is considerably lower than the average daily university sojourn time . 
however , this impact is not remarkable .moreover , we have observed that some of the examined factors have different impact on the stress level of students with high extroversion score and on students with high neuroticism score .more specifically , more extrovert students benefit more from spending time outside home or university while more neurotic students benefit more from exercising .our study mainly relies on raw sensor data that can be easily captured with smartphones .we have demonstrated that information extracted by simply monitoring users location and activity ( through accelerometer ) can serve as an implicit indicator of several factors of interest such as their working and exercising schedule as well as their daily social interactions .inferring this high - level information using raw sensor data instead of pop - up questionnaires has three main advantages : 1 ) it offers a more accurate representation of participants activities over time since data are collected continuously , 2 ) data are collected in an obtrusive way without requiring participants to provide any feedback ; this minimizes the risk that some users will quit the study because they are dissatisfied by the amount of feedback that they need to provide , 3 ) data gathered through pop - up questionnaires may not be objective since participants may provide either intentionally or unintentionally false responses . on the other hand , inferences based on sensor data could also be inaccurate either due to noisy sensor measurements or due to the fact that the variable of interest is inferred by the sensed data rather than directly measured .for example , in our case we assume that a visit to a sports center implies that the user had some exercise .however , the user may have visited this place to attend a sports event or just to meet friends . assessing the degree of uncertainty that information inference from sensor measurements involves and incorporating this uncertainty into the causation study represents an interesting research area for further investigation .finally , this study involves a limited number of participants who do not constitute a representative sample of the population ; therefore extrapolating general conclusions about the causal impact of the examined factors on stress level is not feasible .however , the purpose of this article is to demonstrate the potential of utilizing smartphones in order to conduct large - scale studies related to human behavior , rather than present a thorough investigation on factors that influence stress .we create location clusters using raw gps traces . in order to increase the accuracy on location estimation we consider only gps samples with accuracy less than 50 meters .moreover , we ignore any samples that were collected while the user was moving . for each new gps point , we create a cluster only if the distance of this point with the centroid of any of the existing clusters is more than 50 meters .otherwise , we update the corresponding cluster with the new gps sample .every time a new gps sample is added to a cluster , the centroid of the cluster is also updated .the pseudo code of the location clustering algorithm is presented at algorithm 1 . each location cluster is labeled as _ home _ , _ work / university _ , _ gym / sports - center _ , _ socialization venue _ or _other_. 
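Algorithm 1 itself is not reproduced in this text, so the R sketch below is a plausible reconstruction of the incremental 50-metre clustering just described, not the authors' code; the cluster labelling discussed next is a separate step.

```r
# Plausible reconstruction of the incremental location clustering: a GPS fix
# opens a new cluster only if it is more than 50 m from every existing centroid;
# otherwise the nearest centroid is updated as a running mean. Distances use a
# simple equirectangular approximation (the original may differ).
cluster_locations <- function(lat, lon, radius_m = 50) {
  centroids <- data.frame(lat = numeric(0), lon = numeric(0), n = integer(0))
  assignment <- integer(length(lat))
  dist_m <- function(clat, clon, plat, plon) {
    dx <- (plon - clon) * cos(pi * (clat + plat) / 360) * 111320
    dy <- (plat - clat) * 111320
    sqrt(dx^2 + dy^2)
  }
  for (i in seq_along(lat)) {
    if (nrow(centroids) > 0) {
      d <- dist_m(centroids$lat, centroids$lon, lat[i], lon[i])
      j <- which.min(d)
    }
    if (nrow(centroids) == 0 || d[j] > radius_m) {
      centroids <- rbind(centroids, data.frame(lat = lat[i], lon = lon[i], n = 1L))
      assignment[i] <- nrow(centroids)
    } else {
      w <- centroids$n[j]
      centroids$lat[j] <- (centroids$lat[j] * w + lat[i]) / (w + 1)
      centroids$lon[j] <- (centroids$lon[j] * w + lon[i]) / (w + 1)
      centroids$n[j] <- w + 1L
      assignment[i] <- j
    }
  }
  list(centroids = centroids, assignment = assignment)
}
```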
the label _ socialization venue _is used to describe places like pubs , bars , restaurants and cafeterias .the label _ other _ is used to describe any place that does not belong to the above mentioned categories .we label as _ home _ the place that people spend most of the night and early morning hours . in order to find clustersthat correspond to gyms / sports - centers or socialization venues we use the google maps javascript api .google maps javascript api enable developers to search for specific type of places that are close to a gps point .the type of place is specified using specific keywords from a list of keywords provided by this api .we use the centroid of each unlabeled cluster to search for nearby places of interest .places that correspond to _ gym / sports centers _ are specified by the keyword _ gym _ and places that correspond to socialization venues are specified by the keywords _ bar _ , _ cafe _ , _ movie_theater _ , _night_club _ and _restaurant_. for each unlabeled cluster we conduct a search for nearby points of interests .if a point of interest with distance less than 50 meters from the cluster centroid is found , we label the cluster as _ gym / sport - center _ or _ socialization venue _ depending on the point of interest type .otherwise the cluster is labeled as _ other_. any place within the university campus that is not labeled as _ gym / sport - center _ or _ socialization venue _ is labeled as _work / university_.for matching the treatment and control units we use the matchit r package which includes an implementation of the genetic matching algorithm described above .several optimization criteria can be used with genetic matching . here , the balance metric that the genetic matching algorithm optimizes is the mean standardized difference of all the confounding variables .we use matching with replacement , i.e. , each control unit can be matched to more than one treatment units .matching with replacement can reduce the bias since control units which are very similar to treatment units can be exploited more .we use a matching ratio equal to 2 .this means that each treatment unit will be matched with up to 2 control units .
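For concreteness, a hedged sketch of this matching step and of the subsequent treatment-effect estimate is given below; the formula, data frame and column names are hypothetical, and the MatchIt arguments should be checked against the package documentation (genetic matching additionally requires the rgenoud package).

```r
# Genetic matching with replacement and a 2:1 matching ratio via MatchIt, followed
# by a weighted difference in mean outcomes on the matched sample. Names are
# hypothetical; consult the MatchIt documentation for the exact interface.
library(MatchIt)

m.out <- matchit(exercised ~ prev_stress + deadlines + time_home + time_univ +
                   extroversion + neuroticism + agreeableness + conscientiousness + openness,
                 data = units, method = "genetic", replace = TRUE, ratio = 2)

summary(m.out)               # standardized mean differences before and after matching
matched <- match.data(m.out) # matched units with weights

with(matched,
     weighted.mean(stress[exercised == 1], weights[exercised == 1]) -
     weighted.mean(stress[exercised == 0], weights[exercised == 0]))
```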
Smartphones have become an indispensable part of our daily life. Their improved sensing and computing capabilities bring new opportunities for human behavior monitoring and analysis. Most work so far has focused on detecting correlation rather than causation among features extracted from smartphone data. However, pure correlation analysis does not offer sufficient understanding of human behavior. Moreover, causation analysis could allow scientists to identify factors that have a causal effect on health and well-being issues, such as obesity, stress and depression, and to suggest actions to deal with them. Finally, detecting causal relationships in this kind of observational data is challenging since, in general, subjects cannot be randomly exposed to an event. In this article, we discuss the design, implementation and evaluation of a generic quasi-experimental framework for conducting causation studies on human behavior from smartphone data. We demonstrate the effectiveness of our approach by investigating the causal impact of several factors, such as exercise, social interactions and work, on stress level. Our results indicate that exercising and spending time outside the home and working environment have a positive effect on participants' stress level, while reduced working hours only slightly impact stress.

Keywords: smartphone data, causality, human behavior, stress modeling
in many applications of quantum computers , a quantum register , composed of a fixed number of qubits , is initially prepared in some simple standard state .this initial preparation step is followed by a sequence of quantum gate operations and measurements .there are applications of quantum computers , however , notably the task of simulating the dynamics of a physical system , that may require the initialization of a quantum register in a more general state , corresponding to the initial physical state of the simulated system .this leads naturally to the question of what quantum states can be efficiently prepared on a quantum register .the memory of a classical computer can be easily put into any state by writing an arbitrary bit string into it .the situation for quantum computers is very different .the hilbert space associated with a quantum register composed of as few as 100 quantum bits ( qubits ) is so large that it is impossible to give a classical description of a generic state vector , i.e. , to list the complex coefficients defining it . in this senseit can be said that arbitrary pure states can not be prepared .it is nevertheless possible to formulate the problem of arbitrary state preparation for a register of qubits in a meaningful way .this is achieved by starting from the assumption that the state is initially defined by a set of quantum _ oracles_. by assuming that the state is given in this form , we shift the focus from the problem of describing the state to the problem of the computational resources needed to actually prepare it .in other words , we address the computational complexity rather than the algorithmic complexity of the state .for the purpose of quantifying these computational resources , we simply count the number of oracle calls .we are thereby deliberately ignoring the internal structure of the oracles and the computational resources needed in each oracle call .the algorithm we describe here is applicable to any set of oracles , i.e. , to any state .we will show that it is _ efficient _ for a large class of states of particular interest for the simulation of physical systems .let be a positive integer .we will describe a quantum algorithm for preparing a -qubit quantum register in an approximation to the state and arbitrary phases .here and throughout the paper , denote computational basis states .more precisely , given any small positive numbers and , our algorithm prepares the quantum register in a state such that , with probability greater than , the fidelity obeys the bound latexmath:[\[\label{eq : fidelitybound } to define the algorithm and to assess its efficiency for large , we need to specify in which form the coefficients and are given .we assume that we are given classical algorithms to compute the functions and for any .these classical algorithms are used to construct a set of quantum _oracles_. we will quantify the resources needed by our state preparation algorithm in terms of ( i ) the number of oracle calls , ( ii ) the number of additional gate operations , and ( iii ) the number of auxiliary qubits needed in addition to the register qubits . to analyze the asymptotic , large , behavior of our algorithm , we consider a sequence of probability functions ] , where . for any , the algorithm prepares the quantum register in a state such that , with probability greater than , the fidelity obeys the bound ( [ eq : fidelitybound ] ) , where in the definition ( [ eq : psi ] ) of the functions and are replaced by and , respectively . 
under the assumption that there exists a real number , , such that we show that the resources needed by our state preparation algorithm are polynomial in the number of qubits , , and the inverse parameters , and .an obvious example of a sequence of functions that do not satisfy the bound ( [ eq : etabound ] ) and for which the resources required for state preparation scale exponentially with the number of qubits is given by for some integer . in this case, it follows from the optimality of grover s algorithm that the number of oracle calls needed is proportional to .sequences that do satisfy the bound ( [ eq : etabound ] ) arise naturally in the problem of encoding a bounded probability density function \to[0,f_{\rm max}]$ ] in a state of the form where is a normalization factor .grover and rudolph have given an efficient algorithm for this problem if the function is efficiently integrable .essentially the same algorithm was found independently by kaye and mosca , who also mention that phase factors can be introduced using the methods discussed in ref .recently , rudolph has found a simple nondeterministic state - preparation algorithm that is efficient for all sequences satisfying the bound ( [ eq : etabound ] ) . for general sequences of states satisfying the bound ( [ eq : etabound ] ) , a given value of the fidelity bound , and assuming polynomial resources for the oracles , our algorithm is exponentially more efficient than the algorithm proposed by ventura and martinez and later related proposals , for which the resources needed grow like .the use of grover s algorithm for state preparation has been suggested by zeng and kuang for a special class of coherent states in the context of ion - trap quantum computers .a general analysis of the state preparation problem in the context of adiabatic quantum computation was given by aharonov and ta - shma .this paper is organized as follows . in sec .[ sec : algorithm ] we give a full description of our algorithm .a detailed derivation is deferred to sec .[ sec : derivation ] .the algorithm depends on a number of parameters that can be freely chosen . in sec .[ sec : fidelity ] we consider a particular choice for these parameters and show that it guarantees the fidelity bound ( [ eq : fidelitybound ] ) .we use the same choice of parameters in sec .[ sec : resources ] to derive worst case bounds on the time and the memory resources required by the algorithm . in sec .[ sec : conclusions ] we conclude with a brief summary .our algorithm consists of two main stages . 
in the first stage , the algorithm prepares the register in an approximation to the state which differs from only in the phases .more precisely , let be the largest small parameter such that and is an integer .the first stage of the algorithm prepares the register in a state such that , with probability greater than , we have that latexmath:[\[\label{thefidelitybound } describe the details of the first stage below .the second stage of the algorithm adds the phases to the state resulting from the first stage .this can be done in a straightforward way as follows .we start by choosing a small parameter such that is a positive integer .we then define a list of unitary operations , , on our quantum register by the operators are conditional phase shifts that can be realized as quantum gate sequences using the classical algorithm for computing the function .if we apply the operators sequentially to the result of the first stage , we obtain where the function satisfies the inequality for all . it can be shown ( see section [ sec : stage2fidelity ] ) that together with eq .( [ thefidelitybound ] ) this implies the bound .notice the slight abuse of notation identifying the parameter in the inequality ( [ eq : fidelitybound ] ) with the sum in the inequality ( [ eq : properfidelitybound ] ) .we now proceed to a more detailed description of the first stage of the algorithm . from now on we assume that is an integer power of 2 .this can always be achieved by padding the function with zeros .given our choice of the parameter , eq .( [ eq : epsilonbound ] ) , we define a list of _ oracles _ , , by we extend this definition beyond the domain of the function by setting for . using the classical algorithm to compute , one can construct quantum circuits implementing the unitary oracles these circuits are efficient if the classical algorithm is efficient .the list of oracles defines a new function via where by convention .the situation is illustrated in fig .[ figure1 ] , where the values have been permuted for clarity .knowledge of this permutation is not required for our algorithm .the essence of the first stage of the algorithm consists in using a number of grover iterations based on the oracles to prepare the register in an approximation to the state reflects the fact that may not be normalized . to find the number of required grover iterations for each oracle , we need an estimate of the number of solutions , , for each oracle , defined by this estimate can be obtained from running the quantum counting algorithm for each oracle .we denote the estimates obtained in this way by .the accuracy of the estimate relative to can be characterized by two real parameters , and , in such a way that , as a result of quantum counting , with probability greater than we have for each oracle , , the resources needed to achieve the counting accuracy specified by and depend on the actual number of solutions .this dependence is important for optimizing the performance of our algorithm . in this paper , however , we present a simpler analysis assuming worst case conditions for each oracle . for this analysis we use a specific choice of , which is given by eq .( [ eq : worstcase ] ) .the analysis of the algorithm is simplified if we concentrate on a subset of oracles , where and the indices are determined by the construction below .we introduce a new parameter ( see eq .( [ eq : worstcase ] ) below ) such that . 
the index is defined to be the smallest integer such that and , for , the index is the smallest integer such that the number is the largest value of for which these inequalities can be satisfied .the effect of eq .( [ eq : ignorepeak ] ) is to neglect narrow peaks ( corresponding to small values of in fig .[ figure1 ] ) .equation ( [ eq : increasing ] ) makes sure that the numbers form an increasing sequence even if , due to counting errors , the estimates do not ( see fig .[ figure1 ] ) . for ,define , and where we define . for every oracle , the value of is an approximation to the number of solutions , ,satisfying the bound in what follows it will be convenient to introduce the notation the oracles define a new function via where by convention .the function is a decreasing step function , with step sizes which are multiples of .the widths of the steps are given by the numbers which are determined by the oracles ( see fig . [ figure1 ] ) .the algorithm can now be completely described as follows .choose a suitable ( small ) number , , of _ auxiliary qubits _( see eq .( [ eq : worstcase ] ) below ) , and define . for , find the quantities and for , define the grover operator where is the -qubit identity operator , , and where the domain of the oracles is extended to the range by setting if .prepare a register of qubits in the state , then apply the grover operators successively to create the state latexmath:[\[\label{eq : psit } now measure the auxiliary qubits in the computational basis .if all outcomes are this stage of the algorithm is successfully prepares the desired state eq .( [ eq : psiptilde ] ) . if one of the measurements of the auxiliary qubits returns 1 , this stage of the algorithm has failed , and one has to start again by preparing the register in the state as in eq .( [ eq : psi0 ] ) . assuming the choice of parameters in eq .( [ eq : worstcase ] ) , the probability , , that the algorithm fails in this way satisfies the bound ( see eq .( [ thefailureprobability ] ) ) . before we provide detailed proofs of the aboveclaims it is helpful to give a hint of how this stage of the algorithm achieves its goal .the algorithm aims at constructing the function which is close to the function defined in eq .( [ eq : pdoubleprime ] ) . the sequence of grover operators in eq .( [ eq : psit ] ) creates a step function that is close to ( see fig .[ figure1 ] ) .in particular each operator in eq .( [ eq : psit ] ) creates a step with the correct width and a height which is close to the target height . due to a remarkable property of grover s algorithm , once the features have been developed they are not distorted by which develops . in this way the algorithm proceeds building feature after feature until it constructs all of them . at the end , because of the inherent errors , the auxiliary qubits end up having small amplitudes for nonzero values .measuring them projects the auxiliary qubits onto the zero values with a probability that can be made arbitrarily close to 1 .this also slightly changes the features due to renormalization of the state after the measurement , which we take into account when we estimate the overall loss of fidelity .in section [ sec : algorithm ] we have already explained the second stage of our algorithm . herewe present a detailed explanation of the first stage .this section is organized as follows . in subsection [ subsecbiham ]we review some properties of the grover operator introducing our notation as we go along . 
in subsection [ subseconestep ]we introduce a convenient mathematical form for analyzing intermediate quantum states visited by the algorithm . and finally , in subsection [ subsec : t_and_h ] we derive the values ( [ eq : t_k ] ) of the times used in our algorithm .we will be using the following result .consider an oracle , which accepts values of ( out of the total of , i.e. , ) .we shall call such values of _ good _ , as opposed to _ bad _ values of that are rejected by the oracle . using different notation for the coefficients of good and bad states, we have that after grover iterations an arbitrary quantum state is transformed into let and be the averages of the initial amplitudes of the good and the bad states respectively : and similarly for the final amplitudes let us also define in other words , and define the _ features _ of the initial amplitude functions and relative to their averages and .biham _ et .al . _ have shown that the change of the amplitudes is essentially determined by the change of the averages : where the averages and are given as follows .define the averages are given by we shall also use the separations , and , of the averages directly from the definition we obtain where the phase can be found from , in our applications we will only need the case when the initial amplitude of the bad states is flat , i.e. . in this case , the amplitude of the bad states always remains flat this fact and the fact that the initial features , , of the amplitude of the good states are preserved , see ( [ bihamequations ] ) , is crucial for understanding the rest of this paper .the preparation stage of our algorithm summarized in eq .( [ eq : psit ] ) gives rise to a sequence of states defined by be the set of solutions to the oracle : for , these states can be written in the form where since is real and positive , the value of can be determined from the normalization condition .the action of the algorithm can be visualized as shown in fig .[ unfinishedstateprep ] , which shows the result of the first three iterations .the integers [ defined in eq . ([ eq : definenk ] ) ] are the number of good values of according to oracle .we see that each operation prepares a feature of height .it follows from the conclusions of section [ subsecbiham ] that once such a feature is developed , its height remains constant throughout the computation . in this subsectionwe show how the times are related to the corresponding features .the normalization condition reads substituting ( [ a_jk ] ) , this gives us a quadratic equation for : where we define and solving this equation , and using the fact that , we obtain this formula together with eq .( [ a_jk ] ) provide an explicit expression ( [ psikminusone ] ) for in terms of the numbers and . to build the feature, we apply the grover operator to the state . 
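The following small numerical sketch (in R, used purely as a calculator here) illustrates the property just cited: one Grover iteration changes the amplitudes only through their good/bad averages, so the "features" on the good states are carried along unchanged and a flat amplitude over the bad states remains flat. The sign and normalization conventions are ours.

```r
# One Grover iteration G = D * O on a real amplitude vector: the oracle O flips
# the sign of the "good" amplitudes, and the diffusion D performs inversion about
# the mean. The deviations of the good amplitudes from their own average are
# preserved exactly, and a flat bad-state amplitude stays flat.
grover_step <- function(amp, good) {
  amp[good] <- -amp[good]          # oracle: phase flip on the good states
  2 * mean(amp) - amp              # diffusion: inversion about the average amplitude
}

N <- 16
good <- 1:4
amp <- rep(1 / sqrt(N), N)
amp[good] <- amp[good] + c(0.05, -0.02, 0.01, -0.04)  # put "features" on the good states
amp <- amp / sqrt(sum(amp^2))

out <- grover_step(amp, good)
out[good] - mean(out[good])   # identical to the features below: preserved by the iteration
amp[good] - mean(amp[good])
sd(out[-good])                # numerically zero: the bad-state amplitude remains flat
```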
we will now derive an expression for the integer `` time '' in terms of the features and the widths .given as an initial state , let us define and to be the initial average amplitudes of the good and the bad states according to the oracle : the initial separation , , between the good and the bad averages is therefore observe that developing a new feature of height is equivalent to increasing the initial separation by .the final separation ( after steps ) between the good and bad averages is therefore using ( [ delta0 ] ) and ( [ delta_tk ] ) together with ( [ deltat ] ) , we therefore have where and to achieve a good fidelity between the state that we actually prepare and our `` target '' state , we want the features to be as close as possible to the target values defined in eq .( [ deltas ] ) .this motivates the formulas ( [ eq : alpha_k][eq : t_k ] ) for which are obtained from the formulas ( [ delta0][eq : alpha_ksquared ] ) by ( i ) replacing the features by the targets , ( ii ) replacing the widths by the measured values , and ( iii ) by rounding to the nearest integer .in the above description of the algorithm , we have not specified how to choose the parameters , and as a function of ( or , alternatively , of the initially given parameters and ) .the optimal choice for these parameters depends on the estimates obtained in the quantum counting step . in this sectionwe provide a rather generous worst case analysis which shows that the choice guarantees , with probability greater than , the fidelity bound latexmath:[\[\label{eq : realfidelitybound } valid for arbitrary values of the . in most actual applications , much larger values of the accuracy parameters , and will be sufficient to guarantee this fidelity bound .we now show that eq .( [ eq : t_k ] ) , for the times which we motivated in the previous section , implies the fidelity bound ( [ eq : realfidelitybound ] ) under the assumption that the parameters , and are chosen as in eq .( [ eq : worstcase ] ) .our starting point will be two sets of expressions for the , namely the definition of the , eqs .( [ eq : alpha_k][eq : t_k ] ) , in terms of the measured values and the target values , and eqs .( [ delta0][eq : alpha_ksquared ] ) above in terms of the actual values and . in subsection[ sec : hk - minus - deltak ] we will derive an upper bound on the error .this bound shows how accurate our algorithm is in achieving the target height , , for the features .the overall accuracy of our algorithm , however , also depends on how accurate it is in achieving the correct width of the features .this accuracy is determined by the fraction of values for which ( see figure [ figure1 ] ) . in subsection[ sec : exceptions ] we obtain an upper bound on this fraction . in subsection[ sec : stage1fidelity ] , we derive the fidelity bound ( [ eq : realfidelitybound ] ) and an upper bound on the probability that the algorithm fails due to a nonzero outcome of the measurement of the auxiliary qubits . and finally , in subsection [ sec : stage2fidelity ] , we show that the bound ( [ eq : tildephibound ] ) on the phases implies the overall fidelity bound ( [ eq : properfidelitybound ] ) .it is convenient to perform the proof of the bound on in three steps .for this we note that depend on the values of , and . 
in subsection[ subsec : rangeforgammas ] we determine the range of possible values for and that corresponds to the uncertainty in the measured values of .similarly in subsection [ subsec : rangeforomegas ] we determine the error range for , and finally , in subsection [ subsec : proofofthebound ] we complete the proof of the bound .equation ( [ fortk ] ) provides an explicit expression for in terms of and : in what follows it will be convenient to use auxiliary quantities defined as the meaning of becomes clear in comparison with ( [ t_kf1 ] ) : are the time intervals that correspond to the target features .unlike , are not necessarily integers. it will be convenient to rewrite the definition of given in eqs .( [ eq : alpha_k][eq : t_k ] ) in a slightly modified form : where we use the following definitions : and where it is also convenient to define in this notation eq .( [ tau_kf1 ] ) can be rewritten as directly from the definitions we have by direct calculation we get the notation is an abbreviation for a double inequality ( see the appendix ) . since we obtain and similarly since , we therefore have and similarly in the above formulas we have used the mean value theorem that states that for any function that is continuous on the interval , where is some constant , and differentiable on we can write where denotes the derivative of at some point .this gives , for example , that for the error bounds ( [ approxgammatk ] ) and ( [ approxgamma0 ] ) were obtained by simplifying the somewhat tighter but unwieldy bounds using these methods .+ the aim of this subsection is to determine the ratio between and the true value given by equations ( [ tildeomega_k ] ) and ( [ omega_k ] ) respectively . using the mean value theorem we have , by definition , ^ 2 } } \big(\pm\frac{2n_k}{2^an}\frac{\eta_c}{\eta_g-\eta_c}\big)\,,\end{aligned}\ ] ] where is a real number such that .this implies since we have and therefore it now remains to find a bound on that is linear in .we have , by definition , since we obtain and therefore during the stage our algorithm creates a feature of height where is some initial phase . on the other hand , the target height is hence directly from the definition we obtain using eqs .( [ approxgammatk ] ) , ( [ approxgamma0 ] ) , and ( [ approxomega ] ) , using the fact that and using the inequality ( [ arcsininequality ] ) we have now using ( [ meanvalueexamples ] ) and the fact that we derive from the above equation since [ see eqs.([eq : worstcase ] ) and ( [ eq : epsilonbound ] ) ] , we have that , and we can therefore use the inequality ( [ arccosinequality ] ) to show that since , we obtain the bound when describing our algorithm in section [ sec : algorithm ] we have introduced functions and to distinguish two different approximations to the target function .namely , if is an approximation of which is defined by the oracles , then also takes into account the fact that we may not know the exact values of . in our algorithmwe therefore use as our target function which coincides with everywhere apart for a small fraction of values of for which . in this sectionwe obtain an upper bound on this fraction which will then be used in the next section where we derive bounds on the fidelity and the failure probability . for all , we have . for , we have , and hence we now consider , for each , all values of such that . 
for these values of , we have since , we have hence and finally since , we find that for at most values , where for , eq .( [ psikminusone ] ) can be rewritten in the form where let us define using this definition we have where we have used the fact that for .using ( [ h_delta_bound ] ) , and since we obtain , this implies rewriting eq .( [ eq : pdoubleprime ] ) in terms of the coefficients , we can write for all with a possible exception of at most values ( see sec . [sec : exceptions ] ) .let be the set of exceptional values of for which .we have since , and are all bounded from above by we obtain by definition of and since is normalized we get in order to continue we need to calculate .this can be done by examining the normalization condition .this condition reads using ( [ d ] ) and ( [ ppp_in_terms_of_c ] ) this can be rewritten as or where us define since is normalized , equation ( [ justbeforequadratic ] ) gives a quadratic equation for : where and since and are bounded from above by and since contains at most elements ( see sec .[ sec : exceptions ] ) , we obtain with the help of eq.([bound_on_d ] ) similarly , since we have since and , see eqs . ( [ mu ] ) and ( [ eq : worstcase ] ) respectively , we obtain together with eq .( [ fidelity_bound_almost ] ) this gives the lower bound on the fidelity , where we have observed that and used the bounds , , which follow from our settings given in eq .( [ eq : worstcase ] ) .the failure probability is we now show that the choice together with the inequality ( [ eq : tildephibound ] ) , i.e. , , implies the overall fidelity bound ( [ eq : properfidelitybound ] ) .the proof is straightforward . \big| \cr & \ge & \sum_x \sqrt{p(x)\tilde p(x ) } \ , \cos[\phi(x)-\tilde \phi(x ) ] \cr & \ge & \sum_x \sqrt{p(x)\tilde p(x ) } \ , \big ( 1- [ \phi(x)-\tilde \phi(x)]^2/2 \big ) \cr & \ge & \sum_x \sqrt{p(x)\tilde p(x ) } \ , ( 1-\epsilon'^2/8 ) \cr & = & |\langle\psi_{\tilde p}|\psi_p\rangle| \ ; ( 1-\lambda ' ) \cr & > & 1-\lambda-\lambda ' \;.\end{aligned}\ ] ]in this section we provide worst case upper bounds on the resources required by the algorithm .we distinguish between the resources that are needed for the state preparation part of the algorithm ( subsection [ subsec : stateprep ] ) and the resources that are needed by the quantum counting that precedes the actual state preparation ( subsection [ subsec : qcounting ] ) . from our settings ( [ eq : worstcase ] )we obtain we thus obtain for the number of auxiliary qubits here we give an upper bound on the time resources needed by the algorithm .the construction of one feature requires at most oracle calls . using inequality ( [ inequality112 ] )we can therefore write from ( [ eq : worstcase ] ) we have since there are at most features and because and we therefore have that the total number of oracle calls , , satisfies the bound consider an oracle on the set of possible values of .using standard techniques we can count the number of solutions of within the absolute error where is the number of auxiliary qubits needed by the standard quantum counting routine .we want , where is the counting accuracy introduced earlier .this connects the desired counting accuracy with the number of auxiliary qubits , where . solving this equation for in the case , we have where .we see that , the bigger the value of , the bigger has to be in order to give the required counting accuracy .we therefore set to the minimum , i.e. , ( this doubles the range of values to ensure reliable counting , see e.g. 
) .it is easy to check that the dependence of on is monotonic .as we vary in the range 0 to , the corresponding values of vary between the limits and .it follows that the required number of auxiliary working qubits needed for counting with accuracy is .thus we choose .this choice guarantees the required accuracy of counting irrespective of the true value of .the above counting procedure does not output the correct result with probability 1 .for the procedure to work correctly with probability we have to increase the number of auxiliary qubits from to which is given by the number , , of oracle calls that is required by the counting procedure is substituting and using eq .( [ eq : worstcase ] ) we obtain since there are at most features the total number of oracle calls needed by the counting stage of our algorithm is bounded as conclusion , we have described a quantum algorithm to prepare an arbitrary state of a quantum register of qubits , provided the state is initially given in the form of a classical algorithm to compute the complex amplitudes defining the state .for an important class of states , the algorithm is efficient in the sense of requiring numbers of oracle calls and additional gate operations that are polynomial in the number of qubits .the following table lists , for each stage of the algorithm , upper bounds on the number of oracle calls and the number of auxiliary qubits needed . [ cols="<,^,^ " , ] the bounds are not tight and can be improved by a more detailed error analysis .the total number of quantum gate operations depends on the implementation of the oracles .it is proportional to the number of oracle calls times a factor polynomial in if the functions and can be efficiently computed classically .depending on the nature of the function and the prior information about , the algorithm we have described in this paper can be optimized in a number of ways .for instance , the counting stage is the most expensive in terms of both oracle calls and additional qubits .if for some reason the numbers characterizing the oracles are known in advance , the counting stage can be omitted , leading to considerable savings .furthermore , in this case the fidelity bound can be guaranteed with probability 1 , i.e. , we can set . in some casesthe algorithm can be simplified if , instead of using the oracles defined in eq .( [ eq : ok ] ) , one uses oracles that return the -th bit of the expression .the general conclusions of the paper continue to hold for this variant of the algorithm , which we analyze in detail in ref . . 
finally , by using generalizations of grover s algorithm in which the oracles and the inversion about the mean introduce complex phase factors it is possible to reduce the number of auxiliary qubits needed in the preparation stage of the algorithm .this leads to a reduction in the number of required oracle calls , and could also be important in implementations where the number of qubits is the main limiting factor .this work was supported in part by the european union ist - fet project ediqip .in this paper we have made a frequent use of the following convention .let , and be three numbers .the notation is then understood to be equivalent to the double inequality furthermore , let , and be functions .the notation is then equivalent to the statement that can be written in the form where for all .here we prove the following inequalities , which implies that for any by inspection of -function we have that for the maximum value of the difference is achieved for : in the case of the following equality holds applying this equality to the right hand side of ( [ maxarcsindifference ] ) we obtain let us now look for a constant such that since is a decreasing function the above requirement is equivalent to according to the mean value theorem , there exists such that and therefore the requirement ( [ oneminusnu ] ) can be rewritten as it is clear that this requirement is guaranteed to be satisfied if we set .indeed , for any nonnegative which includes all possible values of that can correspond to and . since guarantees that ( [ arccosoneminusnu ] )is satisfied , we obtain from ( [ aftergradshteynryzhik ] ) the case of negative can be treated in an analogous fashion leading to the inequality the required inequality ( [ arcsininequality ] ) follows trivially .moreover , since we also obtain ( [ arccosinequality ] ) as required .
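As a complementary sanity check on the phase-error step used in the overall fidelity bound, the following short numerical sketch verifies the chain of inequalities relating the overlap to the summed geometric means of the two distributions when all phase errors are bounded by half the phase accuracy. The distributions and phases below are random toy data, not quantities from the algorithm itself.

```python
import numpy as np

# Numerical check of the phase-error step of the fidelity chain:
# |<psi_ptilde|psi_p>| >= sum_x sqrt(p*ptilde)*cos(dphi) >= (1 - eps'^2/8)*sum_x sqrt(p*ptilde)
# whenever |dphi| <= eps'/2.  Toy data only.
rng = np.random.default_rng(1)
n, eps_prime = 64, 0.2

p = rng.random(n);  p /= p.sum()
pt = rng.random(n); pt /= pt.sum()
dphi = rng.uniform(-eps_prime / 2, eps_prime / 2, n)     # bounded phase errors

overlap    = abs(np.sum(np.sqrt(p * pt) * np.exp(1j * dphi)))
cos_bound  = np.sum(np.sqrt(p * pt) * np.cos(dphi))
quad_bound = (1 - eps_prime**2 / 8) * np.sum(np.sqrt(p * pt))

assert overlap >= cos_bound >= quad_bound
print(overlap, cos_bound, quad_bound)
```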
We describe a quantum algorithm to prepare an arbitrary pure state of a register of a quantum computer with fidelity arbitrarily close to 1. Our algorithm is based on Grover's quantum search algorithm. For sequences of states with suitably bounded amplitudes, the algorithm requires resources that are polynomial in the number of qubits. Such sequences of states occur naturally in the problem of encoding a classical probability distribution in a quantum register.
random fluctuations due to low - copy number phenomena inside the microscopic cellular volumes have been an object of intense study in recent years .it is now widely recognized that deterministic modeling of chemical kinetics is in many cases inadequate for capturing even the mean behavior of stochastic chemical reaction networks , and several studies have explored the discrepancy between deterministic and stochastic system descriptions . despite the all - pervasive stochasticity , cellular processes and responses proceed with surprising precision and regularity , thanks to efficient noise suppression mechanisms also present within cells .the structure and function of these mechanisms has been a topic of great interest , and in many cases still remains unknown .moreover , recent theoretical works on enzymatic reaction schemes with a single or a few enzyme molecules have repeatedly shown that low - copy enzymatic reactions demonstrate a stochastic behavior that can lead to markedly different responses in comparison to the predictions of deterministic enzyme kinetic models . in this workwe investigate the properties of a possible noise suppression mechanism for an enzymatic reaction with a small and fluctuating number of active enzymes . under certain conditions , presented in , this system displays an increased sensitivity to enzyme fluctuations , a phenomenon that has been termed _stochastic focusing_. stochastic focusing has been presented as a possible mechanism for _ sensitivity amplification _ : compared to a deterministic model of a biochemical network , the mean output of the stochastic version of the system can display increased sensitivity to changes in the input , when the input species has sufficiently low abundance .consequently , it has been postulated that stochastic focusing can act as a signal detection mechanism , that converts a graded input into a `` digital '' output .the basic premise of has been that fluctuations in the `` input '' species are sufficiently rapid , so that any rates that depend on the signaling species show minimal time - correlations .we show that if this condition fails , i.e. when the fluctuations in the input signal are slow compared to the average lifetime of a substrate molecule , stochastic focusing can result in a dramatic increase in substrate fluctuations , a fact also acknowledged in the original publication .increased sensitivity to input changes does not only come at the cost of extremely high output noise levels ; as we will demonstrate here , systems operating in this regime are also extremely sensitive to variations in reaction rates , which in fact precludes robust signal detection by stochastic focusing .for the first time since its introduction we could study the steady - state behavior of this system analytically , by formulating and solving the equations for the conditional means and ( co)variances .motivated by our observations on the open - loop , stochastically focused system , we investigated the system behavior in the presence of a plausible feedback mechanism .we treated the enzyme as a noisy `` controller '' molecule whose purpose is to regulate the outflux of a reaction product by directly or indirectly `` sensing '' the fluctuations in its substrate . 
for the sake of simplicity and clarity, we focused on very simple and highly abstracted mechanisms , but we should remark that several possible biochemical implementations of our feedback mechanism can be considered .our premise was that the great open - loop sensitivity of a stochastically focused system with relatively slow input fluctuations creates a system with very high open - loop gain , which in turn can be exploited to generate a very robust closed - loop system once the output is connected to the input .our simulation results confirmed this intuition , revealing a dramatic decrease in noise levels and a significant increase in robustness in the steady - state mean behavior of the closed - loop system .such a system no longer functions as a signal detector , but rather behaves as a strong homeostatic mechanism .moreover , we observed that the steady - state behavior of the means in a stochastically focused system with feedback can be captured quite accurately by the corresponding deterministic system of reaction rate equations , despite the fact that the stochastic system still operates at very low copy numbers .noise attenuation through feedback and the fundamental limits of any feedback system implemented with noisy `` sensors '' and `` controllers '' have been studied theoretically in the recent years , and some fairly general performance bounds have been derived in .we should note that , despite its generality , the modeling framework assumed in does not apply in our case , since our system contains a controlled degradation reaction , whereas considers only control of production .more specifically , examines the case where a given species regulates its own production through an arbitrary stochastic signaling network . in thissetting , it is shown that , no matter the form or complexity of the intermediate signaling , the loss of information induced by stochasticity places severe fundamental limits on the levels of noise suppression that such feedback loops can achieve .on the other hand , it is still unclear what type of noise suppression limitations are present for systems such as the one studied here , and a complete analytical treatment of the problem of regulated substrate degradation seems very difficult at the moment . a first attempt to analyze the noise properties of regulated degradation was presented in , which examined such a scheme using the linear noise approximation ( lna ) . as the authors of that work pointed out , however , the lna is incapable of correctly capturing the system behavior ( i.e. means and variances ) beyond the small - noise regime , due to the nonlinear system behavior .we verified this inadequacy , not only for lna , but for other approximation schemes as well , such as the langevin equations and various moment closure approaches .perhaps this is the reason why , contrary to regulated production , the theoretical noise properties of regulated substrate degradation have received relatively little attention . with the rapid advancement of single - molecule enzymatic assays , we expect that the study of noise properties of various low - copy enzymatic reactions , including the proposed feedback mechanism described here , will soon be amenable to experimental verification .it also remains to be seen whether the proposed feedback mechanism is actually employed by cells to achieve noise attenuation in enzymatic reactions . 
at any rate ,the noise attenuation scheme presented here could be tried and tested in synthetic circuits through enzyme engineering .in this section we formulate and analyze a simple biochemical reaction network capable of exhibiting the dynamic phenomenon of stochastic focusing .it is shown that in the stochastic focusing regime , the system acts as a noisy amplifier with an inherent strong sensitivity to perturbations .it follows that without modifications , the network can not be used under conditions requiring precision and regularity .we consider the simple branched reaction scheme studied in and shown schematically in fig . [ fig1_ol](a ) . in this scheme ,substrate molecules enter the system at a constant influx , and can either be converted into a product or degraded under the action of a low - copy enzyme ( or , equivalently , converted into a product that leaves the system ) .while the number of enzymes in the system is assumed constant , enzyme molecules can spontaneously fluctuate between an active ( ) and an inactive ( ) form .the generality of this model and its sensitivity to variations in the active enzyme levels is further discussed in the supplement ( sections 1 and 2 ) .recent single - enzyme turnover experiments have shown that single enzyme molecules typically fluctuate between conformations with different catalytic activities , a phenomenon called _ dynamic disorder _ . in the simple model considered here, the enzyme randomly switches between two activity states .the stationary distribution of in this case is known to be binomial ; that is , , where is the total number of enzymes in the system and , such that the mean .the basic ( empirically derived ) conditions for stochastic focusing are that the magnitude of active enzyme fluctuations is significant compared to the mean number of active enzymes , while the total number of enzymes is low .moreover , it is assumed that the level of fluctuates rapidly compared to the average lifetime of and molecules . without this assumption ,the noise in can be greatly amplified by and transmitted to .the first assumption ( large enzyme fluctuations and low abundance ) is maintained in our setup .however , we shall dispense with the second assumption .we further postulate that ( possibly a product of upstream enzymatic reactions ) enters the system at a high input flux ( large ) and that there exists a strong coupling between and , in the sense that a few active enzymes can strongly affect the degradation of .( denoted by throughout ) as a function of the average number of active enzyme molecules ( for . since , the average of displays the same behavior as the substrate .red line : steady - state of the ode model for the same parameter values .the large difference ( notice the logarithmic scale ) between the blue and red lines is a consequence of stochastic focusing .* ( c ) * upper : substrate ( ) noise statistics as a function of the average number of active enzymes .blue : steady - state fano factor ( variance / mean ; notice the logarithmic scale ) .green : steady - state coefficient of variation ( standard deviation / mean ) .lower : product ( ) noise statistics as a function of the average number of active enzymes .color coding same as in upper panel .both plots were obtained for .all calculations were performed analytically , using the conditional moment equations and the known stationary distribution of . 
] more concretely , the previously stated assumptions imply that the reaction rates must satisfy the following conditions : 1 . ( high influx of ) 2 . ( enzyme fluctuations are slow compared to the average lifetime of a substrate molecule ) 3 . ( strong coupling between enzyme and substrate ) 4 . is small ( e.g. below 10 ) .these conditions are motivated via a short theoretical and numerical analysis in the supplement ( section 1 ) .when they hold , we expect the amount of to fluctuate wildly as varies over time and these fluctuations to propagate to . in the rest, we will refer to this motif as the ( open - loop ) _ slowly fluctuating enzyme _ ( sfe ) system .the computational analysis of this and similar systems has thus far been hindered by the presence of the bimolecular reaction , which leads to statistical moment equations that are not closed , while the presence of stochastic focusing presents further difficulties for any moment closure method . in this work ,we circumvent these difficulties by formulating and solving the conditional moment equations for the means and ( co)variances of and conditioned on the enzyme state ( whose steady - state distribution is known ) .this enables for the first time the analytical study of the steady - state behavior of this system ( more details can be found in the supplement , section 3 ) .chiefly , the equations for the first two conditional moments of the sfe system are in fact closed , i.e. they do not depend on moments of order higher than 2 , and thus do not require a moment closure approximation despite the fact that the unconditional moment equations themselves are open .we next use these analytic equations to shed new light on the properties of the network under consideration . according to the method of conditional moments ( mcm ) ,the chemical species of a given system are divided into two classes .species of the first class , collectively denoted by , are treated fully stochastically , while species of the second class , denoted by , are described through their conditional moments given .more analytically , the mcm considers a chemical master equation ( cme ) for the marginal distribution of , , and a system of conditional means ( ] ) for the species . in the case of the sfe network , by taking and , we see that is independent of and its evolution is described by a cme whose stationary solution is known . moreover , the system of conditional means and ( co)variances of given turns out to be closed .we thus begin by examining the behavior of the system as the enzyme activation rate is varied while keeping other parameters unchanged . assuming a fixed , is directly related to , the average number of active enzymes .thus , any changes in are assumed to be driven by , which means that the two can be used interchangeably .we present the performance of the open - loop system with respect to wherever possible , as we find this more intuitive . the results in fig .[ fig1_ol](b ) show that the stationary means of and ( denoted by angle brackets throughout the paper ) depend very sensitively on , as one would expect from a stochastically focused system .moreover , owing to the relatively slow switching frequency of enzyme states , the stationary distributions of substrate and product are greatly over - dispersed , as shown in fig .[ fig1_ol](c ) . 
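The qualitative behavior described above — substrate and product distributions that are strongly over-dispersed when enzyme switching is slow — can be reproduced with a short Gillespie (SSA) simulation. The sketch below uses illustrative rate constants chosen to mimic conditions 1-4 (high influx, slow enzyme switching, strong coupling, few enzymes); the reaction names and parameter values are assumptions for the sketch, not the values used for the figures.

```python
import numpy as np

# Minimal SSA sketch of the open-loop SFE network.  Rates are illustrative only.
rng = np.random.default_rng(2)

k1, k2, kc, kp = 200.0, 1.0, 50.0, 1.0   # S influx, S->P, enzymatic S degradation, P decay
ka, kd, Ntot   = 0.1, 0.1, 4             # enzyme activation / deactivation, total enzyme copies

S, P, Ea = 0, 0, Ntot // 2
t, T_burn, T_end = 0.0, 50.0, 500.0
acc_t = acc_S = acc_S2 = 0.0

while t < T_end:
    a = np.array([k1, k2 * S, kc * S * Ea, kp * P, ka * (Ntot - Ea), kd * Ea])
    a0 = a.sum()
    dt = rng.exponential(1.0 / a0)
    if t > T_burn:                        # time-weighted averages after burn-in
        acc_t, acc_S, acc_S2 = acc_t + dt, acc_S + dt * S, acc_S2 + dt * S * S
    t += dt
    r = np.searchsorted(np.cumsum(a), rng.random() * a0)
    if   r == 0: S += 1
    elif r == 1: S, P = S - 1, P + 1
    elif r == 2: S -= 1
    elif r == 3: P -= 1
    elif r == 4: Ea += 1
    else:        Ea -= 1

mean_S = acc_S / acc_t
var_S  = acc_S2 / acc_t - mean_S**2
print("mean S  =", mean_S)
print("Fano(S) =", var_S / mean_S)        # much larger than 1 in this slow-enzyme regime
```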
apart from the enzyme activation rate , the catalytic degradation rate ( ) is also expected to affect noise in the system , as it controls both the timescale and magnitude of substrate fluctuations : as increases , the rate of substrate consumption grows as well . on the other hand ,the impact of a change in the number of active enzymes is also magnified .we can study the interplay of and by varying both simultaneously , as shown in fig .[ 2dscan](a ) .although has a much more pronounced effect on substrate and product means and variances , the interplay of and is what determines the overall noise strength in the system , as the third row of plots shows . ) and for ( notice the logarithmic scales ) . *( b ) * closed - loop sfe system with substrate feedback : steady - state means , variances and cvs of substrate and product as a function of ( which determines the average number of active enzymes ) and for .logarithmic scales are preserved to make comparisons with fig .[ 2dscan ] easier , although the range of variation is much smaller in this case . ] from the above analysis , we deduce that the open - loop motif amplifies both small changes in the average number of active enzymes ( fig .[ fig1_ol](b ) ) , as well as temporal fluctuations in the active enzyme levels ( fig .[ fig1_ol](c ) ) : for intermediate values of , the cv and ff of and are much greater than zero .this implies that the instantaneous flux of substrate through the two alternative pathways experiences very large fluctuations , which would propagate to any reactions downstream of . the increased sensitivity of the sfe network to fluctuations in the active enzyme would suggest sensitivity with respect to variations in reaction rates . to verify this, we generated 10000 uniformly distributed joint random perturbations of all system parameters that reach up to 50% of their nominal values .that is , every parameter was perturbed according to the following scheme : ).\ ] ] for each perturbed parameter set , the steady - state conditional moment equations were solved to obtain the means , variances and noise measures for both the substrate and product .the results are summarized in fig .[ fig : pert_open_closed ] ( dashed lines ) , where the large parametric sensitivity of the system can be clearly seen . global sensitivity analysis of the mean , variance and cv histograms reveals that the total number of enzymes ( ) has the largest effect on all these quantities , with the enzyme activation / deactivation rates ( ) coming at second and third place .although one could argue that and are biochemical rates that are uniquely determined by molecular features of the enzyme , the total number of enzyme molecules would certainly be variable across a cellular population . .the black line on the top left plot denotes the ( common ) mean of substrate and product for the nominal parameters . 
on the top right plot ,black lines mark the nominal variance for substrate ( solid ) and product ( dashed ) .* continuous lines * : closed - loop sfe system ( ) with substrate feedback : histograms of steady - state means , variances , cvs and ffs of substrate and product , scaledwidth=80.0% ] , obtained from the same 10000 randomly sampled parameters used for the dashed line histograms .the great reduction in sensitivity of the closed - loop sfe system in comparison to the open - loop can be easily observed .[ fig : pert_open_closed ] taken together , the results of this section and the previous one suggest that the operation of the sfe reaction scheme in fig . [ fig1_ol](a ) as a signal detection mechanism ( the original point made in ) is severely compromised when the system operates in the regime defined by our set of assumptions : besides amplifying enzyme fluctuations , the system responds very sensitively to parametric perturbations .these features render the enzyme a highly non - robust controller of the substrate and product outfluxes , which can fluctuate dramatically in time .in addition , reaction rates have to be very finely tuned to achieve a certain output behavior , for example a given mean and variance , or a given average substrate outflux .it is a well - known fact in control theory that negative feedback results in a reduction of the closed - loop system gain .however , this reduction is exchanged for increased stability and robustness to input fluctuations , and a more predictable system behavior that is less dependent on parameter variations .systems with large open - loop gain tend to also display extreme sensitivity to input and parametric perturbations , and can thus benefit the most from the application of negative feedback .we shall examine the operation of the sfe network under feedback by assuming that ( or ) affects the rate of activation of the enzyme , for example by controlling its activation rate .we will call this new motif the _ closed - loop _ sfe system , to differentiate it from the open - loop system presented above . according to the closed - loop reaction scheme ( fig .[ closed_scheme](a ) ) , the activation rate of becomes ( being or ) , where models activation by , thus creating a negative feedback loop between the system input and output .our only requirement for is to be nondecreasing ( e.g. a hill function ) . to facilitate our simulation - based analysis, we will assume that arises from the local , piecewise linear approximation of a hill function , as shown in fig .[ closed_scheme](b ) . in this case , the form of is controlled by two parameters : ( the `` gain '' ) and ( the point beyond which feedback is activated ) .finally , can be thought of as the `` basal '' activation rate in the absence of the regulating molecule .we should note that the proposed form of feedback regulation is fairly abstract and general enough to have many alternative biochemical implementations .it is possible , for example , for the enzyme activity to be allosterically enhanced by the cooperative binding of or ( termed substrate and product activation respectively in the language of enzyme kinetics ) , giving rise to a hill - like relation between effector abundance and enzyme activity . 
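One plausible concrete reading of this piecewise-linear activation rate, used purely for illustration (the parameter names and default values below are placeholders, and any saturation level of the original function is omitted), is sketched next; it can be plugged directly into the activation propensity of an SSA simulation.

```python
def activation_rate(x, k_base=0.01, gain=1.0, x0=5.0):
    """Enzyme activation propensity per inactive enzyme molecule.

    A hedged reading of the feedback law in the text: a basal rate k_base plus a
    nondecreasing piecewise-linear term that is zero below the activation point x0
    and rises with slope `gain` above it (a local approximation of a Hill function).
    All parameter values are placeholders.
    """
    return k_base + gain * max(x - x0, 0.0)

# usage inside an SSA step: the propensity of Ei -> Ea becomes
#   activation_rate(S) * (Ntot - Ea)     for substrate feedback, or
#   activation_rate(P) * (Ntot - Ea)     for product feedback
```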
in this workwe will work with the abstract activation rate function defined above .( which determines the average number of active enzymes ) and for .,scaledwidth=90.0% ] in the following we analyze the closed - loop sfe network behavior by studying how the sfe network properties described in the previous sections are transformed under feedback . herewe study the sfe network under the influence of negative feedback .we should point out that we characterize the open- and closed - loop systems with respect to the same features ( noise and robustness ) , not to directly compare them , but because these features play an important role in the function of both mechanisms .whenever we use the open - loop system as a baseline for assessing closed - loop system properties , scale - independent measures are used since this allows for the evaluation of relative distribution spreads . this principleis only disregarded in fig .[ n_pert_cfeed ] below . for our first test, we use the same settings and parameters as those of fig . [2dscan](a ) , only this time we add a feedback term from the substrate to the enzyme activation rate .increasing the gain or shifting the activation point to the left results in a decrease of both means and variances of substrate and product .for the ranges of and values considered , the means change by at most a factor of 2.5 , while the variances by about 5 times . at the same time , the cvs vary by about 50% and the corresponding fano factors by a factor of 5 .moreover , as the analysis in the supplement ( section 5 ) shows , the cvs of both species become relatively flat as and increase , while the fano factor gets very close to 1 as increases for small values of , indicating that the resulting substrate and product stationary distributions are approximately poissonian in this regime . for the analysis that follows , we fix and in the feedback function . with the above choice of feedback parameters ,we first study the sensitivity of the closed - loop sfe system to variations of the two key parameters , and . as fig .[ 2dscan](b ) demonstrates , means and variances ( and , consequently , cvs ) of substrate and product become largely independent of , except for very large values of ( similar results are obtained for product feedback ) .moreover , noise of substrate and product is dramatically reduced in comparison to the open - loop sfe system , while the variation of means and variances is now quite small , despite the large ranges of and values considered .it is also worth noting that the fano factors of both substrate and product are very close to one for a large range of parameters . interestingly ,if we quantify noise reduction by the ratio of open - loop vs. closed - loop cv , we observe that noise reduction is maximal where the open - loop sfe system noise is greatest , as demonstrated by comparing fig .[ closed_scheme](c ) with fig .[ 2dscan](a ) .another striking effect of feedback regulation of enzyme activity is that the closed - loop sfe system becomes much less sensitive to parameter variations in comparison to the open - loop case .applying the same parametric perturbations described in [ sub_pert_open ] , we obtain the histograms of fig . [fig : pert_open_closed ] ( continuous lines ) . 
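For reference, one plausible reading of the joint perturbation scheme used for these robustness histograms (the exact formula is not reproduced here) is to scale every parameter independently and uniformly by up to fifty percent of its nominal value, as in the sketch below; integer-valued parameters such as the total enzyme number would then be rounded.

```python
import numpy as np

# One plausible reading of the +/- 50% joint perturbation scheme; nominal values are placeholders.
rng = np.random.default_rng(3)
nominal = {"k1": 200.0, "k2": 1.0, "kc": 50.0, "kp": 1.0,
           "ka": 0.1, "kd": 0.1, "N": 4}

def perturb(params, rng, rel=0.5):
    return {name: value * (1.0 + rng.uniform(-rel, rel)) for name, value in params.items()}

perturbed_sets = [perturb(nominal, rng) for _ in range(10000)]
# each set would then be fed to the conditional-moment equations (open loop) or to
# SSA (closed loop) to build the histograms of means, variances, CVs and Fano factors
```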
as it becomes apparent , the histograms corresponding to the closed - loop sfe system are several orders of magnitude narrower compared to the open - loop .moreover , despite the relatively large parametric perturbations , variability in substrate and product statistics of the closed - loop sfe system is largely contained within an order of magnitude . as it was pointed out in [ sub_pert_open ] , variability in biochemical reaction rates can be considered `` artificial '' , however changes in the number of enzymes , , are to be expected in a cellular population .it is therefore interesting to study how variations in alone are propagated to the substrate and product statistics . assuming that both the open- and closed - loop systems operate with the same average number of active enzymes for the `` nominal '' value of , fig .[ n_pert_cfeed](a , b ) shows how the substrate mean and variance vary as is perturbed around this value , both in the open- and closed - loop systems ( with substrate feedback ) . to achieve the same average number of active enzymes for , the closed - loop sfe system was simulated first , and the mean number of active enzymeswas recorded .this number was then used to back - calculate an appropriate value ( keeping fixed ) for the open - loop sfe system .panel ( c ) also shows how the distribution of active enzymes differs in the two systems for .the cyan line corresponds to a binomial distribution , , where is determined by the and values of the open loop .the red distribution is obtained from simulation of the closed loop and is markedly different from a binomial .the difference is especially significant at the lower end , as small values of lead to fast accumulation of .similar results are obtained for product feedback . , , , , , and feedback parameters , .in the open - loop sfe system all parameters were kept the same , except for , which was set to 0.1572 to achieve the same mean of that the closed - loop sfe system achieves for .,scaledwidth=80.0% ] a further remarkable by - product of feedback in the closed - loop sfe system is the fact that the mean of the stochastic model ends up following very closely the predictions of the ode equations for the deterministic system .this behavior becomes more pronounced as the number of available enzymes ( ) grows , while the average number of active enzymes ( ) remains small . under this condition, one can think of enzyme activation in the original system as a zeroth - order reaction with rate , and the active enzyme abundance to be described by a birth - death process with birth rate , death rate and poisson stationary distribution with parameter .the accuracy of the ode approximation to the mean substrate levels in the case of substrate feedback can be demonstrated using the same type of parametric perturbations with those employed in [ sub_pert_open ] .all nominal parameters were kept the same for this test , except for , and , which were set equal to 1 , 5 and 100 respectively ( , ) .we compare the mean of the stochastic model with the ode prediction in the case of substrate feedback with , and define the relative error where is the equilibrium solution of the ode model . 
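A sketch of the deterministic benchmark used in this comparison is given below: the mean-field rate equations of the closed-loop system with substrate feedback are integrated to steady state, and the resulting substrate level would then be compared against an SSA estimate of the stationary mean. The rate-equation form, the feedback term and all parameter values are illustrative assumptions, not the exact settings behind the reported 1.5% error.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mean-field ODE benchmark for the closed-loop system (illustrative parameters).
k1, k2, kc, kp   = 200.0, 1.0, 50.0, 1.0
kd, Ntot         = 0.1, 100
k_base, gain, x0 = 0.0, 5.0, 1.0

def f_act(s):                              # piecewise-linear feedback (see sketch above)
    return k_base + gain * max(s - x0, 0.0)

def rhs(t, y):
    S, P, Ea = y
    return [k1 - k2 * S - kc * S * Ea,
            k2 * S - kp * P,
            f_act(S) * (Ntot - Ea) - kd * Ea]

sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 0.0, 0.0], method="LSODA", rtol=1e-8, atol=1e-8)
S_ode, P_ode, Ea_ode = sol.y[:, -1]

# relative error against an SSA estimate of the stationary substrate mean, e.g.
#   rel_err = abs(mean_S_ssa - S_ode) / mean_S_ssa
print("ODE steady state:", S_ode, P_ode, Ea_ode)
```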
in thissetting , a set of 5000 random perturbations leads to an average relative error of 1.5% with standard deviation 0.96% , which clearly shows that the ode solution captures the mean substrate abundance with very good accuracy indeed ( note that the same holds for the mean of , since it depends linearly on the mean of ) .very similar results are obtained in the case of product feedback .the above observations are even more striking , if we take into account 1 ) the fact that the closed - loop sfe system is still highly nonlinear and 2 ) the intrinsic property of stochastically focused systems to display completely different mean dynamics when compared to the ode solutions .an explanation of this behavior can be given by examining the moment differential equations . in the limiting case considered in this section , denoting the production rate of active enzyme , we obtain the last two terms on the right - hand side of denote the covariance of the substrate enzymatic degradation rate with active enzyme and the covariance of the enzyme activation rate with the substrate .both covariances are expected to be positive at steady state , which implies that the terms act against each other in determining the steady - state covariance of substrate and active enzyme . in turn, a small value of this covariance ( compared to the product ) implies that the mean of can be approximately captured by a mean - field equation , where has been replaced by .this is indeed the case in our simulations , where turns out to be times smaller than . on the other hand ,the open - loop value of is about 30 times larger than the closed - loop one .this comes as no surprise , as one expects substrate and active enzyme to display a strong negative correlation , which is the cause of the discrepancy between stochastic and deterministic descriptions of stochastically focused systems .similar observations can be made when is small ( e.g. around 10 ) , however the relative errors become at least one order of magnitude larger .we believe that this can be attributed to the fact that the enzyme activation propensity depends both on the abundance of inactive enzyme and the substrate / product abundance , which increases the inaccuracy of the odes .as we have already demonstrated , the closed - loop sfe system is remarkably robust to parametric perturbations of the open - loop model .however , in all of our numerical experiments we have kept the parameters of the feedback function fixed to a few different values . herewe examine the opposite situation , in which only the controller parameters are free to vary while the rest are held constant .we therefore consider the problem of regulating the mean of around a fixed value with feedback from .the problem can be posed as follows : where both the mean and variance of depend on the feedback function parameters .[ optimal_feedback ] shows the contour lines of , obtained via stochastic simulation over a wide range of and values for .it can be observed that is more sensitive to than : beyond a certain value , the function quickly levels off . based on our simulation runs, the optimal feedback parameters turned out to be ( the maximum value considered for the plot ) and ( given the inevitable uncertainty in due to sampling variability , the true optimal value should be close to this ) .the optimal feedback function therefore resembles a `` barrier '' : for it is zero , while it rises very steeply beyond . 
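A schematic of this kind of grid evaluation over the feedback parameters is sketched below. The penalty form of the objective (variance plus a weighted squared deviation of the mean from its target) and the dummy stand-in for the moment estimator are assumptions made so that the sketch runs end to end; in practice the mean and variance of the product would come from stochastic simulation, as described above.

```python
import numpy as np
from itertools import product

def simulate_moments(gain, x0):
    # dummy stand-in so the sketch runs; replace with an SSA estimate of the
    # stationary mean and variance of P for these feedback parameters
    mean_P = 50.0 / (1.0 + gain * max(50.0 - x0, 0.0) / 10.0)
    return mean_P, mean_P          # Poisson-like stand-in: variance ~ mean

target, weight = 10.0, 100.0       # assumed penalty formulation, not the paper's exact objective
gains      = np.logspace(-2, 2, 9)   # logarithmic grids, as in the contour plot
thresholds = np.logspace(-1, 2, 9)

best = None
for gain, x0 in product(gains, thresholds):
    mean_P, var_P = simulate_moments(gain, x0)
    J = var_P + weight * (mean_P - target) ** 2
    if best is None or J < best[0]:
        best = (J, gain, x0)

print("best (J, gain, x0):", best)
```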
for ,obtained by evaluating the function via stochastic simulation on a logarithmic grid in the parameter space.,scaledwidth=80.0% ] note that the mean and variance of both depend on , but neither quantity is available in closed form as a function of the feedback parameters or obtainable from a closed set of moment equations .thus , had to be evaluated on a grid with the help of stochastic simulation . alternatively , as we show in the supplement ( section 6 ), one can exploit the behavior presented in the previous section , and optimize a similar objective function by directly evaluating the required moments of using a simple moment closure approximation based on the _ method of moments_. this scheme , introduced in , provides very accurate approximations of the mean and variance at a fraction of the computational effort , thus allowing optimization to be carried out very efficiently .the optimal parameters for the approximate system can be used as starting points in the optimization of .in this work we have examined the behavior of a branched enzymatic reaction scheme .this system has already been shown to display stochastic focusing , a sensitivity amplification phenomenon that arises due to nonlinearities and stochasticity whenever only a few enzyme molecules are present in the system .we have additionally shown that when the enzyme activity evolves on a slow timescale compared to that of the substrate , very large fluctuations can be generated in the system . moreover ,the dynamics of the system is extremely sensitive to variations in its reaction rates .both these observations imply that this simple model is not appropriate for robust signal detection .we asked how the system behavior would change in the presence of a feedback mechanism , so that the `` controller '' molecule ( ) could sense the fluctuations in ( its substrate ) or ( the product of the alternative reaction branch ) .we have shown that noise decreases dramatically in the presence of feedback , while the robustness of the average system behavior is boosted significantly .consequently , the focused system with feedback ends up behaving almost as predictably as a mean - field ode model , even when the number of active enzymes is very small .there exist several biochemical systems which in certain aspects match the main ideas behind the sfe motif , i.e. display stochastic fluctuations in enzyme activity / abundance and substrate / product feedback activation .for example , it was recently discovered that the guanine nucleotide exchange factor sos and its substrate , the ras gtpase , are involved in a feedback loop where sos converts ras - gdp ( inactive ) to ras - gtp ( active ) and , in turn , active ras allosterically stabilizes the high - activity state of sos .another prominent example is microrna post - transcriptional regulation of gene expression , where a microrna may mediate the degradation of a target mrna , while the protein arising from this mrna in turn activates the microrna transcription .yet another instance is the heat shock response system in _ : here the factor , which activates the heat - shock responsive genes , is quickly turned over under the action of the protease ftsh at normal growth temperatures . 
after a shift to high temperature , is rapidly stabilized and at the same time it also activates the synthesis of ftsh .the ftsh - mediated degradation of is under negative feedback .finally , it is known that mrna decapping ( a process that triggers mrna degradation ) is controlled by the decapping enzyme dcp2 , which fluctuates between an open ( inactive ) and closed ( active ) form .experimental evidence suggests that the closed conformation of the enzyme is promoted by the activator protein dcp1 together with the mrna substrate itself .we should stress , however , that it is still unclear if any of the aforementioned examples display all the dynamic features of the motif considered in this work .speaking generally , due to the required levels of measurement accuracy , it is difficult to find examples that exactly match the conditions considered here with current experimental techniques .however , it is certainly conceivable that this will be achieved in the future .an interesting feature of our system is that homeostasis is achieved with a very small number of controller molecules ( in the order of 10 ) , which are able to maintain the output at a very low level with fluctuations that are to a good approximation poissonian .we have considered two alternative feedback schemes : in the first the substrate directly affects the activation rate of the enzyme ( a case of substrate activation ) , while in the second the product of the alternative reaction branch is used as a `` proxy '' for the substrate abundance .note that the flux through the - branch is many orders of magnitude smaller than the flux in the -regulated branch .the - branch can thus be thought to act as a `` sensor mechanism '' , used to control the high - flux branch of the system . to the best of our knowledge, this type of feedback has not yet been observed in naturally occurring reactions .finally , it is worth to note that one could achieve this type of regulation with an `` unfocused '' system , in which the coupling between enzyme and substrate ( parameter ) would be much smaller .this would imply , however , that the number of enzyme molecules needed to achieve the same substrate levels would have to be much greater , and this could entail an added cost for a living cell . in summary , to regulate a low - copy , high - flux substrate via an enzymatic mechanism such as the one considered here , there are three possibilities : a ) use of a low number of controller enzymes and strong coupling between enzyme and substrate ( which results in stochastic focusing and noise ) , b ) use of a high - copy enzyme and weak coupling ( with the associated production cost ) or c ) use of a low - copy controller with feedback : an alternative which , as we have demonstrated , leads to a remarkably well - behaved closed - loop system .it is thus conceivable that cellular feedback mechanisms have evolved to exploit the nature of stochastically focused systems to achieve regulation of low - copy substrates with the minimal number of controller molecules .we expect that the rapid development of experimental techniques in single - molecule enzymatics will soon enable the experimental verification of our findings and possibly the discovery of similar noise attenuation mechanisms inside cells .finally , our results can be seen as a first step towards the rational manipulation of noise properties in low - copy enzymatic reactions .this work was financially supported in part by the swiss national science foundation ( a.m .- a . and m.k . 
) and the swedish research council within the upmarc linnaeus center of excellence ( s.e . and p.b . )the main point of our analysis here is to determine the _ sensitivity _ of a branched - reaction product to changes in the activation rate of an enzyme and in this way provide some justification for our modeling choices .the rationale behind this analysis is that one can not hope to control the mean - let alone the variance - of a product , if its statistics are not sensitive to changes in the enzyme . with this in mind , we examine the following branched reaction system : the system consists of the following : * an enzyme , that is found in low copy numbers and therefore its fluctuations have a significant impact on system behavior . * a high - copy , low - noise enzyme ( not shown ) , responsible for the conversion of to .alternatively , we can assume that `` matures '' into without the help of an enzyme .in both cases , this reaction can be considered to be first - order , even if it is enzymatic . * two products , and , which are produced from * the substrate species , , which plays the most critical role . enters the system through a zeroth - order reaction , and can have two alternate fates : it can either be converted to or .the initial sensitivity question can be now posed more precisely : which of the two reaction products , or , is more sensitive to changes in the activation rate of ?apart from the system structure , we are making the following assumptions regarding the reaction rates : * the bimolecular reaction rate ( ) is large compared to the first - order reaction rate of ( ) .that is , most of the influx of is directed towards .this assumption amplifies the effect of the nonlinear kinetics in the system ( in the opposite case , the bimolecular reaction could be considered as a perturbation in an almost - linear network ) . *the influx of to the system is high , i.e. is also large . *the rates of are such that has low copies and high noise ( this was already stated above ) , so that we can not replace by its mean in the bimolecular reaction .we now want to see what happens to the steady - state means and when varies . in the case of ,the situation is simple : follows the behavior of , thus , our focus shifts from to in this case . if the mean of is sensitive to changes in , we know that will be sensitive as well .this is precisely the case when stochastic focusing is present . in the case of ,the situation is different : we expect it to be so , because in order to produce a molecule , we need both and to be present . therefore , if hits zero , will inevitably accumulate ( since we assumed that the rate of the alternate path , , is small ) , but no will be produced .instead , while stays at zero , will _ drop _ with a speed that depends on .once returns to non - zero numbers , the accumulated amount of will be converted into in a strong production burst .depending on , this may result in a brief burst of , or may go unnoticed ( when is small enough , the burst will happen , because can not be removed fast enough from the system ) .we thus see that can display a more complex behavior than .we next turn to the mean of . since ,let us assume first that . in this case , the first - moment equation for will give note that the mean of does not appear there , because we assumed no first - order degradation .the equation says that the steady - state mean of the product of with is constant , _ independently of _ . 
turning to the equation of the first moment of , we then get which shows that is also not affected in its mean by changes in the rates of .intuitively , we can see why it is plausible for the mean of to remain constant , by looking at the bimolecular reaction that produces it : when increases , the mean of drops and vice versa .thus , the average production rate of can not change that much and in fact , does not change at all in the limiting case . using this observation, we can understand also why is not sensitive to when is non - zero but small : while in this case the above relations do not hold exactly , we still expect them to hold with good precision ( simulation verifies that ) . overall then , we see that is relatively insensitive to changes in , compared to , and it makes sense to consider as the target for regulation . it should be noted that the above arguments hold only for the _ means _ of and . we do not expect the variance of to be equally insensitive to the noise in , however it is not entirely clear how could be used in a noise - suppressing feedback mechanism . to get a feeling for the scaling of the different constants we consider the equilibrium solutions to the ode model of the network .the following three relations are immediate : assuming the mean lifetime of the substrate and the enzyme to be about the same we may pick the units of time such that . as we are interested in stochasticfocusing , a low copy - number phenomenon , we further prescribe as a _ base case _ that . combined with and this implies with the enzymatic rate parameter still free and given , we can next consider the rate for the inflow of enzyme to be an adjustable parameter which controls the amount of product . using and we arrive at the relation for any given rate and desirable setpoint , can be solved for the value of that makes .we now define the _ gain _ as the response to a 50% decrease of enzyme from the base case , with , does not respond ( i.e. is insensitive ) , and can be considered a perfect transmission . in the table below values from using have been computed for different values of the rate constant .the conclusion is that is required for the network to be responsive . [cols="<,>,>,>,>,>",options="header " , ]we here consider a more realistic enzymatic reaction mechanism for the degradation of the substrate , which includes the formation of an enzyme - substrate complex .as we will see , this more detailed mechanism implies an overall behavior which is similar to the sfe model studied in the main paper the mechanism is displayed on fig .[ fig : enzyme_saturation_mech ] . according to this scheme , active enzyme ( ) and substratehave to first form a complex before is degraded .if the enzyme spontaneously switches to its inactive form while bound to , we assume that the complex dissociates . with the additional reactions ,the open - loop system becomes again analytically intractable , as the conditional moment equations are no longer closed .we will therefore base our analysis on the behavior of the corresponding deterministic system . to further simplify the task, we will first assume a fixed number of active enzymes , , i.e. 
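The kind of gain calculation behind the table above can be sketched from the ODE fixed point of the branched system with a fixed number of active enzymes. The gain definition used below — the relative change of the steady-state product of the non-enzymatic branch when the enzyme level is halved, so that 0 means insensitive and values near 1 mean the change is fully transmitted — is one plausible reading of the text, not the original table's exact formula, and the rate values are placeholders.

```python
# Steady state of the branched system with fixed active enzyme E:
#   S_ss = k1 / (k2 + kcat * E),   and the low-flux product is proportional to k2 * S_ss.
def steady_state_S(E, k1=1000.0, k2=1.0, kcat=1.0):
    return k1 / (k2 + kcat * E)

def gain(E=10.0, kcat=1.0):
    S0, S1 = steady_state_S(E, kcat=kcat), steady_state_S(E / 2.0, kcat=kcat)
    return (S1 - S0) / S0      # the product of the slow branch scales with S, so its relative change is the same

for kcat in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"kcat = {kcat:7.2f}   gain = {gain(kcat=kcat):.3f}")
```

Consistent with the stated conclusion, the computed gain only approaches 1 once the enzymatic rate constant is large enough that the enzymatic branch dominates the substrate outflux.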
ignore the enzyme ( de)activation reactions .moreover , we will assume that the substrate flux towards is much smaller than the flux into the enzymatic reaction and therefore set .under these conditions , one can verify that the steady - state concentration of substrate and free ( active ) enzyme are given by the following expressions : we see that the existence of a finite positive steady - state for the system depends on the relation of with the ratio , which connects the rates of substrate influx and the catalytic efficiency of the active enzyme .in other words , if there is not a sufficient number of active enzymes in the system , there is no finite and positive steady - state ; instead , the number of substrate molecules tends to infinity , as the existing free enzyme molecules are completely saturated and can not process all incoming substrate molecules .when an alternative fate is available for the substrate ( e.g. its conversion to ) , the system will of course be stable , as the alternative pathway will absorb the excess substrate .however , if is small compared to , the resulting steady - state substrate concentration , expected to be close to , will be large . now , let us consider the case where is varied externally .as it approaches from above , the steady - state concentration of quickly diverges to infinity .therefore , it is reasonable to imagine that if active enzymes fluctuate slowly , randomly and close to the critical value , this will result in dramatic fluctuations in .the intuition obtained from the above observations is confirmed by stochastic simulation of the system , as shown on fig .[ fig : c_saturation ] . for the system of fig .[ fig : enzyme_saturation_mech ] for , , , , , , , and ( is the total number of enzymes , as defined previously ) . with the selected rates the average number of active enzymes is close to 6 , while .as expected , we observe an amplification of the enzyme fluctuations in the substrate abundance . ] in summary , the open - loop sfe network with the more detailed enzymatic reaction mechanism displays a behavior analogous to the simplified system considered in the main paper , with the only difference that when enzyme saturation is taken into account _ even a non - zero number active enzymes may be insufficient to prevent large substrate fluctuations _, if the enzyme is not much faster compared to the rate of substrate influx ( i.e. ). provided the total number of available enzymes is greater than the necessary minimum to prevent complete saturation , we expect that addition of feedback from or to the enzyme will , similarly to the simplified case , result in great noise reduction .for the example presented on fig .[ fig : c_saturation ] , the cv of was reduced from about 2 to about 0.38 and the fano factor from about 900 to about 1.73 for and .remarkably , the steady - state means of the system were again very close to the ode steady - state ( obtained from numerical simulation of the deterministic system ) : , and , whereas , and .denote by , by and by . since the total number of enzymes , is assumed equal to , we can write the dynamics of without reference to : we also know that the stationary distribution of is , where .we can thus describe the evolution of moments of conditioned on the state of , following the approach described in .we further simplify the problem by considering the steady - state conditional moments . 
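Once the steady-state conditional moments and the stationary distribution of the active-enzyme count are in hand (their formal definitions are given next), the unconditional moments follow from a simple weighted sum. The sketch below only illustrates that marginalization step; all arrays are placeholders, not solutions of the actual conditional-moment system.

```python
import numpy as np

# Placeholder inputs: in the actual analysis mu1(y) and c20(y) come from solving
# the steady-state conditional-moment equations and p(y) is the stationary
# distribution of the active-enzyme count.
y     = np.arange(0, 11)                 # active-enzyme states y = 0..N with N = 10
p_y   = np.full(y.size, 1.0 / y.size)    # stationary distribution p(y)        (placeholder)
mu1_y = 50.0 / (1.0 + y)                 # conditional mean of the substrate   (placeholder)
c20_y = mu1_y.copy()                     # conditional variance of the substrate (placeholder)

mean_Z1 = np.sum(mu1_y * p_y)                                  # E[Z1] = sum_y mu1(y) p(y)
var_Z1  = np.sum((c20_y + mu1_y ** 2) * p_y) - mean_Z1 ** 2    # law of total variance
print(mean_Z1, var_Z1)
```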
following the notational conventions of , we set :={\color{red}\mu_1(y)}\\ & \lim_{t\to\infty}\mathbb{e}[z_2|y , t]:={\color{orange}\mu_2(y)}\\ & \lim_{t\to\infty}\mathbb{e}[\left(z_1-{\color{red}\mu_1(y , t)}\right)^2|y , t]:={\color{blue}c_{(2,0)}(y)}\\ & \lim_{t\to\infty}\mathbb{e}[\left(z_2-{\color{orange}\mu_2(y , t)}\right)^2|y , t]:={\color{magenta}c_{(0,2)}(y)}\\ & \lim_{t\to\infty}\mathbb{e}[\left(z_1-{\color{red}\mu_1(y , t)}\right)\left(z_2-{\color{orange}\mu_2(y , t)}\right)^2|y , t]:={\color{teal}c_{(1,1)}(y)}\\ & \lim_{t\to\infty}\mathbb{p}[y(t)=y]:=p(y)\end{aligned}\ ] ] the steady - state first- and second - order conditional moments of and are then obtained by solving the following system of linear equations : this system of equations has to be solved for all $ ] to yield , , , and , which in turn can be marginalized over to derive unconditional moments .for example , =\sum_{y=0}^n \mu_1(y)p(y)\ ] ] and =\lim_{t\to\infty}\mathbb{e}[z_1(t)^2]-\lim_{t\to\infty}\mathbb{e}[z_1(t)]^2=\\ & = \sum_{y=0}^n\left(c_{(2,0)}(y)+\mu_1(y)^2\right)p(y)-\left(\sum_{y=0}^n\mu_1(y)p(y ) \right)^2.\end{aligned}\ ] ] in case the distribution of is not finitely supported ( but still known analytically ) , one can similarly solve over a finite set of values for large enough to capture the bulk of the probability mass of .we should stress that the above system of linear equations is _ exact _, i.e. no closure has been employed .as shown in , the conditions for obtaining closed moment equations are different in the conditional and unconditional cases .the system studied here has non - closed unconditional moments , a feature that has so far hindered the analytical study of stochastic focusing .as can be seen , however , the conditional moment equations are closed .mass fluctuation kinetics is a popular moment closure technique that is used to approximate the evolution of means , variances and covariances in stochastic chemical kinetics .the approximate moment equations are derived by setting to zero the third - order cumulants ( equal to the third order central moments ) of all species . below ( figs .[ fig : mfk_means ] and [ fig : mkf_vars ] ) we show a comparison of the mfk approximation of mean and variance of with the exact solution based on conditional moments ( open loop system ) .we can clearly see that mfk greatly underestimates both the mean and the variance of the substrate ( notice the log scale on the y - axis ) , which proves that stochastic focusing can not be adequately studied with this approximation .in fact , all moment closure methods tried on the system failed .most likely , this happens due to the fact that in the presence of stochastic focusing the distributions of and become extremely skewed and consequently violate all commonly made regularity assumptions on which moment closure is typically based .here we present simulation results that explore how the behavior of the sfe network statistics changes under negative feedback .more specifically , we study the effect of feedback on the mean , variance cv and fano factor of substrate and product .this analysis helps to put in context our specific choice of feedback parameters used in the main text . and statistics generated by a scan over feedback parameters ( and ) under feedback from . as the strength of negative feedback increases ( i.e. and grow ) , both the mean and variance drop. 
however , the cv and fano factor behave differently : cv appears more sensitive to than , while the fano factor depends equally on both parameters .moreover , the two noise measures become minimal over different regions of the parameter space . as expected , the behavior of substrate fluctuations as parameters vary , is reflected in product fluctuations as well .the feedback parameter set used in the greatest part of the paper ( , ) is denoted by an asterisk . ] and statistics generated by a scan over feedback parameters ( and ) under feedback from .it is interesting to note that while the behavior of the and means is almost identical to the case of substrate feedback shown above , the noise in ( both in terms of cv and fano factor ) is significantly increased in the present case . on the other hand , noise in not seem decreased in comparison to the case of substrate feedback .in other words , and contrary to the substrate feedback scenario , the behavior of substrate fluctuations is not reflected in product fluctuations . this is perhaps due to a frequency shift in substrate fluctuations , that can no longer be transmitted to ( note that acts as low - pass filter for upstream fluctuations ) .the feedback parameter set used in the greatest part of the paper ( , ) is denoted by an asterisk . ]due to the presence of different time scales in the substrate and enzyme dynamics , achieving good accuracy in the calculation of is hard and thus solving the optimization problem of section 3.3 for obtaining the best feedback parametrization is a computationally intensive problem . in order to get some idea of the optimal solution in a computationally more tractable setting , we turned to the simple moment closure method devised in .this method , however , requires increasing order derivatives of the reaction rates in general , and of the feedback term in particular . for this purposeit is therefore preferable to work with a smooth approximation of the feedback term in the form of a hill function , results in the hill parameter space can then be transferred back into the piecewise linear form of the main manuscript through e.g. a nonlinear least - squares procedure .the parameter determines the asymptote of as and therefore only weakly affects the dynamics in a properly regulated system where large values of are avoided . to simplify the original problem, we therefore determined a suitable fixed value of and considered the reduced problem where all moments are now computed from the closed moment equations .the function defined was optimized very efficiently using the derivative - free nelder - mead simplex algorithm , and also evaluated on a grid in the feedback parameter space , as shown on fig .[ fig : energy_mom ] . the optimal values and were obtained for fixed at 160 . it can be seen that the objective function varies very little along the red curve ; however , intermediate values of and seem to be slightly better according to the moment system .the contours of the same objective function , computed with respect to the original stochastic dynamics , is shown on fig .[ fig : energy_ssa ] . as the overlay of the red curve from fig .[ fig : energy_mom ] indicates , this feature is not an artifact of moment closure , but is rather visible in the ssa - based evaluation of the function .the greatest difference from the moment closure result , is that the function now seems to get slightly smaller as increases . 
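A minimal sketch of the two numerical ingredients just described: a derivative-free Nelder-Mead search over the reduced Hill parameter space with the maximal feedback rate fixed, and a coarse grid evaluation of the same objective. The objective below is a placeholder standing in for the closed-moment cost of the paper; it merely rewards feedback that is fully on well below a threshold and fully off well above it.

```python
import numpy as np
from scipy.optimize import minimize

VMAX = 160.0   # fixed asymptote of the Hill feedback, as in the text

def hill(s, K, n):
    """Decreasing Hill-type feedback term."""
    return VMAX / (1.0 + (s / K) ** n)

def cost(theta):
    # Placeholder for the moment-closure objective: strong feedback below s=10,
    # negligible feedback above s=25, with a mild penalty on the Hill coefficient.
    K, n = theta
    return (hill(10.0, K, n) - VMAX) ** 2 + hill(25.0, K, n) ** 2 + 1e-4 * n ** 2

# Derivative-free Nelder-Mead search over (K, n), as described in the text.
res = minimize(cost, x0=[15.0, 4.0], method="Nelder-Mead")
print("optimal Hill parameters (K, n):", res.x)

# Evaluation on a grid of the (K, n) plane, from which contour plots such as the
# ones referred to above would be produced.
K_grid, n_grid = np.meshgrid(np.linspace(10, 30, 41), np.linspace(1, 25, 49))
cost_grid = np.vectorize(lambda K, n: cost((K, n)))(K_grid, n_grid)
print("grid minimum:", cost_grid.min())
```

In the actual analysis the cost is evaluated on the closed moment equations, and the optimal Hill curve can then be mapped back onto the piecewise-linear feedback of the main text by a nonlinear least-squares fit, as noted above.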
in this respect ,the moment closure result can serve as a good initial approximation of the optimal hill function parameters .the ssa result reproduces our observation made in fig .11 of the main text , as the optimal hill parametrization results in a step - like function , with very high . for the range of values tested here ,the optimal was found to be around 23 ( the upper limit of the search interval ) , while was around 18.5 .these results agree very well with the results from fig .11 , where the optimal gain was found to be equal to 30 ( again , the upper limit of the search interval ) and around 16.6 , which is very close to the `` knee '' of the hill curve with , and . ) and moment closure using moments up to order 4 .the red line traces the points along which is varies the least . ] .the red curve from fig .[ fig : energy_ssa ] is overlaid . ]to gain some intuition about the role of high gain in the robustness of feedback system , we consider here a very simple example shown on fig . [ fbloop ] .the system to be regulated consists of an amplifier a , which is simply model as a gain ; that is , when and , the _ output _ of a is connected to its _ input_ by , where is the gain of the amplifier .it is also possible that an unwanted signal , the _ disturbance _ , corrupts output of a , in which case .assume further that is very high ( ) but also not known precisely and even fluctuating in time .in this case , a given _ reference input _ will be translated into an output which inherits the uncertainty in the amplifier gain .consequently , the output of this so - called _ open - loop _ system ( obtained for ) can be severely affected by changes in and disturbance inputs .let us now consider the _ closed - loop _system , obtained for . in this case , the output is multiplied by the _ feedback gain _ ( which , contrary to , is assumed to be _ precisely _ known ) , subtracted from and fed back into a. this is typical case of _ negative feedback _ because the ( scaled ) output is subtracted from the input . we can now write the output as which implies that in this case , the closed - loop system gain from to has been reduced from to . if , this ratio is approximately equal to .in other words , the gain of the closed - loop system is now specified by the feedback gain , which is precisely known .the uncertainty in no longer influences the input - output relation , while the effect of the disturbance is also reduced by a factor of .
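The effect of high loop gain can be made concrete with a few lines of arithmetic. In the sketch below, A is the uncertain forward gain, k the known feedback gain, u the reference and d the disturbance, following the description above; the numerical values themselves are arbitrary.

```python
# Open- versus closed-loop amplifier from the discussion above.
def open_loop(u, A, d=0.0):
    return A * u + d

def closed_loop(u, A, k, d=0.0):
    return (A * u + d) / (1.0 + k * A)    # y = A(u - k*y) + d, solved for y

u, k, d = 1.0, 0.1, 5.0
for A in (1e4, 2e4, 0.5e4):               # the high gain A is poorly known / fluctuating
    print(f"A={A:8.0f}   open: {open_loop(u, A, d):10.1f}   "
          f"closed: {closed_loop(u, A, k, d):8.4f}   1/k = {1 / k}")

# The open-loop output tracks every change in A and passes d through
# unattenuated, whereas the closed-loop output stays pinned near 1/k = 10
# and sees the disturbance reduced by roughly a factor 1/(1 + kA).
```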
nature presents multiple intriguing examples of processes which proceed at high precision and regularity . this remarkable stability is frequently counter to modelers experience with the inherent stochasticity of chemical reactions in the regime of low copy numbers . moreover , the effects of noise and nonlinearities can lead to `` counter - intuitive '' behavior , as demonstrated for a basic enzymatic reaction scheme that can display _ stochastic focusing _ ( sf ) . under the assumption of rapid signal fluctuations , sf has been shown to convert a graded response into a threshold mechanism , thus attenuating the detrimental effects of signal noise . however , when the rapid fluctuation assumption is violated , this gain in sensitivity is generally obtained at the cost of very large product variance , and this unpredictable behavior may be one possible explanation of why , more than a decade after its introduction , sf has still not been observed in real biochemical systems . in this work we explore the noise properties of a simple enzymatic reaction mechanism with a small and fluctuating number of active enzymes that behaves as a high - gain , noisy amplifier due to sf caused by slow enzyme fluctuations . we then show that the inclusion of a plausible negative feedback mechanism turns the system from a noisy signal detector to a strong homeostatic mechanism by exchanging high gain with strong attenuation in output noise and robustness to parameter variations . moreover , we observe that the discrepancy between deterministic and stochastic descriptions of stochastically focused systems in the evolution of the means almost completely disappears , despite very low molecule counts and the additional nonlinearity due to feedback . the reaction mechanism considered here can provide a possible resolution to the apparent conflict between intrinsic noise and high precision in critical intracellular processes .
the evaluation of the complexity of finite sequences is key in many areas of science .for example , the notions of structure , simplicity and randomness are common currency in biological systems epitomized by a sequence of fundamental nature and utmost importance : the dna .nevertheless , researchers have for a long time avoided any practical use of the current accepted mathematical theory of randomness , mainly because it has been considered to be useless in practice . despite this belief ,related notions such as lossless uncompressibility tests have proven relative success , in areas such as sequence pattern detection and have motivated distance measures and classification methods in several areas ( see for a survey ) , to mention but two examples among many others of even more practical use .the method presented in this paper aims to provide sound directions to explore the feasibility and stability of the evaluation of the complexity of strings by means different to that of lossless compressibility , particularly useful for short strings .the authors known of only two similar attempts to compute the uncomputable , one related to the estimation of a chaitin omega number , and of another seminal related measure of complexity , bennett s logical depth .this paper provides an approximation to the output frequency distribution of all turing machines with 5 states and 2 symbols which in turn allow us to apply a central theorem in the theory of algorithmic complexity based in the notion of algorithmic probability ( also known as solomonoff s theory of inductive inference ) that relates frequency of production of a string and its kolmogorov complexity hence providing , upon application of the theorem , numerical estimations of kolmogorov complexity by a method different to lossless compression algorithms .a previous result using a simplified version of the method reported here soon found an application in the study of economic time series , but wider application was preempted by length and number of strings .here we significantly extend in various directions : ( 1 ) longer , and therefore a greater number by a factor of three orders of magnitude of strings are produced and thoroughly analyzed ; ( 2 ) in light of the previous result , the new calculation allowed us to compare frequency distributions of sets from considerable different sources and of varying sizes ( although the smaller is contained in the larger set , it is of negligible size in comparison)they could have been of different type , but they are not ( 3 ) we extend the method to sets of turing machines whose busy beaver has not yet been found by proposing an informed method for estimating a reasonably non - halting cutoff value based on theoretical and experimental considerations , thus ( 4 ) provide strong evidence that the estimation and scaling of the method is robust and much less dependent of turing machine sample size , fully quantified and reported in this paper .the results reported here , the data released with this paper and the online program in the form of a calculator , have now been used in a wider number of applications ranging from psychometrics to the theory of cellular automata , graph theory and complex networks . 
in sumthis paper provides a thorough description of the method , a complete statistical analysis of the _ coding theorem method _ and an online application for its use and exploitation .the calculation presented herein will remain the best possible estimation for a measure of a similar nature with the technology available to date , as an exponential increase of computing resources will improve the length and number of strings produced only linearly if the same standard formalism of turing machines used is followed .central to ait is the definition of algorithmic ( kolmogorov - chaitin or program - size ) complexity : where is a program that outputs running on a universal turing machine .a technical inconvenience of as a function taking to the length of the shortest program that produces is its uncomputability . in other words, there is no program which takes a string as input and produces the integer as output .this is usually considered a major problem , but one ought to expect a universal measure of complexity to have such a property .the measure was first conceived to define randomness and is today the accepted objective mathematical measure of complexity , among other reasons because it has been proven to be mathematically robust ( by virtue of the fact that several independent definitions converge to it ) . if the shortest program producing is larger than , the length of , then is considered random .one can approach using compression algorithms that detect regularities in order to compress data .the value of the compressibility method is that the compression of a string as an approximation to is a sufficient test of non - randomness .it was once believed that ait would prove useless for any real world applications , despite the beauty of its mathematical results ( e.g. a derivation of gdel s incompleteness theorem ) .this was thought to be due to uncomputability and to the fact that the theory s founding theorem ( the invariance theorem ) , left finite ( short ) strings unprotected against an additive constant determined by the arbitrary choice of programming language or universal turing machine ( upon which one evaluates the complexity of a string ) , and hence unstable and extremely sensitive to this choice .traditionally , the way to approach the algorithmic complexity of a string has been by using lossless compression algorithms .the result of a lossless compression algorithm is an upper bound of algorithmic complexity .however , short strings are not only difficult to compress in practice , the theory does not provide a satisfactory answer to all questions concerning them , such as the kolmogorov complexity of a single bit ( which the theory would say has maximal complexity because it can not be further compressed ) . to make sense of such things and close this theoretical gap we devised an alternative methodology to compressibility for approximating the complexity of short strings , hence a methodology applicable in many areas where short strings are often investigated ( e.g. 
in bioinformatics ) .this method has yet to be extended and fully deployed in real applications , and here we take a major step towards full implementation , providing details of the method as well as a thorough theoretical analysis .a fair compression algorithm is one that transforms a string into two components .the first of these is the compressed version while the other is the set of instructions for decompressing the string .both together account for the final length of the compressed version .thus the compressed string comes with its own decompression instructions .paradoxically , lossless compression algorithms are more stable the longer the string .in fact the invariance theorem guarantees that complexity values will only diverge by a constant ( e.g. the length of a compiler , a translation program between and ) and will converge at the limit . + * invariance theorem * : if and are two universal turing machines and and the algorithmic complexity of for and , there exists a constant such that : latexmath:[\[\label{invariance } hence the longer the string , the less important the constant or choice of programming language or universal turing machine .however , in practice can be arbitrarily large , thus having a very great impact on finite short strings . indeed , the use of data lossless compression algorithms as a method for approximating the kolmogorov complexity of a string is accurate in direct proportion to the length of the string .the algorithmic probability ( also known as levin s semi - measure ) of a string is a measure that describes the expected probability of a random program running on a universal ( prefix - free .for details see . ] ) turing machine producing .formally , levin s semi - measure defines the so - called universal distribution .here we propose to use as an alternative to the traditional use of compression algorithms to calculate by means of the following theorem . + * coding theorem * : there exists a constant such that : that is , if a string has many long descriptions it also has a short one .it beautifully connects frequency to complexity the frequency ( or probability ) of occurrence of a string with its algorithmic ( kolmogorov ) complexity .the coding theorem implies that one can calculate the kolmogorov complexity of a string from its frequency , simply rewriting the formula as : an important property of as a semi - measure is that it dominates any other effective semi - measure because there is a constant such that for all , hence called _ universal _ .* notation : * we denote by the class ( or space ) of all -state 2-symbol turing machines ( with the halting state not included among the states ) .+ in addressing the problem of approaching by running computer programs ( in this case deterministic turing machines ) one can use the known values of the so - called busy beaver functions as suggested by and used in .the busy beaver functions and can be defined as follows : + * busy beaver functions * ( rado ) : if is the number of ` 1s ' on the tape of a turing machine with states and symbols upon halting starting from a blank tape ( no input ) , then the busy beaver function .alternatively , if is the number of steps that a machine takes before halting from a blank tape , then .+ in other words , the busy beaver functions are the functions that return the longest written tape and longest runtime in a set of turing machines with states and symbols . and are noncomputable functions by reduction to the halting problem . 
in fact faster than any computable function can .nevertheless , exact values can be calculated for small and , and they are known for , among others , symbols and states .a program showing the evolution of all known busy beaver machines developed by one of this paper s authors is available online .this allows one to circumvent the problem of noncomputability for small turing machines of the size that produce short strings whose complexity is approximated by applying the algorithmic coding theorem ( see fig .[ flowchart ] ) .as is widely known , the halting problem for turing machines is the problem of deciding whether an arbitrary turing machine eventually halts on an arbitrary input .halting computations can be recognized by running them for the time they take to halt .the problem is to detect non - halting programs , programs about which one can not know in advance whether they will run forever or eventually halt .it is important to describe the turing machine formalism because numerical values of algorithmic probability for short strings will be provided under this chosen standard model of a turing machine .+ consider a turing machine with the binary alphabet and states and an additional halt state denoted by 0 ( as defined by rado in his original busy beaver paper ) .+ the machine runs on a -way unbounded tape . at each step : the machine s current `` state '' ; and the tape symbol the machine s head is scanning define each of the following : a unique symbol to write ( the machine can overwrite a on a , a on a , a on a , and a on a ) ; a direction to move in : ( left ) , ( right ) or ( none , when halting ) ; and a state to transition into ( which may be the same as the one it was in ) . the machine halts if and when it reaches the special halt state 0. there are turing machines with states and 2 symbols according to the formalism described above .the output string is taken from the number of contiguous cells on the tape the head of the halting -state machine has gone through .a machine produces a string upon halting .one can attempt to approximate ( see eq .3 ) by running every turing machine an particular enumeration , for example , a quasi - lexicographical ordering , from shorter to longer ( with number of states and 2 fixed symbols ) .it is clear that in this fashion once a machine produces for the first time , one can directly calculate an approximation of , because this is the length of the first turing machine in the enumeration of programs of increasing size that produces . 
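A minimal simulator for the formalism just described is straightforward to write. The sketch below follows the stated conventions (states 1..n plus halting state 0, two symbols, a two-way unbounded tape started blank, halting transitions that write a symbol but do not move, output taken from the cells visited by the head); the example transition table is hand-made for illustration and is not presented as one of the paper's enumerated machines.

```python
from collections import defaultdict

# A transition table maps (state, read symbol) -> (write symbol, move, next state),
# with move in {-1, +1} and move ignored on transitions into the halt state 0.

def run(ttable, max_steps, blank=0):
    """Run one machine from a blank tape; return the output string (the symbols on
    the contiguous cells visited by the head) if it halts within max_steps, or
    None otherwise."""
    tape = defaultdict(lambda: blank)
    pos, state = 0, 1
    lo = hi = 0                                    # extent of the cells visited by the head
    for _ in range(max_steps):
        write, move, nxt = ttable[(state, tape[pos])]
        tape[pos] = write
        if nxt == 0:                               # halting transition: write, do not move
            return "".join(str(tape[i]) for i in range(lo, hi + 1))
        pos += move
        lo, hi, state = min(lo, pos), max(hi, pos), nxt
    return None                                    # step budget exhausted (treated as non-halting)

# Hand-made 2-state example; 107 steps is the known 4-state busy beaver runtime
# quoted above, far more than any 2-state machine can need.
example = {
    (1, 0): (1, +1, 2), (1, 1): (1, -1, 2),
    (2, 0): (1, -1, 1), (2, 1): (1, 0, 0),
}
print(run(example, max_steps=107))    # -> '1111'
```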
but more important, one can apply the coding theorem to extract from directly from the output distribution of halting turing machines .let s formalize this by using the function as the function that assigns to every string produced in the quotient : ( number of times that a machine in produces ) / ( number of machines that halt in ) as defined in .more formally , + where is the turing machine with number ( and empty input ) that produces upon halting and is , in this case , the cardinality of the set .a variation of this formula closer to the definition of is given by : is strictly smaller than 1 for , because of the turing machines that never halt , just as it occurs for .however , for fixed and the sum of will always be 1 .we will use eq .[ d ] for practical reasons , because it makes the frequency values more readable ( most machines do nt halt , so those halting would have a tiny fraction with too many leading zeros after the decimal ) .moreover , the function is non - computable but it can be approximated from below , for example , by running small turing machines for which known values of the busy beaver problem are known .for example , for , the busy beaver function for maximum runtime , tells us that , so we know that a machine running on a blank tape will never halt if it hasnt halted after 107 steps , and so we can stop it manually . in what followswe describe the exact methodology . from now on , with a single parameterwill mean .we call this method the _ coding theorem method _ to approximate ( which we will denote by ) .approximations from the output distribution of turing machines with 2 symbols and states for which the busy beaver values are known were estimated before but for the same reason the method was not scalable beyond .the formula for the number of machines given a number of states is given by derived from the formalism described .there are 26559922791424 turing machines with 5 states . herewe describe how an optimal runtime based on theoretical and experimental grounds can be calculated to scale the method to larger sets of small turing machines . 
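As an illustration of the whole pipeline at toy scale, the 2-state space is small enough ((4n+2)^(2n) = 10^4 machines for n = 2) to enumerate exhaustively, build the output frequency distribution, and read off complexity estimates via the coding theorem. This sketch ignores the symmetry completions and filters described below and is of course not the 5-state computation of the paper.

```python
import itertools, math
from collections import Counter, defaultdict

def run(ttable, max_steps):
    """Compact version of the simulator sketched earlier."""
    tape, pos, state, lo, hi = defaultdict(int), 0, 1, 0, 0
    for _ in range(max_steps):
        write, move, nxt = ttable[(state, tape[pos])]
        tape[pos] = write
        if nxt == 0:
            return "".join(str(tape[i]) for i in range(lo, hi + 1))
        pos += move
        lo, hi, state = min(lo, pos), max(hi, pos), nxt
    return None

n = 2
moves = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in range(1, n + 1)]
entries = moves + [(0, 0, 0), (1, 0, 0)]            # 4n proper entries + 2 halting ones
cells = [(st, sym) for st in range(1, n + 1) for sym in (0, 1)]

counts, halting = Counter(), 0
for choice in itertools.product(entries, repeat=len(cells)):   # all (4n+2)^(2n) machines
    out = run(dict(zip(cells, choice)), max_steps=107)          # 107 steps = S(4), ample for n = 2
    if out is not None:
        halting += 1
        counts[out] += 1

# Frequency D(s) and the coding-theorem estimate K(s) ~ -log2 D(s).
for s, c in counts.most_common(8):
    print(f"{s:>6}  D(s) = {c / halting:.4f}   K(s) ~ {-math.log2(c / halting):.2f} bits")
```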
because there are a large enough number of machines to run even for a small number of machine states ( ) , applying the coding theorem provides a finer and increasingly stable evaluation of based on the frequency of production of a large number of turing machines , but the number of turing machines grows exponentially , and producing requires considerable computational resources .the busy beaver for turing machines with 4 states is known to be 107 steps , that is , any turing machine with 2 symbols and 4 states running longer than 107 steps will never halt .however , the exact number is not known for turing machines with 2 symbols and 5 states , although it is believed to be 47176870 , as there is a candidate machine that runs for this long and halts and no machine greater runtime has yet been found .so we decided to let the machines with 5 states run for 4.6 times the busy beaver value for 4-state turing machines ( for 107 steps ) , knowing that this would constitute a sample significant enough to capture the behavior of turing machines with 5 states .the chosen runtime was rounded to 500 steps , which was used to build the output frequency distribution for .the theoretical justification for the pertinence and significance of the chosen runtime is provided in the following sections .we did nt run all the turing machines with 5 states to produce because one can take advantage of symmetries and anticipate some of the behavior of the turing machines directly from their transition tables without actually running them ( this is impossible in general due to the halting problem ) .we avoided some trivial machines whose results we know without having to run them ( reduced enumeration ) .also , some non - halting machines were detected before consuming all the runtime ( filters ) .the following are the reductions utilized in order to reduce the number of total machines and therefore the computing time for the approximation of .the blank symbol is one of the 2 symbols ( 0 or 1 ) in the first run , while the other is used in the second run ( in order to avoid any asymmetries due to the choice of a single blank symbol ) . in other words, we considered two runs for each turing machine , one with 0 as the blank symbol ( the symbol with which the tape starts out and fills up ) , and an additional run with 1 as the blank symbol .this means that every machine was run twice .due to the symmetry of the computation , there is no real need to run each machine twice ; one can _ complete _ the string frequencies by assuming that each string produced produced by a turing machine has its complement produced by another symmetric machine with the same frequency , we then group and divide by symmetric groups .we used this technique from to .a more detailed explanation of how this is done is provided in using polya s counting theorem .we can exploit the right - left symmetry .we may , for example , run only those machines with an initial transition ( initial state and blank symbol ) moving to the right and to a state different from the initial one ( because an initial transition to the initial state produces a non - halting machine ) and the halting one ( these machines stop in just one step and produce ` 0 ' or ` 1 ' ) . 
for every string produced , we also count the reverse in the tables .we count the corresponding number of one - symbol strings and non - halting machines as well .if we consider only machines with a starting transition that moves to the right and goes to a state other than the starting and halting states , the number of machines is given by note that for the starting transition there are possibilities ( possible symbols to write and possible new states , as we exclude the starting and halting states ) .for the other transitions there are possibilities .we can make an enumeration from to .of course , this enumeration is not the same as the one we use to explore the whole space .the same number will not correspond to the same machine . in the whole spacethere are machines , so it is a considerable reduction .this reduction in means that in the reduced enumeration we have of the machines we had in the original enumeration .suppose that using the previous enumeration we run machines for with blank symbol . can be the total number of machines in the reduced space or a random number of machines in it ( such as we use to study the runtime distribution , as it is better described below ) . for the starting transition we considered only possibilities out of possible transitions in the whole space .then , we proceeded as follows to complete the strings produced by the runs .we avoided transitions moving left to a different state than the halting and starting ones .we completed such transitions by reversing all the strings found .non - halting machines were multiplied by .we also avoided transitions ( writing ` 0 ' or ` 1 ' ) from the initial to the halting state .we completed such transitions by including times ` 0 ' . including times ` 1 ' .finally , we avoided transitions from the initial state to itself ( movements symbols ) .we completed by including non - halting machines . with these completions, we obtained the output strings for the blank symbol . to complete for the blank symbol we took the complement to of each string produced and counted the non - halting machines twice .then , by running machines , we obtained a result representing , that for is .it is useful to avoid running machines that we can easily check that will not stop .these machines will consume the runtime without yielding an output .the reduction in the enumeration that we have shown reduces the number of machines to be generated .now we present some reductions that work after the machines are generated , in order to detect non - halting computations and skip running them .some of these were detected when filling the transition table , others at runtime .while we are filling the transition table , if a certain transition goes to the halting state , we can activate a flag .if after completing the transition table the flag is not activated , we know that the machine wo nt stop . in our reduced enumeration there are machines of this kind . 
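The machine counts quoted in this and the next passage can be reproduced directly from the counting argument: each of the 2n transition-table cells admits 4n+2 entries (4n proper moves plus two halting entries), the reduced enumeration fixes the start transition to one of 2(n-1) right-moving choices, and a machine is filtered out at table-filling time when none of its remaining entries points to the halt state. A short check for n = 5:

```python
# Machine counts for the (n,2) formalism; for n = 5 these reproduce the figures
# quoted in the surrounding text.
n = 5
total        = (4 * n + 2) ** (2 * n)                      # 26,559,922,791,424 machines in (5,2)
reduced      = 2 * (n - 1) * (4 * n + 2) ** (2 * n - 1)    # machines in the reduced enumeration
no_halt      = 2 * (n - 1) * (4 * n) ** (2 * n - 1)        # reduced machines with no halting transition
not_filtered = reduced - no_halt                           # 5,562,153,742,336 machines left to simulate

print(f"total (5,2) machines      : {total:,}")
print(f"reduced enumeration       : {reduced:,}")
print(f"no-halt filter            : {no_halt:,}  ({100 * no_halt / reduced:.2f}% of the reduced set)")
print(f"left to simulate          : {not_filtered:,}")
```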
in is machines .it represents 42.41% of the total number of machines .the number of machines in the reduced enumeration that are not filtered as non - halting when filling the transition table is 5562153742336 .that is 504.73 times the total number of machines that fully produce .there should be a great number of escapees , that is , machines that run infinitely in the same direction over the tape .some kinds are simple to check in the simulator .we can use a counter that indicates the number of consecutive not - previously - visited tape positions that the machines visits .if the counter exceeds the number of states , then we have found a loop that will repeat infinitely . to justify this ,let us ask you to suppose that at some stage the machine is visiting a certain tape - position for the first time , moving in a specific direction ( the direction that points toward new cells ) .if the machine continues moving in the same direction for steps , and thus reading blank symbols , then it has repeated some state in two transitions .as it is always reading ( not previously visited ) blank symbols , the machine has repeated the transition for twice , being the blank symbol .but the behavior is deterministic , so if the machine has used the transition for and after some steps in the same direction visiting blank cells , it has repeated the same transition , it will continue doing so forever , because it will always find the same symbols .there is another possible direction in which this filter may apply : if the symbol read is a blank one not previously visited , the shift is in the direction of new cells and there is no modification of state . in factthis would be deemed an escapee , because the machine runs for new positions over the tape .but it is an escapee that is especially simple to detect , in just one step and not .we call the machines detected by this simple filter `` short escapees '' , to distinguish them from other , more general escapees. we can detect cycles of period two .they are produced when in steps and the tape is identical and the machine is in the same state and the same position .when this is the case , the cycle will be repeated infinitely . to detect it, we have to anticipate the following transition that will apply in some cases . in a cycle of period two, the machine can not change any symbol on the tape , because if it did , the tape would be different after two steps .then the filter would be activated when there is a transition that does not change the tape , for instance where is some direction ( left , right ) and the head is at position on tape , which is to say , reading the symbol ] is \}\to\{s , t[i+d],-d\}\ ] ] we calculated with and without all the filters as suggested in . running without reducing the number or detecting non - halting machines took 952 minutes .running the reduced enumeration with non - halting detectors took 226 minutes .we filtered the following non - halting machines : = 0.13 cm [ cols="<,>",options="header " , ] an important question is how robust is , that is how sensitive it is to . 
we know that the invariance theorem guarantees that the values converge in the long term , but the invariance theorem tells nothing about the rate of convergence .we have shown that respects the order of except for very few and minor value discrepancies concerning the least frequent strings ( and therefore the most unstable given the few machines generating them ) .this is not obvious despite the fact that all turing machines with states in are included in the space of machines( that is , the machines that never reach one of the states ) , because the number of machines in overcomes by far the number of machines in , and a completely different result could have been then produced . however , the agreement between and seems to be similarly high among , and despite , the few cases in hand to compare with .the only way for this behaviour to radically change for is if for some , starts diverging in ranks from on before starting to converge again ( by the invariance theorem ) . if one does not have any reason to believe in such a change of behavior , the rate of rank convergence of is close to optimal very soon , even for the relatively small " sets of turing machines for small .one may ask how robust the complexity values and classifications may be in the face of changes in computational formalism ( e.g. turing machines with several tapes , and all possible variations ) .we have shown that radical changes to the computing model produce reasonable ( and correlated with different degrees of confidence ) ranking distributions of complexity values ( using even completely different computing models , such as unidimensional deterministic cellular automata and post tag systems ) .we have also calculated the maximum differences between the kolmogorov complexity evaluations of the strings occurring in every 2 distributions and for .this provides estimations for the constant in the invariance theorem ( eq . [ invariance ] ) determining the maximum difference in bits among all the strings evaluated with one or another distribution , hence shedding light on the robustness of the evaluations under this procedure . the smaller the values of the more stable our method. the values of these bounding constants ( in bits ) among the different numerical evaluations of using for after application of the coding theorem ( eq . [ codingeq ] ) are : + + + where means evaluated using the output frequency distribution after application of the coding theorem ( eq . 
[ codingeq ] ) for ( is a trivial non interesting case ) and where every value of is calculated by quartiles ( separated by semicolons ) , that is , the calculation of among all the strings in the 2 compared distributions , then among the top 3/4 , then the top half and finally the top quarter by rank .notice that the estimation of between and , and and remained almost the same among all strings occurring in both , at about 4 bits .this means one could write a compiler " ( or translator ) among the two distributions for all their occurring strings of size only 4 bits providing one or the other complexity value for based on one or the other distribution .the differences are considerably smaller for more stable strings ( towards the top of the distributions ) .one may think that given that the strings with their occurrences in necessarily contain those in for all ( because the space of all turing machines with an additional state always contain the computations of the turing machines will less states ) , the agreement should be expected .however , the contribution of to contributes with the number of strings in .for example , contributes only 1832 strings to the 99608 produced in ( that is less than 2% ) .all in all , the largest difference found between and is only of 5 bits of among all the strings occurring both in and ( 1832 strings ) , where the values of in are between 2.285 and 29.9 .we have put forward a method based on algorithmic probability that produces frequency distributions based on the production of strings using a standard ( rado s ) model of turing machines generally used for the busy beaver problem .the distributions show very small variations , being the result of an operation that makes incremental changes based on a very large number of calculations having as consequence the production of stable numerical approximations of kolmogorov complexity for short strings for which error estimations of from the invariance theorem were also estimated .any substantial improvement on , for example , by approximation of a for , is unlikely to happen with the current technology as the number of turing machines grows exponentially in the number of states .however , we have shown here based both on theoretical and experimental grounds that one can choose informed runtimes significantly smaller than that of the busy beaver bound and capture most of the output determining the output frequency distribution .an increase of computational power by , say , one order of magnitude will only deliver a linear improvement on .the experimental method presented is computationally expensive , but it does not need to be executed more but once for a set of ( short ) strings . as a resultthis can now be considered an alternative to lossless compression as a complementary technique for approximating kolmogorov complexity .an _ online algorithmic complexity calculator _ ( oacc ) implementing this technique and releasing the data for public use has been made available at http://www.complexitycalculator.com .the data produced for this paper has already been used in connection to graph theory and complex networks , showing , for example , that it produces better approximations of kolmogorov complexity of small graphs ( by comparing it to their duals ) than lossless compressibility . 
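The practical limitation of compression-based estimates on short inputs, which is what makes the present method complementary rather than redundant, is easy to see with any off-the-shelf lossless compressor; zlib is used below purely as a convenient stand-in.

```python
import zlib

# The compressor's fixed overhead dominates for short inputs, so even trivially
# regular short strings come out "incompressible"; only for long strings does the
# compressed size become informative about regularity.
for s in ["0", "01", "0101010101", "01" * 500, "0" * 1000]:
    raw = s.encode()
    comp = zlib.compress(raw, level=9)
    print(f"raw length {len(raw):5d}  ->  compressed length {len(comp):4d}")
```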
In it is also shown how the method can be used to classify images and space-time diagrams of dynamical systems, where its results are also compared to the approximations obtained using compression algorithms, with which they show spectacular agreement. In , it is used to investigate the ratios of complexity in rule spaces of cellular automata of increasing size, supported by results from block entropy and lossless compressibility. In , it is also used as a tool to assess subjective randomness in the context of psychometrics. Finally, in , the method is used in numerical approximations to another seminal measure of complexity (Bennett's logical depth), where it is also shown to be compatible with a calculation of strict (integer-value) program-size complexity as measured by an alternative means (i.e. other than compression). The procedure promises to be a sound alternative, bringing theory and practice into alignment and constituting evidence that confirms the possible real-world applicability of Levin's distribution and Solomonoff's universal induction (hence validating the theory itself, which has been subject to criticism largely on grounds of simplicity bias and inapplicability). As Gregory Chaitin has pointed out when commenting on this very method of ours:

_The theory of algorithmic complexity is of course now widely accepted, but was initially rejected by many because of the fact that algorithmic complexity depends on the choice of universal Turing machine and short binary sequences can not be usefully discussed from the perspective of algorithmic complexity.
discovered employing [t]his empirical, experimental approach, the fact that most reasonable choices of formalisms for describing short sequences of bits give consistent measures of algorithmic complexity! So the dreaded theoretical hole in the foundations of algorithmic complexity turns out, in practice, not to be as serious as was previously assumed. [Hence, of this approach] constituting a marked turn in the field of algorithmic complexity from deep theory to practical applications._

This also refers to the fact that we have found an important agreement in distribution, and therefore of estimations of Kolmogorov complexity upon application of the algorithmic coding theorem, with other abstract computing formalisms such as one-dimensional cellular automata and Post's tag systems. In this paper we have provided strong evidence that the estimation and scaling (albeit limited by computational power) of the method is robust and much less dependent on formalism and sample size than what originally could have been anticipated by the invariance theorem.

brady, the determination of the value of rado's noncomputable function for four-state turing machines, _ mathematics of computation 40 _ (162): 647-665, 1983. calude, _ information and randomness _, springer, 2002. calude and m.a. stay, most programs stop quickly or never halt, _ advances in applied mathematics _, 40, 295-308, 2008. calude, m.j. dinneen, and c.-k. shu, computing a glimpse of randomness, _ exper. _, 11, 361-370, 2002. chaitin, on the length of programs for computing finite binary sequences: statistical considerations, _ journal of the acm _, 16(1): 145-159, 1969. chaitin, algorithmic information theory, _ ibm journal of r&d _, 21, no. 4, pp.
350359 , 1977 .chaitin et al .report of h.zenils phd ( computer science ) thesis , universit de lille 1 , france , 2011 .http://www.mathrix.org/zenil/report.pdf g.j ._ from philosophy to program size , _ 8th .estonian winter school in computer science , institute of cybernetics , tallinn , 2003 .r. cilibrasi , p. vitanyi , clustering by compression , _ ieee transactions on information theory , _ 51 , 4 , 15231545 , 2005 .t.m . cover and j.a .thomas , _ information theory , _ j. wiley and sons , 2006 .delahaye , h. zenil , towards a stable definition of kolmogorov - chaitin complexity , arxiv:0804.3459 , 2007 .delahaye and h. zenil , on the kolmogorov - chaitin complexity for short sequences . in c. calude ( ed . ) ,_ randomness and complexity : from leibniz to chaitin _, world scientific , 2007 .delahaye and h. zenil , numerical evaluation of the complexity of short strings : a glance into the innermost structure of algorithmic randomness , _ applied math . and comp .r. downey & d.r .hirschfeldt , _ algorithmic randomness and complexity _ , springer , 2010. n. gauvrit , h. zenil , j .-delahaye and f. soler - toscano , algorithmic complexity for short binary strings applied to psychology : a primer , _ behavior research methods _( in press ) doi : 10.3758/s13428 - 013 - 0416 - 0 w. kircher , m. li , and p. vitanyi , the miraculous universal distribution , _ the mathematical intelligencer , _ 19:4 , 715 , 1997 .kolmogorov , three approaches to the quantitative definition of information , _ problems of information and transmission _ ,1(1):17 , 1965 . l. levin , laws of information conservation ( non - growth ) and aspects of the foundation of probability theory , _ problems in form . transmission _ 10 . 206210 , 1974 . m. li , p. vitnyi , _ an introduction to kolmogorov complexity and its applications , _ springer , 2008 .l. ma , o. brandouy , j .-delahaye and h. zenil , algorithmic complexity of financial motions , _ research in international business and finance , _ pp . 336347 , 2014 .( published online in 2012 ) . . rivals , m. dauchet , j .-delahaye , o. delgrange , compression and genetic sequence analysis . , _ biochimie _ , 78 , pp 315322 , 1996 . t. rad , on non - computable functions , _ bell system technical journal , _ vol .3 , pp . 877884 , 1962 . f. soler - toscano , h. zenil , j .-delahaye and n. gauvrit , correspondence and independence of numerical evaluations of algorithmic information measures , computability , vol .2 , pp . 125140 , 2013 .solomonoff , a formal theory of inductive inference : parts 1 and 2 . _ information and control _ , 7:122 and 224254 , 1964 .h. zenil , f. soler - toscano , j .-delahaye and n. gauvrit , two - dimensional kolmogorov complexity and validation of the coding theorem method by compressibility , preprint at arxiv:1212.6745 [ cs.cc ] .( _ awaiting journal decision _ ) h. zenil , compression - based investigation of the dynamical properties of cellular automata and other systems , _ complex systems . _ 19(1 ) , pages 128 , 2010 .h. zenil , j .-delahaye and c. gaucherel , image information content characterization and classification by physical complexity , _ complexity _ , vol .173 , pages 2642 , 2012 . h. zenil and j - p .delahaye , on the algorithmic nature of the world , in g. dodig - crnkovic and m. burgin ( eds ) , _ information and computation _ , world scientific publishing company , 2010 .h. 
zenil, une approche expérimentale à la théorie algorithmique de la complexité, dissertation in fulfillment of the degree of doctor in computer science, university of lille 1, 2011. h. zenil, f. soler-toscano, k. dingle and a. louis, correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks, _ physica a: statistical mechanics and its applications _ (in press) doi: 10.1016/j.physa.2014.02.060. h. zenil, `` busy beaver '' from the wolfram demonstrations project, http://demonstrations.wolfram.com/busybeaver/ h. zenil and e. villarreal-zapata, asymptotic behaviour and ratios of complexity in cellular automata rule spaces, _ international journal of bifurcation and chaos _ vol. 13, no. 9, 2013. h. zenil and j.-p. delahaye, an algorithmic information-theoretic approach to the behaviour of financial markets, _ journal of economic surveys _, vol. 25-3, pp. 463, 2011
drawing on various notions from theoretical computer science , we present a novel numerical approach , motivated by the notion of algorithmic probability , to the problem of approximating the kolmogorov - chaitin complexity of short strings . the method is an alternative to the traditional lossless compression algorithms , which it may complement , the two being serviceable for different string lengths . we provide a thorough analysis for all binary strings of length and for most strings of length by running all turing machines with 5 states and 2 symbols ( with reduction techniques ) using the most standard formalism of turing machines , used in for example the busy beaver problem . we address the question of stability and error estimation , the sensitivity of the continued application of the method for wider coverage and better accuracy , and provide statistical evidence suggesting robustness . as with compression algorithms , this work promises to deliver a range of applications , and to provide insight into the question of complexity calculation of finite ( and short ) strings . + additional material can be found at the _ algorithmic nature group _ website at + http://www.algorithmicnature.org . an online algorithmic complexity calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com . + keywords : algorithmic randomness ; algorithmic probability ; levin s universal distribution ; solomonoff induction ; algorithmic coding theorem ; invariance theorem ; busy beaver functions ; small turing machines .
over the past decade significant progress has been made in constructing space - time codes that achieve the optimal rate - diversity trade - off for _ flat - fading _ channels when there are transmit alphabet constraints .far less attention has been given to space - time code design and analysis for fading channels with memory , _i.e. , _ inter - symbol interference ( isi ) channels which are encountered in broadband multiple antenna communications . there have been several constructions of space - time codes for fading isi channels using multi - carrier techniques ( see for example and references therein ) .however , since these inherently increase the transmit alphabet size , and the right framework to study such constructions is through the _ diversity - multiplexing _ trade - off .we examined diversity embedded codes for isi channels in , by considering the diversity - multiplexing trade - off .as in space - time code design for flat - fading channels , it is natural to ask for a characterization of the rate - diversity trade - off for isi channels with transmit alphabet constraints .therefore this imposes a maximal rate of bits and we normalize the rate by and state the rate in terms of a number in ] is the space - time coded transmission sequence at time with transmit power constraint and is assumed to be additive white ( temporally and spatially ) gaussian noise with variance .the matrix consists of fading coefficients which are i.i.d . and fixed for the duration of the block length ( ) .consider a transmission scheme in which we transmit over a period and send ( fixed ) known symbols .] for the last transmissions . for the period of communication we can equivalently write thereceived data as , & \ldots & { \bf y}[t-1 ] \end{array } \right ] } _ { \mathbf{y } } & = \underbrace { \left [ \begin{array}{ccc } { \bf h}_0 & \ldots & { \bf h}_{\nu } \end{array } \right ] } _ { \mathbf{h } } \underbrace { \left [ \begin{array}{ccccccc } { \bf x}[0 ] & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] & 0 & \ldots & 0 \\ 0 & { \bf x}[0 ] & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] & 0 & 0 \\ \ldots & & \vdots & \ddots & . & . & \vdots\\ 0 & \ldots & 0 & { \bf x}[0 ] & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] \end{array } \right ] } _ { \mathbf{x } } + { \bf z } \end{aligned}\ ] ] _ i.e. _ , where , , , .notice that the structure in ( [ eqn : model1 ] ) is different from the flat - fading case , since the channel imposes a toeplitz structure on the equivalent space - time codewords given in ( [ eqn : model1])-([eqn : model2 ] ) .this structure makes the design of space - time codes different than in the flat - fading case . 
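a small numpy sketch of the structure just described may help : it stacks the per - antenna blocks x[0] , ... , x[t-nu-1] ( with nu trailing blocks of known zeros ) into the banded block - toeplitz codeword matrix and forms the received block as hx + z . the antenna counts , block length , bpsk entries and the exact indexing convention of the received block are illustrative assumptions , not values taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper): Mt transmit / Mr receive
# antennas, an ISI channel with nu+1 taps, and a block of T channel uses.
Mt, Mr, nu, T = 2, 2, 2, 10

# Space-time code block: Mt x T, with the last nu columns forced to zero
# (the known symbols that terminate the ISI response within the block).
x = rng.choice([-1.0, 1.0], size=(Mt, T))      # e.g. BPSK entries
x[:, T - nu:] = 0.0

def toeplitz_codeword(x, nu):
    """Stack nu+1 right-shifted copies of x into the block-Toeplitz matrix X."""
    Mt, T = x.shape
    X = np.zeros(((nu + 1) * Mt, T))
    for l in range(nu + 1):
        X[l * Mt:(l + 1) * Mt, l:] = x[:, :T - l]   # block row l delayed by l uses
    return X

X = toeplitz_codeword(x, nu)

# Channel: H = [H_0 ... H_nu] with i.i.d. Gaussian taps (a real-valued stand-in
# for Rayleigh fading), plus white Gaussian noise.
H = rng.normal(size=(Mr, (nu + 1) * Mt)) / np.sqrt(2)
Z = 0.1 * rng.normal(size=(Mr, T))
Y = H @ X + Z                                   # received block, one convention

print("codeword matrix X:", X.shape, " received Y:", Y.shape)
print("rank of X:", np.linalg.matrix_rank(X), "(at most (nu+1) times the rank of x)")
```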
for reference ,the space - time codeword is completely determined by the matrix given by , & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] & 0 & \ldots & 0 \end{array } \right ] \end{aligned}\ ] ] a scheme with diversity order has an error probability at high snr behaving as .more formally , [ defn : div ] a coding scheme which has an average error probability as a function of that behaves as is said to have a diversity order of .the fact that the diversity order of a space - time code is determined by the rank of the codeword difference matrix is well known .therefore , for flat - fading channels , it has been shown that the diversity order achieved by a space - time code is given by where are the space - time codewords .clearly the analysis in can be easily extended to fading isi channels , and we can write where are matrices with structure given in ( [ eqn : model1 ] ) .it is easy to see from the structure of in ( [ eqn : model1 ] ) that the rank of the matrix is _ at most _ times the rank of the matrix ( see ( [ eq : x1def ] ) ) , _ i.e. _ , the codebook structure proposed in takes two information streams and outputs the transmitted sequence .the objective is to ensure that each information stream gets the designed rate and diversity levels .let denote the message set from the first information stream and denote that from the second information stream .then analogous to definition [ defn : div ] , we can write the diversity order for the messages as , [ [ design - criteria - for - fading - isi - channels ] ] design criteria for fading isi channels : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the space - time codeword for fading isi channels have the structure given in ( [ eqn : model1 ] ) . to translate this to the diversity embedded case , we annotate it with given messages , as . clearly we can then translate the code design criterion from ( [ eq : divisi ] ) to diversity embedded codes for isi channels as , in an identical manner , we can show for the message set , we need the following to hold . as one can easily see , these are simple generalizations of the diversity - embedded code design criteria developed in to the fading isi case . for a given diversity order , it is natural to ask for upper bounds on achievable rate . for a flat rayleigh fading channel ,this has been examined in where the following result was obtained . [thm : ratedivtsc ] ( ) given a constellation of size and a system with diversity order , then the rate that can be achieved is given by in symbols per transmission , _i.e. , _ the rate is bits per transmission .just as theorem [ thm : ratedivtsc ] shows the trade - off between achieving high - rate and high - diversity given a fixed transmit alphabet constraint for a flat fading channel , there also exists a trade - off between achievable rate and diversity for frequency selective channels , and we aim to characterize this trade - offfold increase in the diversity order . ]. a corollary will be an upper bound on the performance of diversity embedded codes for isi channels .this can be seen by observing that we can easily extend the proof in to the case where we have the toeplitz structure as given in ( [ eqn : model1 ] ) .note that the diversity order of the codes for fading isi channel is given by the rank of the corresponding ( toeplitz ) codeword difference matrix . 
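the rank criterion above can be evaluated numerically ; the sketch below enumerates a small toy bpsk codebook ( not the paper's construction ) , forms the toeplitz - expanded difference matrices , and reports the minimum rank for the flat - fading and isi cases , illustrating that the isi rank is at most ( nu+1 ) times the flat - fading rank .

```python
import numpy as np
from itertools import product

Mt, nu, T = 2, 1, 6            # toy sizes, chosen only for this illustration

def toeplitz_expand(x, nu):
    """Map an Mt x T codeword (last nu columns zero) to its (nu+1)Mt x T ISI form."""
    Mt, T = x.shape
    X = np.zeros(((nu + 1) * Mt, T))
    for l in range(nu + 1):
        X[l * Mt:(l + 1) * Mt, l:] = x[:, :T - l]
    return X

def codeword(bits):
    """Toy mapping of information bits to a BPSK space-time codeword; real
    rank-distance codes are built algebraically, this only feeds the criterion."""
    x = np.zeros((Mt, T))
    symbols = np.array([1.0 if b else -1.0 for b in bits])
    x[0, :T - nu - 1] = symbols
    x[1, :T - nu - 1] = symbols[::-1]
    return x

codebook = [codeword(bits) for bits in product([0, 1], repeat=T - nu - 1)]

# Diversity is governed by the minimum rank of codeword difference matrices.
flat_rank = min(np.linalg.matrix_rank(a - b)
                for i, a in enumerate(codebook) for b in codebook[i + 1:])
isi_rank = min(np.linalg.matrix_rank(toeplitz_expand(a - b, nu))
               for i, a in enumerate(codebook) for b in codebook[i + 1:])

print("min rank (flat fading):", flat_rank)
print("min rank (ISI, Toeplitz):", isi_rank,
      " bound: (nu+1)*flat =", (nu + 1) * flat_rank)
```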
since this rank is upper boundedas seen in ( [ upper_bound ] ) , we see that we immediately obtain a trivial upper bound for the rate - diversity trade - off for th fading isi case as follows .[ lem : ratedivisiuppbnd ] if we use a constellation of size and the diversity order of the system is , then the rate in symbols per transmission that can be achieved is upper bounded as note that in theorem [ thm : rdisi ] , we establish a corresponding lower bound that asymptotically ( in block size ) matches this upper bound .note that due to the zero padding structure for isi channels , the effective rate is going to be smaller than the rate of space - time code . since we do not utilize transmissions over a block of transmissions for each of the antennas we can only hope for a rate asymptotically in transmission block size .let be a -level partition where is a refinement of partition .we view this as a rooted tree , where the root is the entire signal constellation and the vertices at level are the subsets that constitute the partition . in this paperwe consider only binary partitions , and therefore subsets of partition can be labeled by binary strings , which specify the path from the root to the specified vertex .signal points in qam constellations are drawn from some realization of the integer lattice .we focus on the particular realization shown in figure [ fig : binpart ] , where the integer lattice has been scaled by ] . given this, we can represent the -qam constellation as , _i.e. , _ the coset representatives of in .the lattice can also be written as the set of gaussian integers =\{a + b i : a , b\in{\mathbb z}\} ] and the quotient /(1-\xi){\bf x } ] which are mapped from the inputs bits using a structure given in ( [ eq : ffdivembmlc ] ) .therefore , given that we transmit the sequence shown in ( [ eq : xisidef ] ) , & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] & 0 & \ldots & 0 \end{array } \right ] , \end{aligned}\ ] ] we need a mapping from a binary matrix as in ( [ eq : ffdivembmlc ] ) .for a constellation of size , we do this by taking message sets and mapping them to a codeword with the structure given in ( [ eq : xisidef ] ) as follows , \stackrel{f_2}{\longrightarrow } { \mbox{ } } ^{(1 ) } = \left [ \begin{array}{ccccccc } { \bf x}[0 ] & { \bf x}[1 ] & \ldots & { \bf x}[t-\nu-1 ] & 0 & \ldots & 0 \end{array } \right ] \end{aligned}\ ] ] where the entry of is given by _ i.e. , _ binary string . since the mapping is just the set - partitioning mapping specified in section [ subsec : setpart ] , we need the last columns of to be _ given constants _ for _ all _ choices of the message sets .that is , we need the following structure for the matrix , & { \bf k}[1 ] & \ldots & { \bf k}[t-\nu-1 ] & \mathbf{0 } & \ldots & \mathbf{0 } \end{array } \right ] , \label{eq : kisidef}\end{aligned}\ ] ] where , as before , \} ] .we define a mapping by , & { \bf c}[1 ] & \ldots & { \bf c}[t-\nu-1 ] & \mathbf{0 } & \ldots & \mathbf{0 } \\ \mathbf{0 } & { \bf c}[0 ] & { \bf c}[1 ] & \ldots & { \bf c}[t-\nu-1 ] & \mathbf{0 } & \mathbf{0 } \\ \ldots & & \vdots & \ddots & . & . 
&\vdots\\ \mathbf{0 } & \ldots & \mathbf{0 } & { \bf c}[\mathbf{0 } ] & { \bf c}[1 ] & \ldots & { \bf c}[t-\nu-1 ] \end{array } \right]\end{aligned}\ ] ] [ suitable_isi ] define to be the set of binary matrices of the form given in ( [ eq : bdef ] ) if for some fixed they satisfy the following properties for .* for any distinct pair of matrices the rank of ] .the second block multiplies the input vector by the generator matrix , with special structure which we define in the following subsection , and generates a vector of polynomials .the final block then maps this vector to a binary matrix .we define the set to be the set of all output matrices for all possible inputs on the stream .note that these sets satisfy the properties in definition [ suitable_isi_trellis ] .explicit construction of full diversity maximum rate binary convolutional codes was first shown in .this was extended for general points on the rate diversity tradeoff for flat fading channels in .we will give constructions for such sets of binary matrices for isi channels in this section .consider the construction for a particular layer above .we will see the construction of rate symbols per transmission , and rank distance of binary codes for transmission over the isi channel .represent the generator matrix or transfer function matrix for this code by an generator matrix given by , \end{aligned}\ ] ] denoting we choose the input message polynomial is represented by the vector of message polynomial ^t\end{aligned}\ ] ] where ] .then from ( [ eqn : grepr_trellis ] ) with we have , \end{aligned}\ ] ] the proof now that the left null space of over is of dimension at most is the same as the proof of theorem [ rate2 ] by choosing such that , therefore , given the result of theorem [ rate2 ] , which is proved in section [ subsec : general ] , we can prove the rank guarantees of the convolutional codes .in this section we will give background needed for construction of binary codes with properties given in definition [ suitable_isi ] .we start in section [ subsec : codestr ] with a representation of in terms of polynomials over which will be useful in proving the construction of binary codes . in section [ subsec : def ] we will list some definitions which we will use in proving rank guarantees in section [ sec : rank ] .note that these definitions are not required for constructing , _i.e. , _ maximal rank sets , for which the proof is much simpler as seen in section [ subsec : maximal ] .finally in section [ subsec : rate ] we will show that , where .the rank properties of are given in section [ sec : rank ] .given a rate , we define the linearized polynomial where . to develop the binary matrices with structure given in ( [ eq : bdef ] ), we define ^t .\end{aligned}\ ] ] where , and is a primitive element of .let and be the representations of and in the basis respectively _i.e. 
_ , .we obtain a matrix representation of as , ^t .\end{aligned}\ ] ] now , in order to get the structure required in ( [ eq : bdef ] ) , we need to study the requirements of so that the last elements in are for all the rows .note that the row of is given by the binary expansion of in terms of the basis , where is a primitive element of .the coefficients in this basis expansion can be obtained using the trace operator described below for completeness .consider an extension field of the base field .if is a primitive element of then form a basis of over and any element can be uniquely represented in the form , to solve for the coefficients we will use the trace function and trace dual bases . note that for any element the trace of the element relative to the base field is defined as , given that the trace function satisfies the following properties , * .* . * , if .also given the basis the corresponding trace dual basis is defined to be the unique set of elements which satisfy the following relation for , the fact that the trace dual basis exists and is unique can be found in standard references such as . therefore given , we can find by using the properties of the trace function and noting that , where the last equality follows from the definition of the trace dual basis . therefore binary matrix given in ( [ eq : bdef ] ) can be represented in terms of the set defined as associate to the codeword vector given by , ^t\end{aligned}\ ] ] associate with every such codeword the codeword matrix given by the representation of each element of in the basis .since we know that the last elements in are for all the rows .therefore we can see that is a cyclic shift by positions of .hence , for we can write , ^t,\end{aligned}\ ] ] where represents the matrix obtained by a cyclic shift of all the rows of the matrix by positions . for transmission over an isi channel , as seen in section [ subsec : code_isi ] , it can be shown from equation ( [ eqn : model1 ] ) that the effective binary transmitted codeword matrix for a particular is of the form ^t\end{aligned}\ ] ] clearly we see that .we will show in [ subsec : general ] that indeed for that , _. we will need the following definitions in the construction of the basis vectors of the null space of . 1 .we define a set which will be used extensively in the proof in section [ sec : rank ] as , 2 .given a binary vector define as , } _ \mathbf{g } \end{aligned}\ ] ] note that the mapping is a one - to - one mapping between and , due to the linear independence of .3 . for a given fixed define such that , 4 . motivated by the mapping in ( [ eqn : defomega ] ) , for each we will use the following representation : \\ \nonumber g_k^{(i ) } & = & \sum_{j=0}^{\nu}\delta_{k , j}^{(i ) } \alpha^j \quad { \rm where } \,\ , \delta_{k , j}^{(i)}\in { \mathbb{f}_{{2}}}\end{aligned}\ ] ] 5 . for an element given by , define 6 . for each define , 7 . for each define a function by , \end{aligned}\ ] ] 8 . 
given a set of elements define , note that it then directly follows that , using the polynomial representation given in section [ subsec : codestr ] , we can give a lower bound on the rate as follows .[ theorem_rate ] consider then a lower bound to the cardinality of the set is given by or lower bound to effective rate is , .let be the mapping , ^t \mapsto { { \rm tr}_{{2}^t/{2}}}(\theta_i f(\beta_j)),\end{aligned}\ ] ] for some .this is homomorphism of the -vector space into .the cardinality of the set is given by , note that the range space of is the range of the trace function , _i.e. , _ . noting that since and the rank of the equivalent matrix transformation of ^t ] and , then for some , but since or . ] now if satisfy [ prop4 ] then we are done , otherwise we again use the lemma [ lemma : tilde_operation ] with as the input vectors .repeat this process until satisfy the properties [ prop1 ] , [ prop2 ] , [ prop3 ] and [ prop4 ] .this process has to terminate since we know that and hence is finite . ' '' '' note that from property [ prop2 ] the elements are such that are linearly independent only over .the following lemma shows that as long as this is sufficient to guarantee the independence of over as well .consider elements such that are linearly independent over . if the size of the extension field is such that then these vectors are linearly independent over as well .clearly otherwise the property [ prop2 ] in the theorem [ theorem : basis ] will be violated .define , ^t\end{aligned}\ ] ] and ^t\end{aligned}\ ] ] by the linear independence of we conclude that has full rank over . therefore , there exist linearly independent columns over in .select these columns and form the matrix which is of rank .therefore as .select these same columns in the matrix and form the matrix .let us look at the determinant of .note that since , since we see the linear independence of .moreover , note that since from above and therefore we conclude that .hence the vectors are linearly independent over . ' '' '' in this section we will prove the required rank guarantees for with and therefore show that is given by this set .we state the following lemma required in the proof of the rank guarantees and prove it in the appendix .[ lemma : detpnonzero ] consider a matrix defined as , \left [ \begin{array}{cccc } 1 & \ldots & 1 & 1 \\ \xi^{2^{r-1 } } & \ldots & \xi^{2 } & \xi \\ ( \xi^ { 2 } ) ^{2^{r-1 } } & \ldots & ( \xi^{2})^2 & \xi^{2 } \\ \vdots & & \vdots & \\ ( \xi^{(m_t-1 ) } ) ^{2^{r-1 } } & \ldots & ( \xi^{(m_t-1)})^2 & \xi^{(m_t-1 ) } \\ \end{array } \right]\end{aligned}\ ] ] where and the vectors are linearly independent over .if , then .let as in ( [ eq : fdef ] ) and .then for defined in ( [ eq : sdef ] ) , and over the binary field .[ rate2 ] the rate bound is directly from theorem [ theorem_rate ] .if has rank distance then there exists a vector for some such that the corresponding binary matrix has binary rank equal to ( as the code is linear ) .equivalently there exists some for which there exists a binary vector space of dimension such that for every , just as we saw in ( [ eq : nullspufsmallufeq ] ) , we have note that the size of is . 
rewriting the abovewe have that and , } _ \mathbf{b } \left[\begin{array}{c } f(1)\\ f(\xi)\\ \vdots\\ f(\xi^{(m_t-1)})\\ \alpha f(1)\\ \vdots \\\alpha^{\nu } f(\xi^{(m_t-1 ) } ) \end{array } \right ] & = 0\end{aligned}\ ] ] let the function be as in ( [ eqn : defomega ] ) such that it maps to .note that , since is a one - to - one mapping , as seen in ( [ eqn : defomega ] ) in section [ subsec : def ] , we immediately see that , ( [ eqn : intermsofb ] ) can be rewritten as , } _ \mathbf{g } \underbrace { \left[\begin{array}{c } f(1)\\ f(\xi)\\ \vdots\\ f(\xi^{(m_t-1 ) } ) \end{array } \right ] } _ { \mathbf{c}_f } & = 0\end{aligned}\ ] ] where , or equivalently as \underbrace { \left [ \begin{array}{cccc } 1 & \ldots & 1 & 1 \\ \xi^{2^{r-1 } } & \ldots & \xi^{2 } & \xi \\ ( \xi^ { 2 } ) ^{2^{r-1 } } & \ldots & ( \xi^{2})^2 & \xi^{2 } \\ \vdots & & \vdots & \\ ( \xi^{(m_t-1 ) } ) ^{2^{r-1 } } & \ldots & ( \xi^{(m_t-1)})^2 & \xi^{(m_t-1 ) } \\ \end{array } \right ] } _ { \mathbf{w } \in { \mathbb{f}_{{2}^t}}^{m_t\times r } } \left[\begin{array}{c } f_{r-1}\\ f_{r-2}\\ \vdots\\ f_0 \end{array } \right ] & = 0\end{aligned}\ ] ] if the only element in is the all zero vector then , , has full binary rank , we have already shown the result in theorem [ rate1 ] . if not , by theorem [ theorem : basis ] there exists a set of minimal vectors , for .if it implies that , and therefore which in turn would imply that all matrices in have rank at least .we will prove that by contradiction .let us assume that there are more than such minimal vectors _taking any of the minimal vectors of the solution space we conclude that , \left [ \begin{array}{cccc } 1 & \ldots & 1 & 1 \\ \xi^{2^{r-1 } } & \ldots & \xi^{2 } & \xi \\ ( \xi^ { 2 } ) ^{2^{r-1 } } & \ldots & ( \xi^{2})^2 & \xi^{2 } \\ \vdots & & \vdots & \\ ( \xi^{(m_t-1 ) } ) ^{2^{r-1 } } & \ldots & ( \xi^{(m_t-1)})^2 & \xi^{(m_t-1 ) } \\ \end{array } \right ] } _ { \mathbf{p } } \left [ \begin{array}{c } f_{r-1 } \\ \vdots \\ f_1 \\ f_0 \end{array } \right ] & = \mathbf{0}_{r \times 1}\end{aligned}\ ] ] where .this is possible iff , as shown in lemma [ lemma : detpnonzero ] by the linear independence of it follows that the determinant can never be zero .therefore there can be at most basis vectors and from ( [ eqn : sizeofd ] ) and property [ prop3 ] of theorem [ theorem : basis ] since , .therefore all matrices in have rank at least . ' '' '' the consequence of theorem [ rate2 ] is that satisfies the requirements of definition [ suitable_isi ] and therefore can be used to construct diversity embedded codes for fading isi channels as done in theorem [ thm : mlc_isi ] .we will start off by giving an example of a code which has full diversity equal to when transmitted over the flat fading channel but does not have the maximum possible diversity of when transmitted over an isi channel with taps ._ example 1 : _ consider construction of a code for , with rate and bpsk signaling using code constructions given in . to design these codes , use the field extension with the primitive polynomial given by and the primitive element .define , where depends on the input message .the space time codeword is obtained as , ^t\end{aligned}\ ] ] where is the representation of as a binary row vector and . as was shown in code achieves full diversity _ i.e. 
_ , has rank for all nonzero .now assume that we use this code for transmission over an isi channel with .since this is a linear code , the rank distance of the code is the minimum rank of a nonzero codeword .therefore the space time codeword corresponding to is given by , .\end{aligned}\ ] ] when transmitted over the isi channel we see that the equivalent space time codeword is given by , .\end{aligned}\ ] ] clearly since , we conclude that the space time codeword which achieves full diversity over the flat fading channel does not achieve the maximum possible diversity of over the isi channel ._ example 2 : _ similarly this can be shown to hold true for any diversity point .consider for example the case of , , and bpsk signaling using code constructions given in .use the field extension with the primitive element .define , where depends on the input message as before .the space time codeword is obtained as , ^t\end{aligned}\ ] ] where is the representation of as a binary row vector and . as was shown in code achieves diversity _ i.e. _ , has rank for all nonzero .but it can be seen as before that the space time codeword corresponding to does not achieve the maximum possible diversity of when transmitting over the isi channel with taps . _ example 3 : _ consider construction of a bpsk code for , , with rate and hence . to design these codes , use the field extension with the primitive polynomial given by and the primitive element .the set of codeword polynomials which satisfy the constraints in ( [ eq : sdef ] ) are given by , this set is of cardinality latexmath:[\ ] ] where note that this pivoting and reduction to a row echelon form is a full rank operation and preserves the rank of .therefore , where and .let the columns containing the pivots in be denoted by .therefore by the cauchy binet formula , we have note that for all such that the maximum coefficient of in is less than the maximum coefficient of in by at least .therefore , also note that , therefore , therefore by the linear independence of we can conclude that there exists a term in with a power of which in not canceled by any other term in the equation ( [ eqn : detp ] ) . therefore we conclude that implying . hence proved .a. r. calderbank , s. n. diggavi and n. al - dhahir .space - time signaling based on kerdock and delsarte - goethals codes ._ ieee international conference on communications ( icc ) , pp 483 - 487 , paris , june 2004 ._ s. n. diggavi , n. al - dhahir , and a. r. calderbank , diversity embedding in multiple antenna communications , advances in network information theory ._ dimacs series in discrete mathematics and theoretical computer science , pages 285 - 301 , 2004 ._ h. e. gamal , a. r. hammons , y. liu , m. p. fitz , o. y. takeshita , on the design of space - time and space - frequency codes for mimo frequency selective fading channels , _ ieee transactions on information theory , 49(9):22772291 , september 2003 ._ j - c .guey , m. p. fitz , m. r. bell , and w - y .kuo , signal design for transmitter diversity wireless communication systems over rayleigh fading channels ._ ieee transactions on communications , 47(4):527537 , april 1999 ._ h. f. lu and p. v. kumar , rate - diversity trade - off of space - time codes with fixed alphabet and optimal constructions for psk modulation , _ ieee transactions on information theory , 49(10):27472752 , october 2003 ._ v. tarokh , n. 
seshadri , and a. r. calderbank , space - time codes for high data rate wireless communication : performance criterion and code construction . _ ieee transactions on information theory , 44(2):744 - 765 , march 1998 ._
designs for transmit alphabet constrained space - time codes naturally lead to questions about the design of rank distance codes . recently , diversity embedded multi - level space - time codes for flat fading channels have been designed from sets of binary matrices with rank distance guarantees over the binary field by mapping them onto qam and psk constellations . in this paper we demonstrate that diversity embedded space - time codes for fading inter - symbol interference ( isi ) channels can be designed with provable rank distance guarantees . as a corollary we obtain an asymptotic characterization of the fixed transmit alphabet rate - diversity trade - off for multiple antenna fading isi channels . the key idea is to construct and analyze properties of binary matrices with a particular structure induced by isi channels .
under the influence of the solar wind , the magnetosphere resides in a complex , non - equilibrium state .the plasma particles have non - maxwellian velocity distribution , mhd turbulence is present everywhere , and intermittent energy transport known as bursty - bulk flows occurs as well .the magnetospheric response to particular solar events constitutes an essential aspect of space weather while the response to solar variability in general is often referred to as _ space climate _ .theoretical approaches to space climate involve concepts and methods from stochastic processes , nonlinear dynamics and chaos , turbulence , self - organized criticality , and phase transitions .self - organization can lead to low - dimensional behavior in the magnetosphere .however , power - law dependence observed in the fourier spectra of the auroral electrojet ( ae ) index is a typical signature of high dimensional colored noise indicating multi - scale dynamics of the magnetosphere . in order to reconcile low - dimensional , deterministic behavior with high - dimensionality, proposed that a high - dimensional system near self - organized criticality ( soc ) can be characterized by a few parameters whose evolution is governed by a small number of nonlinear equations .some magnetospheric models , like the one presented in , are based on the soc - concept . herea system tunes itself to criticality and the energy transport across scales is mediated by avalanches which are power - law distributed in size and duration . on the other hand ,it was suggested in that the substorm dynamics can be described as a non - equilibrium phase transition ; i.e. as a system tuned externally to criticality . here , a power - law relation is given , with characteristic exponent close to the input - output critical exponent in a second - order phase transition .in fact , it is claimed in that the global features of the magnetosphere correspond to a first order phase transition whereas multi - scale processes correspond to the second- order phase transitions .the existence of metastable states in the magnetosphere , where intermittent signatures might be due to dynamical phase transitions among these states , was suggested by , and forced and/or self - organized criticality ( fsoc ) induced by the solar wind was introduced as a conceptual description of magnetospheric dynamics .the concept of intermittent criticality was suggested by who asserted that during intense magnetic storms the system develops long - range correlations , which further indicates a transition from a less orderly to a more orderly state .here , substorms might be the agents by which longer correlations are established .this concept implies a time - dependent variation in the activity as the critical point is approached , in contrast to soc . in the present paper we investigate determinism and predictability of observables characterizing the state of the magnetosphere during geomagnetic storms as well as during its quiet condition , butthe emphasis is on the evolution of these properties over the course of major magnetic storms .the measure of determinism employed here increases if the system dynamics is dominated by modes governed by low - dimensional dynamics .hence , the determinism in most cases is a measure of low - dimensionality . for a low - dimensional , chaotic systemthe predictability measure increases when the largest lyapunov exponent increases , and hence it is really a measure of un - predictability . 
for a high - dimensional or stochastic system it is related to the degree of persistence in timeseries representing the dynamics .high persistence means high predictability .one of the most useful data tools for probing the magnetosphere during substorm conditions is the ae minute index which is defined as the difference between the au index , which measures the eastward electrojet current in the auroral zone , and the al index , which measures the westward electrojet current , and is usually derived from 12 magnetometers positioned under the auroral oval .the auroral electrojet , however , does not respond strongly to the specific modifications of the magnetosphere that occur during magnetic storms .a typical storm characteristic , however , is a change in the intensity of the symmetric part of the ring current that encircles earth at altitudes ranging from about 3 to 8 earth radii , and is proportional to the total energy in the drifting particles that form this current system .the indices and sym - h indices are both designed for the study of storm dynamics .these indices contain contribution from the magnetopause current , the partial and symmetric ring current , the substorm current wedge , the magnetotail currents , and induced currents on the earth s surface .they are derived from similar data sources , but sym - h has the distinct advantage of having 1-min time resolution compared to the 1-hour time resolution of . has recommended that the sym - h index be used as a de facto high - resolution index .the analysis of these indices are central to this study .we particularly focus on sym - h and sym - h which is derived from the sym - h when the contribution of the magnetopause current is excluded .the typical magnetic storm consists of the initial phase , when the horizontal magnetic field suddenly increases and stays elevated for several hours , the main phase where this component is depressed for one to several hours , and the recovery phase which also lasts several hours .the initial phase has been associated with northward directed imf ( little energy enters the magnetosphere ) , but it has been discovered that this phase is not essential for the storm to occur . in order to define a storm , we follow the approach of , where the minimum is a common reference epoch , the main - phase decrease is sufficiently steep , and the recovery phase is also defined .the sym - h index data are downloaded from world data center , with 1-min resolution .we also use minute data for the interplanetary magnetic field ( imf ) component , minute data for the solar wind bulk velocity along the sun - earth axis , as well as flow pressure which is given in nt .these data are retrieved from the omni satellite database and are given in the gse coordinate system .gaps of missing data in , and flow pressure are linearly interpolated from the data which are not missing , while sym - h data are analyzed for the entire period .the same result for the and is obtained when gaps of missing data are excluded from the analysis .data for the period from january 2000 till december 2005 is used to compute general properties of the magnetosphere . 
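a minimal pandas sketch of this preprocessing step is given below ; the file names , column names and the example storm epoch are hypothetical placeholders , and only the linear gap interpolation and the three - day window around the sym - h minimum follow the text .

```python
import pandas as pd

# Hypothetical local exports with 1-min resolution; the column names are
# assumptions, not the actual WDC / OMNIWeb formats.
symh = pd.read_csv("sym_h_2000_2005.csv", parse_dates=["time"], index_col="time")
omni = pd.read_csv("omni_1min_2000_2005.csv", parse_dates=["time"], index_col="time")

# sym-h is analysed as is; gaps in the solar-wind observables are filled by
# linear interpolation in time, as described in the text.
for col in ["bz_gse", "vx_gse", "flow_pressure"]:
    omni[col] = omni[col].interpolate(method="time")

# Restrict to a +/- 3 day window around a storm epoch (the sym-h minimum);
# the epoch below is a placeholder for one of the storms analysed in the text.
epoch = pd.Timestamp("2000-04-06 12:00")
window = slice(epoch - pd.Timedelta("3D"), epoch + pd.Timedelta("3D"))
storm_symh = symh.loc[window, "sym_h"]
storm_bz = omni.loc[window, "bz_gse"]
print(storm_symh.describe())
```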
in order to analyze storm conditionsall the indices are analyzed during ten intense magnetic storms .analyzed storms occurred on 6 april 2000 , 15 july 2000 , 12 august 2000 , 31 march 2001 , 21 october 2001 , 28 october 2001 , 6 november 2001 , 7 september 2002 , 29 october 2003 , and 20 november 2003 .these storms are characterized with minimum which is in the range between -150 nt to -422 nt .+ the remainder of the paper is organized as follows : section 2 describes the data analysis methods employed .section 3 presents analysis results discerning general statistical scaling properties of global magnetospheric dynamics using minute data over several years and data generated by a numerical model which produces realizations of a fractional ornstein - uhlenbeck ( fo - u ) process . in particular we study how determinism and predictability of the geomagnetic and solar wind observables change over the course of magnetic storms .section 4 is reserved for discussion of results and section 5 for conclusions .the recurrence plot is a powerful tool for the visualization of recurrences of phase - space trajectories .it is very useful since it can be applied to non - stationary as well as short time series , and this is the nature of data we use to explore magnetic storms . prior to constructing a recurrence plotthe common procedure is to reconstruct phase space from the time - series of length by time - delay embedding .suppose the physical system at hand is a deterministic dynamical system describing the evolution of a state vector in a phase space of dimension , i.e. evolves according to an autonomous system of 1st order ordinary differential equations ; and that an observed time series is generated by the measurement function , assume that the dynamics takes place on an invariant set ( an attractor ) in phase space , and that this set has box - counting fractal dimension .since the dynamical system uniquely defines the entire phase - space trajectory once the state at a particular time is given , we can define uniquely an -dimensional measurement function , where the vector components are given by equation ( [ eq2 ] ) , and is a time delay of our choice . if the invariant set is compact ( closed and bounded ) , is a smooth function and , the map given by equation ( [ eq3 ] ) is a topological embedding ( a one - to - one continuous map ) between and . the condition can be thought of as a condition for the image not to intersect itself , i.e. 
to avoid that two different states on the attractor are mapped to the same point in the -dimensional embedding space .if such an embedding is achieved , the trajectory ( where is given by equation ( [ eq3 ] ) ) in the embedding space is a complete mathematical representation of the dynamics on the attractor .note that the dimension of the original phase space is irrelevant for the reconstruction of the embedding space .the important thing is the dimension of the invariant set on which the dynamics unfolds .there are practical constraints on useful choices of the time delay .if is much smaller than the autocorrelation time the image of becomes essentially one - dimensional .if is much larger than the autocorrelation time , noise may destroy the deterministic connection between the components of , such that our assumption that determines will fail in practice .a common choice of has been the first minimum of the autocorrelation function , but it has been shown that better results are achieved by selecting the time delay as the first minimum in the average mutual information function , which can be percieved as a nonlinear autocorrelation function . herewe use the average mutual information function to calculate the value of .the recurrence - plot analysis deals with the trajectories in the embedding space .if the original time series has elements , we have a time series of vectors for .this time series constitutes the trajectory in the reconstructed embedding space .the next step is to construct a \times [ ( n-(m-1)\tau]$ ] matrix consisting of elements 0 and 1 .the matrix element is 1 if the distance is in the reconstructed space , and otherwise it is 0 . the recurrence plot is simply a plot where the points for which the corresponding matrix element is 1 is marked by a dot . for a deterministic systemthe radius is typically chosen as 10% of the diameter of the reconstructed attractor , but varies for different sets of data . for a non - stationary stochastic process like a brownian motionthere is no bounded attractor for the dynamics , and the diameter is limited by the length of the data record .the first example of recurrence plot is shown in figure 1 , obtained from the when no storm is present on 5 september 2001 . in figure 2 ,the recurrence plot is shown for the for the strong storm on 6 april 2000 . 
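the reconstruction and recurrence matrix described above can be computed with a few lines of numpy ; in the sketch below the delay is taken as the first minimum of a histogram - based average mutual information , epsilon is 10% of the data range as in the text , and the euclidean norm and the synthetic random - walk test signal are choices made here for illustration .

```python
import numpy as np

def ami(x, lag, bins=32):
    """Average mutual information between x(t) and x(t+lag), histogram estimate."""
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def first_minimum(values):
    """Index (as a lag, starting at 1) of the first local minimum of a sequence."""
    for k in range(1, len(values) - 1):
        if values[k] < values[k - 1] and values[k] <= values[k + 1]:
            return k + 1
    return len(values)

def embed(x, m, tau):
    """Time-delay embedding: rows are (x_i, x_{i+tau}, ..., x_{i+(m-1)tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def recurrence_matrix(x, m, tau, eps_fraction=0.10):
    """Binary recurrence matrix with epsilon set to 10% of the data range."""
    v = embed(x, m, tau)
    eps = eps_fraction * (x.max() - x.min())
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
    return (d < eps).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.cumsum(rng.normal(size=2000))       # random-walk stand-in for an index
    tau = first_minimum([ami(x, k) for k in range(1, 200)])
    R = recurrence_matrix(x, m=3, tau=tau)
    print("delay =", tau, " recurrence rate =", round(float(R.mean()), 3))
```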
in both cases ,embedding dimension is and , which corresponds to 10% of the data range .[ fig1 ] during quiet condition on september 5 , 2001 .a ) time series , b ) recurrence plot of the time series shown in ( a).,width=302 ] [ fig2 ] during the strong storm on 6 april 2000 .a ) time series , b ) recurrence plot of the time series shown in ( a).,width=302 ] the empirical mode decomposition ( emd ) method , developed in is very useful on non - stationary and nonlinear time series .emd method can give a change of frequency in any moment of time ( instantaneous frequency ) and a change of amplitude in the system .however , in order to properly define instantaneous frequency , a time series should have the same number of zero crossings and extrema ( or they can differ at most by one ) , and a local mean should be close to zero .the original time series usually does not have these characteristics and should be decomposed into intrinsic mode functions ( if ) for which instantaneous frequency can be defined .decomposition can be obtained through the so - called sifting process .this is an adaptive process derived from the data and can be briefly described as follows : all local maxima and minima in the time series are found , and all local maxima and minima are fitted by cubic spline and these fits define the upper ( lower ) envelope of the time series .then the mean of the upper and lower envelope is defined , and the difference between the time series and this mean represents the first if , , if instantaneous frequency can be obtained , defined by some stopping criterion . if not , the procedure is repeated ( now starting from instead of ) until the first if is produced .higher ifs are obtained by subtracting the first if from the time series and the entire previously mentioned procedure is repeated until a residual , usually a monotonic function , is left .we use a stopping criterion defined by , where on fraction of the if , and , on the remaining fraction of the if . here , is the if amplitude , and , , and . by the above definitions , ifs are complete in the sense that their summation gives the original time series : where is the number of ifs and is a residual . in figure 3awe show the ifs from emd performed on the imf during a magnetic storm on 6 april 2000 ( whose time series is plotted in figure 2a ) , while in figure 3b the index for the same storm is shown .[ fig3 ] for the magnetic storm on 6 april 2000 , b ) for the same event.,width=302 ] in order to study stochastic behavior of a time series by means of emd analysis , we refer to who studied characteristics of white noise using the emd method .they derived for white noise the relationship , where and represents empirical variance and mean period for the if . here , , where is the _m_th if and is the ratio of the _m_th if length to the number of its zero crossings . analyzed telecommunication indices and noticed a resemblance to autoregressive processes of the first order ar(1 ) , which are stochastic and linear processes . for such processes . for fractional gaussian noise processes ( ) and fractional brownian motions ( ) we have the connection , where is the hurst exponent , as shown by .a useful feature of the emd analysis is the possibility of extraction of trends in the time series , because the slowest if components should often be interpreted as trends .this is an advantage compared to the standard variogram or rescaled - range techniques , whose estimation of the scaling exponents is biased by the trend . 
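a minimal sifting sketch is given below , with simplified envelope end - point handling and a simple cauchy - type stopping rule in place of the threshold criterion quoted above ; it also computes the if variance versus mean - period slope used later for scaling estimates . the calibration of that slope against the hurst exponent is done numerically in the text and is not reproduced here .

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def envelope_mean(x):
    """Mean of upper/lower cubic-spline envelopes; end points are pinned to the
    signal values, a crude boundary treatment compared with full EMD codes."""
    t = np.arange(len(x))
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return None
    imax = np.concatenate(([0], imax, [len(x) - 1]))
    imin = np.concatenate(([0], imin, [len(x) - 1]))
    upper = CubicSpline(imax, x[imax])(t)
    lower = CubicSpline(imin, x[imin])(t)
    return 0.5 * (upper + lower)

def sift(x, max_sift=50, tol=0.2):
    """Extract one intrinsic mode function with a simple Cauchy-type stopping rule."""
    h = x.copy()
    for _ in range(max_sift):
        m = envelope_mean(h)
        if m is None:
            return None
        h_new = h - m
        if np.sum(m ** 2) / (np.sum(h ** 2) + 1e-12) < tol:
            return h_new
        h = h_new
    return h

def emd(x, max_imfs=10):
    imfs, residual = [], x.astype(float).copy()
    for _ in range(max_imfs):
        imf = sift(residual)
        if imf is None:
            break
        imfs.append(imf)
        residual = residual - imf
    return imfs, residual

def mean_period(imf):
    """Ratio of the IF length to its number of zero crossings."""
    crossings = np.sum(np.diff(np.signbit(imf)) != 0)
    return len(imf) / max(crossings, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.cumsum(rng.normal(size=4096))        # Brownian-motion-like test series
    imfs, _ = emd(x)
    periods = np.array([mean_period(c) for c in imfs])
    variances = np.array([np.var(c) for c in imfs])
    slope = np.polyfit(np.log(periods), np.log(variances), 1)[0]
    # For a Brownian-motion-like series the fitted slope should come out positive,
    # roughly near 1; the precise slope-to-Hurst calibration is done in the text.
    print("log-variance vs log-period slope:", round(float(slope), 2))
```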
in this paperwe employ a simple test for determinism , developed by , where the following hypothesis is tested : when a system is deterministic , the orientation of the trajectory ( its tangent ) is a function of the position in the phase space .further , this means that the tangent vectors of a trajectory which recurs to the same small `` box '' in phase space , will have the same directions since these are uniquely determined by the position in phase space .on the other hand , trajectories in a stochastic system have directions which do not depend uniquely on the position and are equally probable in any direction .this test works only for continuous flows , and is not applicable to maps since consecutive points on the orbit may be very separated in the phase space . for flows ,the trajectory orientation is defined by a vector of a unit length , whose direction is given by the displacement between the point where trajectory enters the box _j _ to the point where the trajectory exits the same box . the displacement in _m_-dimensional embedding space is given from the time - delay embedding reconstruction : , \label{eq6}\end{aligned}\ ] ] where is the time the trajectory spends inside a box .the orientation vector for the pass through box is the unit vector .the estimated averaged displacement vector in the box is where is the number of passes of the trajectory through box .if the dynamics is deterministic , the embedding dimension is sufficiently high , and in the limit of vanishingly small box size , the trajectory directions should be aligned and . in the case of finite box size , will not depend very much on the number of passes , and will converge to as . in contrast , for the trajectory of a random process , where the direction of the next step is completely independent of the past , will decrease with as . in our analysiswe will choose the linear box dimension equal to the mean distance a phase - space point moves in one time step and set time step in equation ( [ eq6 ] ) .[ fig4 ] in each box visited by a 2-dimensional projection of the -dimensional embedding space reconstructed from the time series in ( a).,width=226 ] [ fig5 ] in each box visited by a 2-dimensional projection of the -dimensional embedding space reconstructed from the time series in ( a).,width=226 ] in figure 4b we show displacement vectors averaged over the passes through the box , for a three - dimensional embedding of the lorenz attractor , whose time series is shown in figure 4a ; in figure 5b the same is shown for a random process , in this case a fractional ornstein - uhlenbeck ( fo - u ) process .these model systems will be used throughout this paper as archetypes of low - dimensional and stochastic systems , respectively .the lorenz system has the form with standard coefficient values , , and , which give rise to a chaotic flow . the fo - u process is described by the stochastic equation : where is a fractional gaussian noise with hurst exponent . the drift ( and ) and diffusion ( ) parameters are fitted by the least square regression to the time series of the sym - h storm index .this will be explained in more detail in section [ stormdeterminism ] .the degree of determinism of the dynamics can be assessed by exploring the dependence of on . in practice , this can be done by computation of the averaged displacement vector where the average is done over all boxes with same number of trajectory passes . 
as shown in ,the average displacement of passes in _m_-dimensional phase space for the brownian motion is where is the gamma function .the deviation in between a given time series and the brownian motion can be characterized by a single number given by the weighted average over all boxes of the quantity , where we have explicitly highlighted that the averaged displacement of the trajectory in the reconstructed phase space depends on the time - lag . for a completely deterministic signalwe have , and for a completely random signal .all systems described by the laws of classical ( non - quantum ) physics are deterministic in the sense that they are described by equations that have unique solutions if the initial state is completely specified . in this senseit seems meaningless to provide tests for determinism .the test described in this section is really a test of _ low dimensionality_. the test is performed by means of a time - delay embedding , for embedding dimension up to a maximum value , where is limited by practical constraints .high requires longer time series in order to achieve adequate statistics . a test that fails to characterize the system as deterministic for in reality only tells us that the embedding dimension is too small , i.e. the number of degrees of freedom of the system exceeds .such systems will in the following be characterized as random , or stochastic .[ fig6 ] : square symbols are derived from numerical solutions of the lorenz system , and triangles from these solutions after randomization of phases of fourier coefficients .b ) : diamonds from lorenz system , and triangles after randomization of phases . ,width=302 ] in figure 6a , we plot versus for a time series generated as a numerical solution of the lorenz system .here we use , , and the box size is of the order of average distance a phase - space point moves during one time - step . in the same plotwe also show the same characteristic for the surrogate time series generated by randomizing the phases of the fourier coefficients of the original time series .this procedure does not change the power spectrum or auto - covariance , but destroys correlation between phases due to nonlinear dynamics . for low - dimensional , nonlinear systemssuch randomization will change , as is demonstrated for the lorenz system in figure 6a .we also calculate versus for these time series and plot the results in figure 6b .again , for the original and surrogate time series are significantly different .[ fig7 ] : square symbols are derived from numerical solutions of the fo - u stochastic equation , and triangles from these solutions after randomization of phases of fourier coefficients .b ) : squares from fo - u equation , and triangles after randomization of phases ., width=302 ] [ fig8 ] as a function of embedding dimension for solution of lorenz equations ( triangles ) , fo - u process ( squares ) , and tsym - h ( circles ) . , width=302 ] for the numerically generated fo - u process , where , and , we observe in figure 7 that and for the original and surrogate time series do not differ , demonstrating that these quantities are insensitive to randomization of phases of fourier coefficients if the process is generated by a linear stochastic equation .one should pay attention to the nature of the experimental data used in the test of determinism . 
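a sketch of this determinism test is given below . it follows the recipe above ( box size equal to the mean one - step displacement , unit displacement vectors averaged per box , classes of boxes with the same number of passes ) , but two simplifications are assumptions of this sketch : every sampled step counts as a pass through its box , and the brownian - motion reference value is replaced by a simple n^(-1/2 ) null instead of the gamma - function expression , so the resulting numbers are only indicative . the lorenz parameters are the standard values .

```python
import numpy as np

def embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def determinism(x, m=4, tau=10):
    """Box-averaged directional statistic: unit displacement vectors are averaged
    in each visited box and compared with the ~ n**-0.5 decay expected for
    isotropic random steps (a simple null used in place of the exact expression)."""
    v = embed(x, m, tau)
    steps = np.diff(v, axis=0)
    norms = np.linalg.norm(steps, axis=1)
    keep = norms > 0
    units = steps[keep] / norms[keep, None]
    eps = norms[keep].mean()                       # box size: mean one-step displacement
    boxes = np.floor(v[:-1][keep] / eps).astype(int)
    sums, counts = {}, {}
    for b, u in zip(map(tuple, boxes), units):
        sums[b] = sums.get(b, 0.0) + u
        counts[b] = counts.get(b, 0) + 1
    by_n = {}                                      # resultant lengths per pass class
    for b, n in counts.items():
        by_n.setdefault(n, []).append(np.linalg.norm(sums[b]) / n)
    lam = {n: np.mean(vals) for n, vals in by_n.items() if n >= 2}
    # weighted deviation from the random-walk null: ~0 random, ~1 deterministic
    num = sum(len(by_n[n]) * (lam[n] - n ** -0.5) / (1 - n ** -0.5) for n in lam)
    den = sum(len(by_n[n]) for n in lam)
    return num / den if den else np.nan

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # deterministic example: x-component of the Lorenz system, crude Euler integration
    xyz, dt, xs = np.array([1.0, 1.0, 1.0]), 0.01, []
    for _ in range(20000):
        dx = 10.0 * (xyz[1] - xyz[0])
        dy = xyz[0] * (28.0 - xyz[2]) - xyz[1]
        dz = xyz[0] * xyz[1] - (8.0 / 3.0) * xyz[2]
        xyz = xyz + dt * np.array([dx, dy, dz])
        xs.append(xyz[0])
    noise = np.cumsum(rng.normal(size=20000))      # stochastic reference signal
    print("Lorenz-x  delta:", round(determinism(np.array(xs)), 2))
    print("random walk delta:", round(determinism(noise), 2))
```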
for low - dimensional data contaminated by low - amplitude noise or brownian motions, the analysis results will depend on the box size , but the problem is solved by choosing it sufficiently large . fora low - dimensional system represented by an attractor of dimension the results may also depend on the choice of embedding dimension .the estimated determinism tends to increase with increasing until it stabilizes at as approaches . for a random signalthere is no such dependence on embedding dimension , as demonstrated by example in figure 8 .here we plot the determinism ( when ) versus embedding dimension for the lorenz and fo - u time series .for comparison we also plot this for transformed sym - h , tsym - h , during magnetic storm times ( the transformation and reasons for it are explained in section [ sec : results ] ) .it converges to a value less than 1 , and for embedding dimensions higher than for the lorenz time series .this indicates that this geomagnetic index during magnetic storms exhibit both a random and a deterministic component , and that the dimensionality of this component is higher than for the lorenz system .in this subsection we develop an analysis which is based on the diagonal line structures of the recurrence plot . in our studywe use the average inverse diagonal line length : where is a histogram over diagonal lengths : for a low - dimensional , chaotic deterministic system ( for which the embedding dimension is sufficient to unfold the attractor ) is an analog to the largest lyapunov exponent , and is a measure of the _ degree of unpredictability_. for stochastic systems , the recurrence plots do not have identifiable diagonal lines , but rather consists of a pattern of dark rectangles of varying size , as observed in figure 1 . for embedding dimension a dark rectangle corresponds to time intervals on the horizontal axis and on the vertical axis , for which the signal is inside the same -interval whenever is included in either or . in this casethe length of unbroken diagonal lines is a characteristic measure of the linear size of the corresponding rectangle , and the pdf a measure of the distribution of residence times of the trajectory inside -intervals . for selfsimilar stochastic processes such as fractionalbrownian motions can be computed analytically , and computed as function of the selfsmilarity exponent . 
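the diagonal - line statistics can be computed directly from a recurrence matrix , as sketched below ; the minimum line length of 2 , the exclusion of the main diagonal and the synthetic test signal are choices made here , since the text does not fix these details .

```python
import numpy as np

def diagonal_lengths(R, lmin=2):
    """Lengths of unbroken diagonal segments in a recurrence matrix R,
    excluding the main diagonal (line of identity)."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):
        run = 0
        for val in np.append(np.diagonal(R, offset=k), 0):   # trailing 0 flushes the run
            if val:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
    return np.array(lengths)

def average_inverse_length(lengths):
    """Average inverse diagonal length: short diagonals (rapidly separating
    neighbourhoods) give large values, i.e. low predictability."""
    if len(lengths) == 0:
        return np.nan
    counts = np.bincount(lengths)
    l = np.arange(len(counts))
    p = counts / counts.sum()
    return float(np.sum(p[1:] / l[1:]))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = np.sin(0.07 * np.arange(1500)) + 0.05 * rng.normal(size=1500)
    eps = 0.1 * (x.max() - x.min())                # 10% of the data range
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(np.uint8)
    lengths = diagonal_lengths(R)
    print("number of diagonal lines:", len(lengths))
    print("average inverse diagonal length:", round(average_inverse_length(lengths), 4))
```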
since the residence time inside an -box increases as the smoothness of the trajectory increases ( increasing ) , we should find that is a monotonically decreasing function of .in section [ sec : predictabilityresults ] we compute numerically for a synthetically generated fo - u process and thus demonstrate this relationship between and .hence both and can serve as measures of predictability , but is more general , because it is not restricted to selfsimilar processes or processes with stationary increments , and applies to low - dimensional chaotic as well as stochastic systems .in it is shown that the fluctuation amplitude ( or more precisely ; the one - timestep increment ) of the ae index is on the average proportional to the instantaneous value of the index .this gives rise to a special kind of intermittency associated with multiplicative noises , and leads to a non - stationary time series of increments .however , the time series is stationary , implying that the stochastic process has stationary increments .thus , a signal with stationary increments , which still can exhibit a multifractal intermittency , can be constructed by considering the logarithm of the ae index .similar properties pertain to the sym - h index , although in these cases we have to add a constant before taking the logarithm , i.e. has stationary increments .using the procedure described in the estimated coefficients are and . in figure 9a we show the increments for the original sym - h data , while in figure 9b we show the increments for the transformed signal , which in the following will be denoted tsym - h . in this section , we employ emd and variogram analysis to tsym - h , imf and solar wind flow speed .the emd analysis is used to compute intrinsic mode functions ( if ) for time intervals of minutes using data for the entire period from january 2000 till december 2005 .the empirical variance estimates versus mean period for each if component in tsym - h , , and are shown as log - log plots in figure 10a . in section [ emdmethod ]we mentioned that has demonstrated that for fractional gaussian noise the slope is equal to , where is the hurst exponent .this estimate for the slope seems valid for our data as well , as is shown in the figure from comparison with the variogram , even though the time series on scales up to minutes are non - stationary processes having the character of fractional brownian motions .the results from the two different methods shown in figures 10a and 10b are roughly consistent , using the relations and , which implies . in practice ,we have calculated from emd as a function of for fractional gaussian noises and motions with self - similarity exponent , and have derived a relation .the variogram represent a second order structure function : which scales with a time - lag as , is denoted as selfsimilarity exponent , and is a time series .note that a hurst exponent implies that the process is a nonstationary motion , and if the process is selfsimilar , the selfsimilarity exponent is . in our terminologya white noise process has hurst exponent and a brownian motion has . from figure 10awe observe three different scaling regimes for tsym - h . for time scalesless than a few hundred minutes it scales like an uncorrelated motion ( ) . 
on time scales from a few hours to a week it scales as an antipersistent motion ( depending on analysis method ) , and on longer time scales than a week it is close to a stationary pink noise ( ) .similar behavior was observed for in , but there the break between non - stationary motion and stationary noise ( where changes from to ) occurs already after about 100 minutes , indicating the different time scales involved in ring current ( storm ) dynamics and electrojet ( substorm ) dynamics .results for indicate a regime with antipersistent motion ( ) up to a few hundred minutes , and then an uncorrelated or weakly persistent motion ( ) up to a week . on longer time scales than this the variogram indicates that the process is stationary .the exponent for can not be estimated from the variogram since it is difficult to obtain a linear fit to the concave curve in figure 10b .the concavity is less pronounced in the curve derived from the emd method in figure 10a and , corresponding to an antipersistent motion ( ) , can be estimated on time scales up to a few hundred minutes . becomes stationary already after a few hundred minutes , which is similar to the behavior in , as pointed out by . in the concavity of the variogram follows from modelling as a ( multifractal ) ornstein - uhlenbeck process with a strong damping term that confines the motion on time scales longer than 100 minutes .accounting for this confinement the true " selfsimilarity exponent of the stochastic term turns out to be .thus the antipersistence derived from the emd analysis may be a spurious effect from this confinement .the conclusion in is that and behave as uncorrelated motions up to the scales of a few hours and become stationary on scales longer than this .moreover , the stochastic term modelling the two signals share the same multifractal spectrum . in comparison ,tsym - h and are non - stationary motions on scales up to a week before they reach the stationary regime . in figure 11we show and for tsym - h and its surrogate time - series with randomized phases of fourier coefficients .we observe that and for the surrogate time series does not deviate from those computed from the original tsym - h , indicating that the dynamics of tsym - h is not low - dimensional and nonlinear .the same results are obtained for imf and flow speed ( not shown here ) .[ fig10 ] versus mean period for each if component in tsym - h , , and shown as log - log plots .b ) the variogram shown in log - log plot . in both panelsstars are for tsym - h , diamonds for imf , and triangles for .note that a generalization of the result in yields ., width=302 ] [ fig11 ] : square symbols are for tsym - h , and triangles are for this signal after randomization of phases of fourier coefficients .b ) : squares is for tsym - h , and triangles are after randomization of phases.,width=302 ] [ fig12 ] for imf ( squares ) , ( triangles ) , tsym - h ( diamonds ) , fo - u ( stars ) computed before , during and after storm onset .the values for are computed using 12 hour intervals and are averaged over ten different storms .b ) the index averaged over the ten storms ., width=302 ] [ fig13 ] for tsym - h , where the triangles are the mean of computed 3 days before and after the storm for ten different storms .these curves represent non - storm conditions .the upper curve ( squares ) is the mean over all ten storms computed at the time of the minimum , i.e. 
it represents the -curve around storm onset .many curves are terminated for because there were no boxes with more than passages of the phase - space trajectory.,width=302 ] [ fig14 ] averaged over ten storms for time series where missing points have been interpolated ; tsym - h ( diamonds ) , tsym - h ( stars ) , ( triangles),width=302 ] in the following analysis we test for determinism in tsym - h , and for ten intense storms .the reference point in our analysis is the storm s main phase , and then we analyze all the data spanning the time interval three days before and three days after the storm in tsym - h , and .we compute for with a time resolution of 12 hours .the choice of from the -curve is a compromise between clear separation between low - dimensional and stochastic dynamics and small error bars ( which increase with increasing ) . in order to improve statistics for ,we compute determinism using data from all ten storms .this means that each is computed over 12 hours interval over 10 storms , which gives points . as a reference, we compute for the fo - u process , whose coefficients are fitted by the least square regression to the sym - h index during investigated storms . in all computations ,we use embedding dimension , time - delay , and . in figure 12awe plot for tsym - h , , and fo - u , and in figure 12b the index averaged over all ten storms is plotted , since this index shows precisely when the storm takes place .we can observe that is essentially the same for , and fo - u , and stays approximately constant during the course of a storm .however , for tsym - h increases during storm time . in order to demonstrate that the change in is significant , we plot in figure 13 for tsym - h , where the triangles are the mean of computed 3 days before and after the storm for ten different storms .these curves represent non - storm conditions .the upper curve ( squares ) is the mean over all ten storms computed at the time of the minimum , i.e. it represents the -curve around storm onset .in addition , we test determinism for the quantity sym - h=0.77 sym - h-11.9 , where is the solar wind s dynamic pressure .sym - h is a corrected index where the effect of the magnetopause current due to is subtracted , and thus represents the ring - current contribution to sym - h . in order to obtain stationary incrementswe analyze a transformed index ; tsym - h=(+ - h ) , where , and .since some data points are missing in , we have made a linear interpolation over the missing points .it seems that the interpolation decreases determinism in tsym - h , and for reference we compute determinism for the interpolated tsym - h , where the interpolation is done over the same points as in tsym - h , even though tsym - h does not have missing points . in figure 14we plot for tsym - h , tsym - h , and .we see that the determinism in tsym - h is lower than that in tsym - h , but it still increases during the storm . 
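The determinism measure used around storm onset follows the coarse-grained embedding idea of Kaplan and Glass (cited in the reference list): delay-embed the series, partition the reconstructed space into boxes, and average the unit displacement vectors of the trajectory passages through each box. Coherent low-dimensional flow gives resultant vectors close to unit length, stochastic motion does not. The sketch below only illustrates that idea with assumed parameters (grid size, minimum number of passages); it is not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def delay_embed(x, dim, delay):
    """Delay-coordinate embedding of a scalar time series."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def determinism(x, dim=3, delay=1, n_boxes=8, min_passes=5):
    """Mean length of the averaged unit displacement vector over occupied boxes.

    Values near 1 suggest coherent (low-dimensional) flow; a stochastic signal
    gives values near the random-walk baseline.
    """
    pts = delay_embed(x, dim, delay)
    steps = np.diff(pts, axis=0)
    norms = np.linalg.norm(steps, axis=1)
    ok = norms > 0
    unit = steps[ok] / norms[ok, None]
    origin = pts[:-1][ok]

    # Assign each trajectory passage to a box of a uniform grid.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    idx = np.floor((origin - lo) / (hi - lo + 1e-12) * n_boxes).astype(int)
    idx = np.clip(idx, 0, n_boxes - 1)

    sums = defaultdict(lambda: np.zeros(dim))
    counts = defaultdict(int)
    for key, v in zip(map(tuple, idx), unit):
        sums[key] += v
        counts[key] += 1

    lengths = [np.linalg.norm(sums[k] / counts[k])
               for k in counts if counts[k] >= min_passes]
    return float(np.mean(lengths)) if lengths else np.nan

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noise = np.cumsum(rng.standard_normal(20_000))                  # stochastic motion
    t = np.arange(20_000)
    wave = np.sin(0.05 * t) + 0.01 * rng.standard_normal(20_000)    # nearly deterministic
    print("random walk :", determinism(noise))
    print("noisy sine  :", determinism(wave))
```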
on the other hand shows no change in determinism during storm events .the determinism ( as measured by ) of the storm index tsym - h and tsym - h has been shown to exhibit a pronounced increase at storm time .a rather trivial explanation of this enhancement would be that it is caused by the trend " incurred by the wedge - shaped drop and recovery of the storm indices associated with a magnetic storm .we test this hypothesis by superposing such a wedge - shaped pulse to an fo - u process and compute .next , we take tsym - h for ten storms and for each set of data subtract the wedge - shaped pulse ( computed by a moving - average smoothing ) .the residual signal represents the detrended " fluctuations .the result is shown in figure 15 and reveals that the trend in fo - u process has no discernible influence on the determinism during the storm while , on the other hand , we observe that the enhancement of around storm time persists in the detrended " fluctuations .this result suggests that the increasing determinism during storms is a result of an enhanced low - dimensional component in the storm indices . as mentioned in section[ determinism ] for low - dimensional dynamics , nonlinearity may be important for the measure of determinism . for a nonlinear , low - dimensional system the destruction of nonlinear coupling by randomizing phases of fourier coefficients will in general reduce the determinism , while for a linear , stochastic process we will observe no such effect .but what role will nonlinearity play if it is introduced in the deterministic terms of a stochastic equation ?the deterministic term in the fractional langevin equation representing the fo - u is a linear damping term . however , the best representation of the damping / drift term in an fo - u model for tsym - h is not linear .following , if - h , the drift term is given as the conditional probability density given that : in fo - u is a linear function of , but a polynomial fit to drift term derived from tsym - h data requires a sixth order polynomial , confirming the nonlinearity of the tsym - h process .this is shown in figure 16 .next , we test determinism for the nonlinear fo - u process , whose scaling exponent is estimated from the variogram of tsym - h , and where and is used .figure 17a shows for numerical realizations of this process compared with the same analysis after randomization of the phases of the fourier coefficients .the result reveals that the nonlinear fo - u process is not more deterministic than its randomized version .next , we form the composite time series , where is the solution of the lorenz system and is the nonlinear fo - u process , both signals with zero mean and unit variance . 
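The drift term is estimated from data as the conditional mean increment given the current value (the conditional-expectation formula is garbled in the extraction). Below is a minimal sketch assuming the standard bin-averaged estimator followed by a least-squares polynomial fit; the Ornstein-Uhlenbeck test case has a known linear drift, so the recovered slope can be checked, while the text reports that a sixth-order polynomial is needed for tSYM-H.

```python
import numpy as np

def estimate_drift(x, dt=1.0, n_bins=40, poly_deg=6):
    """Estimate the drift f(x) ~ E[x(t+dt) - x(t) | x(t)] / dt by binning,
    then fit a polynomial of the requested degree."""
    increments = np.diff(x)
    levels = x[:-1]
    edges = np.linspace(levels.min(), levels.max(), n_bins + 1)
    centers, drift = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (levels >= lo) & (levels < hi)
        if mask.sum() > 20:                      # require enough samples per bin
            centers.append(0.5 * (lo + hi))
            drift.append(increments[mask].mean() / dt)
    coeffs = np.polyfit(centers, drift, poly_deg)
    return np.array(centers), np.array(drift), coeffs

if __name__ == "__main__":
    # Ornstein-Uhlenbeck test case: dx = -theta * x * dt + sigma * dW,
    # whose true drift is linear, f(x) = -theta * x.
    rng = np.random.default_rng(3)
    theta, sigma, dt, n = 0.05, 1.0, 1.0, 200_000
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    centers, drift, coeffs = estimate_drift(x, dt, poly_deg=1)
    print("fitted slope (expect about -theta = -0.05):", coeffs[0])
```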
again , embedding dimension is used .now the -curve is lowered when the phases are randomized , as shown in figure 17b , which confirms our conjecture that determinism is a measure of low - dimensionality .[ fig15 ] : triangles are derived from an fo - u process with a `` storm trend '' imposed , diamonds are derived from the detrended " tsym - h.,width=302 ] [ fig16 ] [ fig17 ] .a ) diamonds are derived from numerical solutions of the nonlinear fo - u .triangles are from these solutions after randomization of phases of fourier coefficients .b ) diamonds are derived from numerical solutions of the nonlinear fo - u with a solution of the _ x _ component of the lorenz system superposed .triangles are from the latter signals after randomization of phases.,width=302 ] [ fig18 ] as a function of the parameter _c_.,width=302 ] [ fig19 ] for tsym - h ( stars ) , tsym - h ( circles ) , ( squares ) , ( upward triangles ) , detrended tsym - h ( downward triangles ) averaged over ten storms .error bars represent standard deviation based on data from these ten storms .time origin is defined by the minimum of the average index for the ten storms.,width=302 ] [ fig20 ] for tsym - h ( stars ) , tsym - h ( circles ) , ( squares ) , ( triangles ) averaged over the same storms as in the previous figure.,width=302 ] [ fig21 ] vs. computed from numerical realizations of the fo - u process.,width=302 ] even though we deal with a predominantly stochastic system , its correlation and the degree of predictability changes in time , and our hypothesis is that abrupt transitions in the dynamics take place during events like magnetic storms and substorms .we therefore employ recurrence plot quantification analysis as a tool for detection of these transitions .we compute the average inverse diagonal line length as defined in equation ( [ eq4 ] ) , but the same results can be drawn from other quantities that can be derived from the recurrence plot . can be used as a proxy for the positive lyapunov exponent in a system with chaotic dynamics , and is sensitive to the transition from regular to chaotic behavior , as can be shown heuristically for the case of the lorenz system , where we use , , and is varied from 20 to 40 , such that transient behavior is obtained .for a hopf bifurcation occurs , which corresponds to the onset of chaotic flows . in figure 18awe plot a bifurcation diagram for the _ x _ component of the lorenz system as a function of the parameter , while in figure 18b we show for the _ x _ component again as a function of the parameter _c_. similar results have been obtained from the longest diagonal length , when applied to the logistic map . 
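A hedged sketch of the recurrence-plot quantity discussed above: build the recurrence matrix R_ij = Theta(eps - ||x_i - x_j||), collect the diagonal line segments, and average their inverse lengths. The Lorenz parameters and the threshold eps below are illustrative; the text varies the control parameter between 20 and 40, and a chaotic value is used here.

```python
import numpy as np

def lorenz_trajectory(c=28.0, sigma=10.0, beta=8.0 / 3.0, dt=0.01, n=3000):
    """Integrate the Lorenz system with a fixed-step RK4 scheme; c plays the
    role of the parameter varied in the bifurcation discussion."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (c - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out[i] = s
    return out

def mean_inverse_diagonal_length(points, eps, min_len=2):
    """Average inverse length of diagonal line segments in the recurrence plot."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rec = d < eps
    lengths = []
    for offset in range(1, len(points)):        # diagonals above the main diagonal
        run = 0
        for v in np.append(np.diagonal(rec, offset=offset), False):
            if v:
                run += 1
            else:
                if run >= min_len:
                    lengths.append(run)
                run = 0
    return np.mean([1.0 / L for L in lengths]) if lengths else np.nan

if __name__ == "__main__":
    traj = lorenz_trajectory(c=28.0, n=1500)[500:]      # drop the transient
    eps = 0.1 * np.std(traj)
    print("<1/L> for the chaotic Lorenz flow:", mean_inverse_diagonal_length(traj, eps))
```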
in the following analysis ,we use embedding dimension , because the results do not seem dependent on and because , in the case of stochastic or high - dimensional dynamics , a topological embedding can not be achieved for any reasonable embedding dimension .this fact demonstrates the robustness of the recurrence - plot analysis , which responds to changes in the dynamics of the system even if it is a stochastic or high - dimensional system for which no proper phase - space reconstruction is possible .since reduction in means increase of predictability it may also be a signature of higher persistence in a stochastic signal .this motivates plotting and ( computed as a linear fit from the variogram over the time scales up to 12 hours ) for solar wind parameters and magnetic indices .figure 19 shows for tsym - h , , , tsym - h and detrended tsym - h averaged over 10 magnetic storms .figure 20 shows the same for , but detrended tsym - h is not shown since its changes insignificantly during the course of the storm .we observe that the increase in the predictability and persistence does not occur simultaneously for all observables .while , tsym - h , detrended tsym - h and tsym - h get the most predictable during or after the main phase of the storm , solar wind s flow speed becomes the most predictable _ prior _ to the storm s main phase . from a hundred realizations of the fo - u processgenerated numerically with the coefficients in the stochastic equation fitted to model the tsym - h signal , we find and , in good agreement with the results obtained from the tsym - h time series .the general relationship between and can also be explored through numerical realizations of fo - u processes .figure 21 shows computed for varying as a mean value of 100 realizations of such a process for each . for persistent motions ( )there is a linear dependence between and , and a best fit yields this analysis shows the importance of as a universal measure for predictability : in low - dimensional systems it is a proxy for the lyapunov exponent , while for persistent stochastic motions it is a measure of persistence through equation ( [ gamma - h ] ) .the storm index sym - h and the solar wind observables ( flow velocity and imf ) show no clear signatures of low - dimensional dynamics during quiet periods. however , low - dimensionality increases in sym - h and sym - h during storm times , indicating that self - organization of the magnetosphere takes place during magnetic storms .this conclusion is drawn from the study of ten intense , magnetic storms in the period from 2000 - 2003 . even though our analysis shows no discernible change in determinism during magnetic storms for solar wind parameters, there is an enhancement of the predictability of the solar wind observables as well as the geomagnetic storm indices during major storms .we interpret this as an increase in the persistence of the stochastic components of the signals . 
the increased persistence in the solar wind flow , prior to the storms main phase could indicate that is more important driver than during magnetic storms .this is consistent with a reexamination of the solar wind - magnetosphere coupling functions done by , who found that the most optimal function is of the form , where .also , it has been shown in through numerical simulation , that increased changes the magnetospheric response from a steadily convecting state to highly variable in both space and time .it has been shown in that the plasma sheet is the dominant source for the ring current based on the similarity in composition of the inner plasma sheet and ring current regions . during the main phase of the storm, ions from the plasma sheet are flowing to the inner magnetosphere on the open drift paths and then move to the dayside magnetopause . in this stormphase the ring current is highly asymmetric , as was experimentally shown by energetic neutral atom imaging ( see and references therein ) . during the recovery phase ,ions from the plasma sheet are trapped on closed drift paths , and form the symmetric ring current .therefore , the increase in determinism of the ring - current ( sym - h ) during storms implies increased determinism in the plasma sheet as well .a magnetic storm is a coherent global phenomenon investing a vast region of the inner magnetosphere , and implying large scale correlation .the counterpart of this increase of coherence is the reduction of the spontaneous incoherent short time scale fluctuations .consequently , one should expect a reduction of the free degrees of freedom which implies an increase of determinism , i.e. the possible emergence of a low - dimensionality .analysis of predictability shows significant differences between on one hand , and and sym - h on the other .while the former is a non - stationary , slightly anti - persistent motion up to time scales of approximately 100 minutes , and a pink noise on longer time scales , the latter are slightly persistent motions on scales up to several days and noises on longer time scales .these differences indicate the different role the solar wind and the velocity play in driving the substorm and storm current systems ; is important in substorm dynamics which will be studied in a separate paper , while is a major driver of storms .the authors acknowledge illuminating discussions with m. rypdal and b. kozelov .the authors would like to thank the kyoto world data center for and sym - h index , and cdaweb for allowing access to the plasma and magnetic field data of the omni source . also , comments of two anonymous referees are highly appreciated .abarbanel , h. ( 1996 ) , analysis of observed chaotic data , institute for nonlinear science , springer , new york .akasofu , s. i. ( 1965 ) , the development of geomagnetic storms without a preceding enhacement of the solar plasma pressure , _ planet .space sci . , _ _ 13 _ , 297 .angelopoulos , v. , t. mukai , s. kokobun ( 1999 ) , evidence for intermittency in earth s plasma sheet and implications for self - organized criticality , _ phys .plasmas , _ _ 6 _ , 4161 .bak , p. , c. tang , and k. wiesenfeld ( 1987 ) , self - organized criticality : an explanation of 1/f noise , _ phys .lett . , _ _ 59 _ , 381 .balasis , g. , i. a. daglis , p. kapiris , m , mandea , d. vassiliadis , k. 
eftaxias ( 2006 ) , from pre - storm activity to magnetic storms : a transition described in terms of fractal dynamics , _ ann .geophys , _ _ 24 _ , 3557 .beran , j .( 1994 ) , statistics for long - memory processes , _monographs on statistics and applied probability _ , chapman & hall / crc , boca raton .chang , t. ( 1998 ) , self - organized criticality , multi - fractal spectra , and intermittent merging of coherent structures in the magnetotail , _ astrophysics and space science _ , _ 264 _ , 303 .chapman , s.c ., n. watkins , r. o. dendy , p. helander , and g. rowlands ( 1998 ) , a simple avalanche model as an analogue for magnetospheric activity , _ geophys .lett . , _ _ 25 _ , 2397 .consolini , g. , and t. s. chang ( 2001 ) , magnetic field topology and criticality in geotail dynamics : relevance to substorm phenomena , _ space sci ., _ _ 95 _ , 309 .davies , t. n , and m. sugiura ( 1966 ) , auroral electrojet activity index ae and its universal time variations , _ j. geophys .res . , _ _ 71 _ , 785 .eckmann , j. p , s. o. kamphorst and d. ruelle ( 1987 ) , _ europhys ._ 5 _ , 973 .flandrin , p. , and p. gonalv ( 2004 ) , empirical mode decomposition as data - driven wavelet- like expansion , _ international journal of wavelets , multiresolution and information processing , _ _ 2 _ , 1 .franzke , c. ( 2009 ) , multi - scale analysis of teleconnection indices : climate noise and nonlinear trend analysis , _ nonlin .processes geophys . , __ 16 _ , 65 .gonzalez , w. d. , j. a. joselyn , y. kamide , h. w. kroehl , g. rostoker , b. t. tsurutani , and v. m. vasyliunas ( 1994 ) , what is a geomagnetic storm ? , _ j. geophys .res . , _ _ 99 _ , 5771 .huang , n.e , z. seng , s. r. long , m. c. wu , h. h. shih , q. zheng , n. yen , c. c. thung , h. h. liu ( 1998 ) , the empirical mode decomposition and the hilbert spectrum for nonlinear and non - stationary time series analysis , proc . r. soc .a _ 454 _ , 903 .kaplan , d. t. , and l. glass ( 1992 ) , direct test for determinism in a time series , _ phys ._ , _ 68 _ , 427 .kaplan , d. t. , and l. glass ( 1993 ) , coarse - grained embedding of time series : random walks , gaussian random processes , and deterministic chaos , _ physica d _ , _ 64 _ , 431 .klimas , a. , d. vassiliadis , d. n. baker , and d. a. roberts ( 1996 ) , the organized nonlinear dynamics of the magnetosphere , _ j. geophys .res . , _ _ 101 _ , 13 , 089 - 13,113 .loewe , c. a. , and g. w. prlss ( 1997 ) , clasiffication and mean behavior of magnetic storms , _ j. geophys .res . , _ _ 102 _ , 14,209 .marwan , n. , m. c. romano , m. thiel and j. krths ( 2007 ) , recurrence plots for the analysis of complex systems _ physics reports _ _ 438 _ , 237 .newell , p. t. , t. sotirelis , k. liou , c. i. meng , and f. j. rich ( 2006 ) , cusp latitude and the optimal solar wind coupling function , _j. geophys ._ , _ 111 _ , a09207 , doi:10.1029/2006ja011731 .nose , m. , s. ohtani , k. takahashi , a. t. y. lui . , r. w. mcentire , d. j. williams , s. p. christon , and k. yumoto ( 2001 ) , ion composition of the near - earth plasma sheet in storm and quiet intervals : geotail/ epic measurements , _ j. geophys ._ , _ 106 _ , 8391 .pulkkinen , t. i. , c. c. goodrich , and j. g. lyon ( 2007 ) , solar wind electric field driving of magnetospheric activity : is it velocity or magnetic field , _ geophys ._ , _ 34 _ , l21101 , doi:10.1029/2007/gl031011 .rilling , g. , p. flandrin , and p. 
goncalves ( 2003 ) , on empirical mode decomposition and its algorithms , ieee - eurasip workshop on nonlinear signal and image processing nsip-03 , grado(i ) .rypdal , m. , and k. rypdal ( 2010 ) , stochastic modeling of the ae index and its relation to fluctuations in of the imf on time scales shorter than substorm duration , _ j. geophys .res . , _ _ 115 _ , a11216 .rypdal , m. , and k. rypdal ( 2011 ) , discerning a linkage between solar wind turbulence and ionospheric dissipation by a method of confined multifractal motions , _ j. geophys .res . , _ _ 116 _ , a02202 .kozyra , j. u. , and m. w. liemohn ( 2003 ) , ring current energy input and decay , _ sp . sci ._ , _ 109 _ , 105 .sharma , a. s. , d. vassiliadis , and k. papadopoulos ( 1993 ) , re - construction of low - dimensional magnetospheric dynamics by singular spectrum analysis , _ geophys ._ , _ 20 _ , 335 .sharma , a. s. , a. y. ukhorskiy , and m. sitnov ( 2003 ) , modeling the magnetosphere using time series data , _ geophysical monograph _, _ 142 _ , 231 .sitnov , m. i. , a. s. sharma , and k. papadopoulos ( 2001 ) , modeling substorm dynamics of the magnetosphere : from self - organization and self - organized criticality to nonequlibrium phase transitions , _ phys .e , _ _ 65 _ , 016116 .takens , f. ( 1981 ) , detecting strange attractors in fluid turbulence , in : dynamical systems and turbulence , edited by d. rand , and l. s. young , springer , berlin .trulla , l. l , a. guliani , j. p. zbilut , and c. l. webber , jr .( 1996 ) , recurrence quantification analysis of the logisitc equation with transients , _ phys ._ 223 _ , 255 .vassiliadis , d. v , a. s. sharma , t. e. eastman , and k. papadopoulos ( 1990 ) , low - dimensional chaos in magnetospheric activity from ae time series , _ geophys ._ 17 _ , 1841 .wanliss , j. a. , and k. m. showalter ( 2006 ) , high - resolution global storm index : versus sym - h , _ j. geophys .res . , _ _ 111 _ , a02202 .watkins , n. w. , ( 2002 ) , scaling in the space climatology of the auroral indices : is soc the only possible description , _ nonlin .processes in geophys . , _ _ 9 _ , 389 .wu , b. z , and n. e. huang ( 2004 ) , a study of the characteristics of white noise using the empirical mode decomposition method , proc .r. soc . lond .a , _ 460 _ , 1597 .wu , b. z , n. e. huang , s. r. long , c.k .peng ( 2007 ) , on the trend , detrending , and variability of nonlinear and non - stationary time series , pnas , _ 104 _ , 38 .
the storm index sym - h , the solar wind velocity , and the interplanetary magnetic field show no signatures of low - dimensional dynamics in quiet periods , but tests for determinism in the time series indicate that sym - h exhibits a significant low - dimensional component during storm time , suggesting that self - organization takes place during magnetic storms . even though our analysis yields no discernible change in determinism during magnetic storms for the solar wind parameters , there is a significant enhancement of the predictability and of the exponents measuring persistence . thus , magnetic storms are typically preceded by an increase in the persistence of the solar wind dynamics , and this increase is also present in the magnetospheric response to the solar wind .
in a large number of internet traffic measurements many authors detected self - similarity . self - similarity is usually attributed with heavy - tailed distributions of objects at the traffic sources , e.g. sizes of transfered files or time delays in user interactions .recently the dynamical origins of self - similarity also attract increasing attention .chaotic maps are known to generate fractal properties , and it has been shown , that the exponential backoff algorithm of the tcp protocol can produce long range dependent traffic under some special circumstances .veres emphasized the chaotic nature of the transport , and fekete gave an analytical explanation that this chaoticity is in close relation with the loss rate at highly utilized buffers due to packet drops .guo gave a markovian analysis of the backoff mechanism , and showed , that the self - similarity can be observed only at very high loss rates .in this paper we study a lossless network , where the traffic sources are _ not _ heavy - tailed distributed . even though we observe self - similar traffic , which is generated by the communicating tcps themselves .we show , that the random switching between the destinations of the flows together with the complex dynamics of interacting tcps can lead to long range dependency .the interaction between tcps at the buffers leads to backoff phases in the individual tcps causing fractal traffic in an environment where the loss rate is much below the lower bound of self - similarity .the outline of the paper is the following : first we introduce the concept of real and effective loss .then we present a simple lossless model network , where self - similar traffic is observed , however the necessary conditions discusssed in the literature cited above , are not satisfied .next we show that similar scenario can be found in real networks as well .finally we conclude our results .in the internet traffic many individuals use a common , finite resource to transmit information . if the resources are exhausted ( e.g. routers are congested ), data throughput is not possible .therefore data transmitters should avoid congestion on shared information routes .most of today s computer programs use similar algorithm to avoid congestion : they apply basicly the same tcp protocol with slight differences . the common concept in everytcp is , that the data sending rate must be adapted to the actually available resources .every tcp starts with a blind measuring phase ( slow start ) , which exponentially reaches the maximum throughput rate .if the route , where tcp sends its data is stationary utilized , the algorithm works in a high throughput slow adaption phase ( congestion avoidance ) .the sending rate is varied around a high value by slowly increasing and rapidly decreasing it .since every sent packet and its received acknowledgement continuously measure the transmission possibilities , this phase is very stable and can adapt to slowly varying situations . if the route gets highly loaded , the tcp tries to clear the congestion by decreasing the sending rate .if the congestion is so high , that the tcp can not guess the proper sending rate ( acknowledgements do not arrive in the timeout period ) , the algorithm enters a very slow sending phase ( exponential backoff ) . 
in this phase due to the lack of information an exponentially slowing algorithmis applied to try to find a new possible sending rate : the last package is resent after exponential increasing time intervals until an acknowledgement received or a maximum time interval is reached . in this paperwe concentrate on the backoff phase of the tcp .we will show , that due to its blind nature , in this phase the tcp can feel higher loss rates as it really is . by the blindness of the tcp we mean the consequence of karn s algorithm , which governs the backoff phase . under normal transmission conditions tcp operates in slow start or in congestion avoidance mode . in these modesthe tcp estimates the optimal sending rate from the arrival time of the acknowledgements ( ack ) by calculating the average round trip time ( srtt ) and its average deviation from the mean ( mdev ) .after each received ack the tcp estimates the retransmission timeout ( rto ) .if this timeout is exceeded between sending a packet and receiving an acknowledgement for it , the tcp retransmits the packet assumed to be lost ( by real loss or timeout ) .in this situation tcp applies the so called karn s algorithm .the karn s algorithm specifies that the acknowledgments for retransmitted data packets can not be used to approximate the sending rate . since for a received ack packetone can not decide if it is the ack of the original or of the retransmitted packet , the round trip time ( rtt ) and so the sending rate can not be estimated .the rtt can be calculated only for those packets , which are not retransmitted .so the tcp retransmits the packet and doubles the rto calculated from the previous flow - informations ( backoff phase ) .if the retransmitted packet timeouts again , the rto is doubled and the packet is retransmitted again . the rto is increased up to a maximal value defined in the protocol .the tcp leaves the backoff phase only if the rtt can be estimated without ambiguity : the tcp must receive the acknowledgements of two consecutive sent packets .we will show a situation where this method reports reasonably higher loss rate for the tcp as it really is .we distinguish the loss into real or virtual ._ real _ loss is referred to dropped packets which either are not arrived to the destination or the acknowledgment for it do not arrive to the sending tcp .we call a loss to be virtual if the acknowledgment arrives after the retransmission timeout ( rto ) period , so the packet is retransmitted due to a spurious timeout. the _ effictive _ loss is assembled from the real and virtual losses .this distinction is important , since real loss emerges at highly congested buffers or at low quality lines ( e.g. radio connections ) .these situations can be solved by improving the hardware conditions .in contrast , high virtual loss can evolve also under very good hardware conditions from heavily fluctuating background traffic . 
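Below is a minimal sketch of the timeout logic described above, in the spirit of the classic smoothed-RTT/mean-deviation estimator with Karn's rule (RTT samples from retransmitted segments are discarded) and exponential backoff of the RTO up to a ceiling. The gains and the RTO bounds are illustrative constants, not those of any particular TCP implementation.

```python
class RtoEstimator:
    """Simplified SRTT/MDEV-based RTO with Karn's rule and exponential backoff."""

    def __init__(self, init_rto=3.0, min_rto=0.2, max_rto=120.0):
        self.srtt = None      # smoothed round-trip time
        self.mdev = 0.0       # mean deviation of the RTT
        self.rto = init_rto
        self.min_rto, self.max_rto = min_rto, max_rto
        self.backoff = 0      # number of consecutive RTO doublings

    def on_ack(self, rtt, retransmitted):
        """Karn's rule: ignore RTT samples from retransmitted segments."""
        if retransmitted:
            return
        if self.srtt is None:
            self.srtt, self.mdev = rtt, rtt / 2.0
        else:
            err = rtt - self.srtt
            self.srtt += 0.125 * err                    # gain 1/8, classic estimator
            self.mdev += 0.25 * (abs(err) - self.mdev)
        self.rto = min(max(self.srtt + 4.0 * self.mdev, self.min_rto), self.max_rto)
        self.backoff = 0

    def on_timeout(self):
        """Enter or deepen the backoff phase: double the RTO up to the ceiling."""
        self.backoff += 1
        self.rto = min(2.0 * self.rto, self.max_rto)

if __name__ == "__main__":
    est = RtoEstimator()
    for rtt in (0.11, 0.12, 0.10, 0.13):
        est.on_ack(rtt, retransmitted=False)
    print("RTO after smooth traffic:", round(est.rto, 3))
    for _ in range(3):                      # three timeouts in a row
        est.on_timeout()
    print("RTO after backing off   :", round(est.rto, 3), "backoff =", est.backoff)
```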
on a route with several routers , where the packets can stay in a long queue, round trip times can change in a wide range depending on the queuing time .the queuing time depends on the saturation of the buffers on the route .if the background traffic fills the buffers at a varying rate , the queueing time , and so the round trip time varies also .bursty background traffic can fill the buffers rapidly to a high value , and after that can leave it to be cleared out .if the round trip time increases to such a high value due to a rapid filling up , that it becomes larger than the retransmission timeout value , a virtual loss occurs . after a burst which caused the virtual lossthe clearing out of the buffer will lead to a shorter round trip time , which decreases the rto value also .so for the next burst event the rto is not large enough that the tcp can receive the ack packet .so another virtual loss occurs without really loosing the sent packets .we will show in a model network and in a real measurement , that long range dependent traffic can emerge from the virtual losses due to the bursty background , however , real packet loss rate is so low , that one would expect a scalable simple traffic rate .in this section we present a simple model network , which shows self - similar traffic . our model differs in several aspects from previous studies in the literature excluding the known reasons of self - similarity . in our modelthree hosts transfer fixed sized files to each other through a router .all hosts transfer files with the same size .the topology of the model is shown in fig .[ fig - topol ] . from each numbered sites of the networka tcp flow is initiated to one of the other numbered sites .each tcp flow passes through the router full duplex connections , so the flow of the acknowledgements do not interfere with the corresponding tcp data flow .however data from other tcps must share the same buffers and lines with acknowledgements .we have chosen the network to be practically lossless : the buffer length in the router was set so large , that it is very improbable that tcp flows fill them .all the six buffers for the three full duplex lines are large enough to store all the files transfered between the hosts at a moment .there is no external packet loss on the lines as well .we will study the traffic properties on a _ line _ connecting a chosen host with the router .so the packet flows we want to analyze are initiated from a fixed chosen host and they are built up from several successive tcp flows . in this topologythe traffic is not always self - similar .the throughput of packets on a line can be regular if the destination of the individual tcp flows is chosen on a regular way .an example is shown in fig .[ fig - throughput]a , where the tcp flows has been generated with the following simple rule : from the host numbered by ( ) the tcp sends packets to host . after a file has been transmitted , the host starts a new tcp flow _ immediately _ , there is no external random delay between the flows as it would be if we took the user behavior into account .under such regular sending rules the tcps can utilize the available bandwidth and the traffic has a scalable periodic pattern . 
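A toy illustration (not the ns-2 model used in the paper) of the virtual-loss mechanism: when a background burst suddenly inflates the queueing delay, the measured RTT can exceed the RTO computed from the recent, shorter RTTs, so the segment is retransmitted although nothing was dropped. The burst sizes and timing below are arbitrary assumptions.

```python
import numpy as np

def count_virtual_losses(rtts, alpha=0.125, beta=0.25, k=4.0, min_rto=0.2):
    """Count spurious timeouts: packets whose RTT exceeds the RTO derived from
    previously observed (smoothed) RTTs, although nothing was dropped."""
    srtt, mdev = rtts[0], rtts[0] / 2.0
    virtual = 0
    for rtt in rtts[1:]:
        rto = max(srtt + k * mdev, min_rto)
        if rtt > rto:
            virtual += 1          # delivered, but retransmitted: a virtual loss
        err = rtt - srtt
        srtt += alpha * err
        mdev += beta * (abs(err) - mdev)
    return virtual

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    base = 0.05 + 0.005 * rng.standard_normal(5000)       # quiet-period RTTs
    queue = np.zeros(5000)
    bursts = rng.choice(5000, size=20, replace=False)     # rare background bursts
    for b in bursts:
        ramp = np.linspace(0.5, 0.0, 50)
        queue[b : b + 50] += ramp[: len(queue[b : b + 50])]
    print("virtual losses out of 5000 packets:", count_virtual_losses(base + queue))
```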
in fig .[ fig - throughput]a we show the congestion window for a host .we have implemented this simple network in the ns-2 simulator ( version number : 2 .the link parameters are : link rate 1mbps , delay 1ms .file size of a tcp flow was 1000 packet .the receiver window was much larger than the file sizes .we have used the reno version of tcp . if we introduce stochasticity in the sending rules we arrive at a non - scalable , long range dependent traffic .we applied the following rules to generate self - similar packet transport .all hosts send fixed size files to each other .each host uses the same file size .if a host starts a new tcp , it randomly chooses to which host to send the file .after a transmission is completed , the host chooses the next destination immediately .the next destination is chosen randomly again without silent period between consecutive tcp flows . in fig .[ fig - throughput]b we show , that the stochasticity inhibits the tcps to synchronize and the packet transport becomes irregular .the size of the transfered files was chosen not too large to hinder the tcps to adapt to each other .we investigate now the irregular packet traffic if it shows self - similarity or not .self - similarity can be tested by investigating the second order statistics of the traffic .consider a weakly stationary process , with constant mean and autocorrelation function .let denote the aggregated series of .the process is self - similar if , and is second order self - similar if has the same variance and autocorrelation as .the sign expresses the fact that the equality can be satisfied only in a stochastic sense , exact equation can only be used for abstract mathematical objects .we have performed self - similarity test by using the variance time method . in fig .[ fig - aggtimemod ] we plot the variance of the aggregated time series of the packets which scales as the fitted line in the figure indicates hurst exponent showing that the time series is self - similar since .we emphasize again , that the time series under consideration is built up from several consecutive tcp flows .if a traffic is self - similar it shows properties which differs from ones of memory - less ( markovian ) processes : the dynamics of the system shows a strong history dependence .this long range dependence in the time evolution manifests itself typically in heavy - tailed distributions .a distribution is heavy - tailed if asymptotic decay of the distribution is slow : a power - law with exponent less than two . due to the always existing upper bounds of the measured data it is enough if the decay holds in the last decades below the maximum value : ,\ { \rm and } \ n>2.5\ .\ ] ] such distributions are called heavy - tailed , since occurrence of values much larger than the mean value of the observable is not negligible in contrast to commonly used distributions as gaussian or exponential .however in measured time series it can happen , that from the tail we can not detect so many events as it is needed to plot a smooth distribution function . in these casesit is favorably to work with the _ cumulative _ distribution , which has an asymptotic behavior as .therefore one should use the inverse cumulative function to fit the parameter on the logarithmic plot .now we want to investigate if the long range dependency shows up in the traffic .we consider only the case when a destinations of the tcps were chosen randomly . 
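The variance-time test aggregates the series into non-overlapping blocks of size m and examines how the variance of the block means decays with m; for a self-similar process Var(X^(m)) scales as m^(2H-2), so the log-log slope gives H. The sketch below assumes that standard procedure, with white noise as a check case for which H is about 0.5.

```python
import numpy as np

def variance_time(x, block_sizes):
    """Variance of the aggregated (block-averaged) series for each block size m."""
    out = []
    for m in block_sizes:
        n_blocks = len(x) // m
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        out.append(blocks.var())
    return np.array(out)

def hurst_variance_time(x, block_sizes):
    """Fit Var(X^(m)) ~ m^(2H - 2); slope = 2H - 2, hence H = 1 + slope / 2."""
    v = variance_time(x, block_sizes)
    slope, _ = np.polyfit(np.log(block_sizes), np.log(v), 1)
    return 1.0 + slope / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    white = rng.standard_normal(2 ** 18)      # short-range dependent reference
    sizes = (2 ** np.arange(2, 12)).astype(int)
    print("H for white noise (expect ~0.5):", round(hurst_variance_time(white, sizes), 3))
```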
in fig .[ fig - decaymod ] we plot the inverse cumulative distribution of the packet inter arrival time on a down link .the distribution shows a slow decay with which indicates that the fluctuating traffic has long silent periods .a power law decaying fluctuation can be characterized by the hurst exponent if the traffic is viewed as an on - off process .the silent periods are the off intervals of the process .the hurst parameter is known for this type of process : , which gives similar result as calculated from the variance time plot in fig .[ fig - aggtimemod ] . in the followingwe look for the origin of the long range dependence found above . in our modelthe topology and the tcp flow generating rules are chosen such a way , that the link / source conditions for self - similarity are excluded . in network - side models the adaptive nature of tcp , namely the tcp backoff state mechanism is pointed out as a main origin of such behavior .we investigate now if there is a close relation between the self - similarity of the traffic , and backing off of the tcp . in the backoffphase tcp retransmits the lost packet and doubles the rto .tcp keeps track of the doubling times by a backoff variable . in the non - backoff phases , and in backoff shows how many times the rto counter has been doubled .due to karn s algorithm the rto is doubled until two consecutive packet receives its acknowledgement .first we recall shortly , that a tcp flow in the backoff phase produces heavy - tailed statistics in the packet inter arrival time .a tcp in a given backoff state waits for a period between two packet sending attempts .the -th backoff state occurs only after successive packet losses .let s denote the packet retransmission probability ( effective loss ) with .the probability of consecutive packet retransmission is .hence the probability of a silent period due to backoffs , decays as , where .next we repeat the main idea of a markovian chain model for backoff states and show , that the statistics of backoff states delivers the average effective loss probability .let denote the probability that the tcp is in a deep backoff . in a simplified markovian framework one can estimate the by the transition probabilities between backoff states as follows ( for a detailed matrix representation see .the rto value is doubled if one of two successive packets do not receive ack and is retransmitted .if the retransmission probability is the transition probability to a deeper backoff is .this yields a backoff probability decay to be and one can read off the average loss probability from the gradient of the semilogarithmic plot of versus .we emphasize here , that the loss probability measured by the probability of backoff states is the effective loss felt by the tcp .this probability can be much larger as the real loss .this is the case in our model , since the real loss is below , however , the effective loss is about . a typical backoff distribution for our stochastic model is shown in fig .[ fig - backstatmod ] .this gives us the possibility to demonstrate the connection between long range dependency and the backoff distribution .one compares the probability calculated from the backoff statistics and the inter packet arrival time decay factor calculated from the packet traffic time series .the two value agrees as , hence the long range dependency is caused mainly by the backoff mode of the tcp ( and not by other external reasons as e.g. 
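The tail exponent is read off from the inverse cumulative distribution P(T > t) on a log-log plot. The sketch below performs that fit on synthetic Pareto data and applies the standard on-off-traffic conversion H = (3 - alpha)/2 quoted (in garbled form) above; the tail fraction used for the fit is an arbitrary choice.

```python
import numpy as np

def ccdf(samples):
    """Empirical complementary CDF P(T > t) at the sorted sample points."""
    s = np.sort(samples)
    p = 1.0 - np.arange(1, len(s) + 1) / len(s)
    return s[:-1], p[:-1]            # drop the last point, where the CCDF is zero

def tail_exponent(samples, tail_fraction=0.1):
    """Least-squares slope of log P(T > t) vs log t over the upper tail."""
    t, p = ccdf(samples)
    cut = int((1.0 - tail_fraction) * len(t))
    slope, _ = np.polyfit(np.log(t[cut:]), np.log(p[cut:]), 1)
    return -slope                    # alpha, the power-law decay exponent

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    alpha_true = 1.4
    samples = rng.pareto(alpha_true, 200_000) + 1.0       # Pareto tail ~ t^-alpha
    alpha = tail_exponent(samples)
    print("fitted alpha:", round(alpha, 2),
          " implied H = (3 - alpha)/2 =", round((3 - alpha) / 2, 2))
```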
long range distributed user interaction ) .we have demonstrated the connection between the long range dependency , self - similarity and backoff mechanism .finally we search for the origins of backing off the tcp .our model by construction excludes the known origins of self - similarity : the tcp flows follow each other immediately and transfer data at a given rate without external silent periods as e.g. would be the case with user - generated interactive flow .the transfered file sizes are constant .the buffer lengths are much above the critical chaotic threshold value .the only stochasticity is in the switching between the destinations of the tcp flows .this irregularity introduces some unpredictability in the data flow .if this unpredictability is high , the tcp estimation for the available sending rate is false .the consequences of unpredictability has been studied from many aspects , however all the previous studies require a case when the high real loss probability ( due to small buffers or external loss ) hinders the tcp to make sufficient predictions . herewe presented a model , where the stochastic choosing of destination itself pushes tcp into the backoff phase and generates self - similar traffic .how can this happen ?tcp operates in backoff , if the ack packet arrive after the limit set by rto .the rto value is calculated from the traffic continuously , using an average over some time - window .if the traffic is bursty , with silent periods comparable with size of the averaging window , the tcp can not adapt to the rapid changes in the traffic . in our modelwe detect heavy bursts in the queue lengths in the router .since tcps changes the destination randomly , it can happen , that after a silent period a buffer will be fed by one or two tcp. if these tcps are in slow start , the feeding of a buffer can be exponential fast .the queue lengths can hence grow very rapidly .if a queue gets longer , packets arriving in this queue must wait longer .a rapid change in the queue length can cause a so rapid change in the round trip time of a packet , that the ack for this packet arrives after the rto expires .so large fluctuations in the queue length ( background traffic ) can cause a series of virtual losses and backing off the tcp . in fig .[ fig - queuetime ] we show a typical queue length time plot , where the large fluctuations cause backoff phase in a tcp .there is a clear relation between the increasing queue length and the evolution of backoff states . since in our modelonly the heavily fluctuating background traffic can back off a tcp , we can conclude to identify the fluctuating background as a source of self - similarity .this self - similarity is a self - generated one , originating from the improper synchronization of hosts , which continuously send data to each other by using many successive tcp flows .in this section we present active measurement results which show similar results in a real network environment as found in the previous section in a small model scenario .the time evolution of a long tcp flow on a transcontinental connection was followed on the ip level by tcpdump and on the kernel level by a slightly modified linux kernel from the 2.2.x series .the modified kernel gave us the possibility to follow directly the internal tcp variables _ in situ _ for a real network environment .on the transcontinental line chosen for the measurement typically many tcp connections share the same route giving a highly fluctuating background traffic . 
additionally on the long line with many routersit is possible that the packets of our tcp flow stacks in filled queues .so the round trip time can fluctuate in a very wide range resulting many backoff states .figure [ fig - backtimemea ] shows a very congested time interval , where many backoff states were observed .here we mention , that in contrast to the tcp implementations of ns-2 , the backoff variable of the linux kernel can have larger values than 6 .as described in the previous section the self - similarity is characterized by the hurst parameter , if the stochastic process under consideration is weakly stationary . to satisfy this conditionwe restrict our analysis only for some parts ( time intervals ) of the whole measurement . in the time range under studythe highly congested traffic showed self - similar nature .the variance time plot for the aggregated time series of packet arrivals is plotted in figure [ fig - vartimmea ] , from which we can read off the hurst parameter . in fig .[ fig - statintmea ] we show the statistical distribution of packet inter arrival times , which show an decay giving a similar value for the hurst parameter as calculated from the variance time plot . since we do not have total control over the whole internet, we can not prove rigorously that the observed self - similarity is the consequence exclusively of the fluctuations in the background traffic as it is in the simulation scenario presented in the previous section .however it is possible to show , that as in the simulation there is a close relation between the inter packet time statistics and the backoff statistics under such conditions where the real packet loss is low , indicating self - generated self - similarity .here we investigate first , what was the loss rate at the line . in end - to - end measurements packet losscan be easily detected by analyzing tcpdump data .but to gain this direct information about the traffic , one needs special rights on the origin of the tcp flow and on the destination as well .this ideal condition is given usually only for a very restricted number of routes .in most cases one can monitor the network traffic only on one side as it was the case in our measurement .we applied the algorithm of benko et.al. with some improvement to detect packet losses from tcpdump data , and to decide if the packet is lost really or timeout occurred .the algorithm is the following .an effective loss occurs , if a packet is resent .a resent packet begins with the same serial number as the original packet , so we have to count the number of packets , whose sequence number occurred more than once in the tcp flow .we used timestamps to avoid the wrapped sequence number problem .detecting real loss events is a bit more tricky .a sent packet is probably lost if the tcp receives duplicate acknowledgement .duplicate acks are sent by the receiving tcp if a packet with higher sequence number has arrived .however this can happen due to changes in the packet order also , therefore the tcp waits a little and retransmits the packet only if packet order switching is very improbable .hence for detecting real loss events we have to count the number of resent packets , which are sent due to receiving of duplicate acks .previously we mentioned that the background traffic during the whole measurement can not be approximated by weakly stationary stochastic processes and for analysis one has to consider only parts of the data . in thisparts the flow can be characterized by static parameters e.g. 
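Below is a minimal sketch of the counting logic described in the next paragraph, operating on an already-parsed trace (records with a timestamp, the segment's sequence number, and a flag marking retransmissions that followed duplicate ACKs). The record layout is an assumption; a real implementation would parse tcpdump/pcap output and guard against sequence-number wrap using the timestamps, as the text notes.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SentPacket:
    time: float
    seq: int                  # first sequence number of the segment
    after_dup_acks: bool      # True if this (re)transmission followed duplicate ACKs

def loss_statistics(trace):
    """Effective loss = fraction of segments sent more than once;
    real loss = retransmissions triggered by duplicate ACKs (likely genuine drops);
    virtual loss = the remainder (spurious timeouts)."""
    sends = Counter(p.seq for p in trace)
    distinct = len(sends)
    retransmissions = sum(c - 1 for c in sends.values())
    real = sum(1 for p in trace if p.after_dup_acks)
    return {
        "effective_loss": retransmissions / distinct,
        "real_loss": real / distinct,
        "virtual_loss": (retransmissions - real) / distinct,
    }

if __name__ == "__main__":
    trace = [
        SentPacket(0.00, 1000, False),
        SentPacket(0.05, 2000, False),
        SentPacket(0.40, 2000, False),   # resent after a spurious timeout
        SentPacket(0.45, 3000, False),
        SentPacket(0.90, 3000, True),    # resent after duplicate ACKs: real loss
    ]
    print(loss_statistics(trace))
```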
the loss ratio is constant in time .these intervals can not be too short to have enough data for statistical considerations . in fig .[ fig - lossmea ] we plot the loss probability versus time for the whole measurement .one can see long plateaus however there are non - stationary regimes as well .in the following we restrict ourself only for the longest stationary range .we investigated the statistics of the backoff states for this time regime from the data logged by the modified linux kernel .we found , that the distribution shows an exponential decay as it follows from the markovian description presented in the previous section .the fig .[ fig - backstatmea ] shows the decay of the probability of the backoff states .the slope of the fitted line indicates a loss probability felt by the tcp .this loss rate is consistent with the asymptotic decay of the packet inter - arrival times ( fig .[ fig - statintmea ] ) and with the hurst parameter of the aggregated traffic ( fig .[ fig - vartimmea ] ) .so the close relation between the backoff states and the self - similarity of the traffic holds . the next question is , if the tcp is backed off due to real packet losses or the packets where only delayed and timed out . in fig .[ fig - lossbackmea ] we compare the loss ratio from the backoff statistics ( ) with the loss probability calculated from the tcpdump output .we find , that the average loss probability felt by the tcp equals with the _ real plus virtual _( effective ) loss and not with the _ real _ loss alone . herethe difference between the two type of losses is crucial , since the real loss is smaller than , the lower bound of loss probability , which gives self - similar traffic , but the effective loss is higher .we have demonstrated in a model network and in a real measurement how tcp can generate self - similar traffic itself .we showed that at very low packet loss rate the congestion control mechanism detects false packet losses if the background traffic is bursty . on this fluctuating background traffic tcpresponds with backoff states .the switching between backoff and congestion avoidance phases introduces further fluctuations into the overall traffic , which results additional backoffs .this self - generated burstiness evolves to self - similarity however the network properties indicate simple , easily predictable traffic . in the future we focus on the self - generation of the burstiness , what are the exact conditions for emergence of self - similarity in perfect networkg. v. thanks the support of the hungarian science fund otka t037903 and t 032437 .p. thanks the support of the hungarian science fund otka d37788 , t34832 , t42981 .k.park , g.kim and m.crovella , `` on the relationship between file sizes , transport protocols , and self - similar network traffic '' , in proceedings of the international conference on network protocols , pp .171 - 180 , oktober 1996
self - similarity in network traffic has been studied from several aspects : both at the user side and at the network side there are many sources of long range dependence . recently , some dynamical origins have also been identified : the tcp adaptive congestion avoidance algorithm itself can produce chaotic and long range dependent throughput behavior if the loss rate is very high . in this paper we show that there is a close connection between the static and dynamic origins of self - similarity : parallel tcps can generate the self - similarity themselves , introducing heavy fluctuations into the background traffic and producing a high effective loss rate that causes a long range dependent tcp flow , even though the dropped packet ratio is low .
in recent years the identification and categorization of networks has become an emerging research area in fields as diverse as sociology and biology , but has remained relatively unutilized in software engineering .the study and categorization of software systems as networks is a promising field , as the the identification of networks in software systems may prove to be a valuable tool in managing the complexity and dynamics of software growth , which have traditionally been problems in software engineering .however , current trends in software development offer diverse and accessible software to study , which may help software engineers learn how to create better programs .in particular , open - source software ( oss ) allows researchers access to a rich set of examples that are production - quality and studiable `` in the wild '' .they are a valuable asset that can aid in the study of software development and managing complexity . in oss systems ,applications are often distributed in the form of packages .a package is a bundle of related components necessary to compile or run an application . because resource reuse is naturally a pillar of oss , a package is often dependent on some other packages to function properly .these packages may be third - party libraries , bundles of resources such as images , or unix utilities such as grep and sed .package dependencies often span across project development teams , and since there is no central control over which resources from other packages are needed , the software system self - organizes in to a collection of discrete , interconnected components .this research applies complex network theory to package dependency networks mined from two oss repositories .a network is a large ( typically unweighted and simple ) graph where denotes a vertex set and an edge set .vertices represent discrete objects in a dynamical system , such as social actors , economic agents , computer programs , or biological producers and consumers .edges represent interactions among these `` interactons '' .for example , if software objects are represented as vertices , edges can be assembled between them by defining some meaningful interaction between the objects , such as inheritence or procedure calls ( depending on the nature of the programming language used ) .real - world networks tend to share a common set of non - trivial properties : they have scale - free degree distributions and exhibit the small - world effect .the degree of a vertex , denoted , is the number of vertices adjacent to , or in the case of a digraph either the number of incoming edges or outgoing edges , denoted and , respectively . 
in real - world networks such as the internet , the world - wide web , software objects , networks of scientic citations , the distribution of edges roughly follows a power - law : .that is , the probability of a vertex having edges decays with respect to some constant .this is significant because it shows deviation from randomly constructed graphs , first studied by erd and r and proven to take on a poisson distribution in the limit of large , where .random connection models also fail to explain the `` small - world effect '' in real networks , the canonical examples being social collaboration networks , certain neural networks , and the world - wide web .the small - world effect states that and where is the _ clustering coefficient _ of a graph , and is the s _ characteristic path length _ .the clustering coefficent is the propensity for neighbors of a vertex to be connected to each other . for a vertex , we can define the clustering coefficent as , and therefore $ ] .the clustering coefficient for a graph is the average over all vertices , .real - world networks are normally highly clustered while random networks are not , because for large networks . because most networks are sparse ,that is , random networks are not highly clustered . is the average geodesic ( unweighted ) distance between vertices . to summarize, random graphs are not small - world because they are not highly clustered ( although they have short path lengths ) and they are do not follow the commonly observed power - law because the edge distribution is poissonian . the presence of these features in networks indicate non - random creation mechanisms , which although several models have been proposed , none is agreed upons . in order to make accurate hypothesis about possible network creation mechanisms , a wide variety of real - world networks sharing these non - trivial properties should be identified .previous research in networks of software have focused on software at``low '' levels of abstraction ( relative to the current research ) .clark and green found zipf distributions ( a ranking distribution similar to the power - law , which is also found in word frequencies in natural language ) in the structure of cdr and car lists in large lisp programs during run - time . in the case of object - oriented programming languages ,several studies have identified the small - world effect and power - law edge distribution in networks of objects or procedures where edges represent meaningful interconnection between objects , such as inheritence or in the case in procedural languages , procedures are represented as vertices and edges between vertices symbolize function calls .similar statistical features have also been identified in networks where the vertices represent source code files on a disk and edges represent a dependency between files ( for example , in c and c++ one source file may _ # include _ another ) , and in documentation systems .mining the debian gnu / linux software repository and the freebsd ports collection has allowed us to create networks of package dependencies . 
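A sketch of the small-world check described here: compute the clustering coefficient C and the characteristic path length L of the largest component and compare them with the random-graph reference values C_rand of order k/n and L_rand of order ln n / ln k. The paper itself used the JUNG framework in Java; the Python/networkx version below is only an illustration, with a synthetic graph standing in for the mined repository data.

```python
import math
import networkx as nx

def small_world_summary(g):
    """Clustering coefficient and characteristic path length of the largest
    component, together with the usual random-graph reference values."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    n = giant.number_of_nodes()
    k_mean = 2.0 * giant.number_of_edges() / n
    return {
        "C": nx.average_clustering(giant),
        "L": nx.average_shortest_path_length(giant),
        "C_random": k_mean / n,
        "L_random": math.log(n) / math.log(k_mean),
    }

if __name__ == "__main__":
    # Illustrative stand-in for a mined dependency graph (undirected for C and L).
    g = nx.barabasi_albert_graph(2000, 3, seed=7)
    print(small_world_summary(g))
```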
in the case of the debian repository, data was taken from the i386 branch of the `` unstable '' tree , which contains the most up - to - date software and is the largest branch .the debian data was extracted using _ apt _( advanced packaging tool ) , while the bsd data was extracted from the ports _ index _ system .the bsd ports system allowed us to distinguish between run - time dependencies and compile - time ( build ) dependencies .the data here is for only compile - time dependencies , although results are similar for run - time dependencies .graphs were constructed in java using the java universal network / graph framework .`` snapshots '' of the repositories were taken during the month of september , 2004 .the debian network contains packages and edges , giving each package an average coupling to packages . for the debian network , and puts the debian network in the small - world range , since an equivalent random graph would have and .there are 1,945 components , but the largest component contains 88% of the vertices .the rest of the vertices are disjoint from each other , resulting in a large number of components with only 1 vertex .the diameter of the largest component is 31 .the distribution of outgoing edges , which is a measure of dependency to other packages , follows a power - law with .the distribution of incoming edges , which measures how many packages are dependent on a package , follows a power - law with .while 10,142 packages are not referenced by any package at all , the most highly referenced packages are referenced thousands of times .73% of packages depend on some other package to function correctly .correlation between , , and package size is not calculated because the normality assumption is violated . [ cols="<,<,<",options="header " , ] the bsd compile - time dependency network contains packages and edges , coupling each package to an average of other packages . for the bsd network , and .an equivalent random graph would have and .hence , the bsd network is small - world .the degree distribution of the bsd network also resembles a power - law , with and . for the run - time network ,results were similar : the run - time network is both small - world and follows a power - law .in the debian network , the 20 most highly depended - upon packages are libc6 ( 7861 ) , xlibs ( 2236 ) , libgcc1 ( 1760 ) , zlib1 g ( 1701 ) , libx11 - 6 ( 1446 ) , perl ( 1356 ) , libxext6 ( 1110 ) , debconf ( 1013 ) , libice6 ( 922 ) , libsm6 ( 919 ) , libglib2.0 - 0 ( 859 ) , libpng12 - 0 ( 622 ) , libncurses5 ( 616 ) , libgtk2.0 - 0 ( 615 ) , libpango1.0 - 0 ( 610 ) , libatk1.0 - 0 ( 602 ) , libglib1.2 ( 545 ) , libxml2 ( 538 ) , libart-2.0 - 2 ( 524 ) , and libgtk1.2 ( 474 ) .the number in parentheses represents the number of incoming edges .the list is composed mainly of libraries that provide some functionality to programs such as xml parsing or that provide some reusable components such as graphical interface widgets .because the most highly - connected package ( libc6 ) is required for execution of c and c++ programs , we can infer that these are the most widely used programming languages .figure 1 shows the double - log distribution of edges in the debian network ( scatterplots for the bsd network would have a similar shape ) . 
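A sketch of building the directed dependency graph from (package, dependency) pairs and estimating the in-degree exponent from the slope of the log-log degree histogram, mirroring the regression used for Figure 1. The synthetic scale-free graph stands in for the mined Debian/BSD data, and a maximum-likelihood fit would generally be preferred over the histogram regression.

```python
import numpy as np
import networkx as nx

def in_degree_exponent(edges):
    """Estimate the exponent of P(k_in) ~ k_in^-alpha from a log-log fit."""
    g = nx.DiGraph()
    g.add_edges_from(edges)                 # edge (a, b): package a depends on b
    degrees = np.array([d for _, d in g.in_degree() if d > 0])
    ks, counts = np.unique(degrees, return_counts=True)
    pk = counts / counts.sum()
    slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
    return -slope

if __name__ == "__main__":
    # Stand-in for a mined dependency list: a directed scale-free random graph.
    demo = nx.scale_free_graph(20000, seed=8)
    print("estimated in-degree exponent:", round(in_degree_exponent(demo.edges()), 2))
```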
from the figure we can see the heavy - tailed power - law shape . the absolute value of the slope of the regression line indicates the power - law exponent , . this research has shown that package dependency networks mined from two open - source software repositories share the properties typical of other real - world networks , namely the small - world effect and a power - law ( scale - free ) edge distribution . there are many directions for future research in the study of software networks . currently , there is no model of network formation that takes software dynamics ( reuse , refactoring , addition of new packages ) into account . also , the impact of the network structure on software dynamics should be investigated . future research should identify other networks in software and move towards formulating a theory of networks and their value to software engineering . additional dependency networks can be constructed on windows computers using memory profiling tools , and by determining interactions based on shared .dll ( dynamic - link library ) files and activex controls .
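the exponent estimate mentioned above amounts to a least - squares line fit on double - log axes . the sketch below ( python ; synthetic counts , not the debian or bsd histograms ) shows the computation .

```python
# Sketch of the exponent estimate described above: fit a straight line to the
# degree histogram on double-log axes; |slope| approximates the power-law
# exponent.  The histogram below is synthetic, not the Debian or BSD data.
import math

def loglog_slope(degree_counts):
    """degree_counts maps degree k -> number of vertices with that degree."""
    pts = [(math.log(k), math.log(c)) for k, c in degree_counts.items()
           if k > 0 and c > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

hist = {k: round(10000 * k ** -2.0) for k in range(1, 30)}   # roughly k**-2
print("estimated exponent:", -loglog_slope(hist))            # close to 2.0
```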
this research analyzes complex networks in open - source software at the inter - package level , where package dependencies often span across projects and between development groups . we review complex networks identified at `` lower '' levels of abstraction , and then formulate a description of interacting software components at the package level , a relatively `` high '' level of abstraction . by mining open - source software repositories from two sources , we empirically show that the coupling of modules at this granularity creates a small - world and scale - free network in both instances . complex networks , open - source software , software engineering
network coding is an exciting new technique promising to improve the limits of transferring information in wireless networks .the basic idea is to combine packets that travel along similar paths in order to achieve the multicast capacity of networks .the network coding scheme called _ local network coding _ was one of the first practical implementations able to showcase throughput benefits , see cope in .the idea of local network coding is to encode packets belonging to different flows whenever it is possible for these packets to be decoded at the next hop .the simplicity of the idea gave hopes for its efficient application in a real - world wireless router .in the simple alice - relay - bob scenario , the relay xors outgoing packets while alice and bob use their own packets as keys for decoding . the whole procedure offers a throughput improvement of 4/3 by eliminating one unnecessary transmission .local network coding has been enhanced with the functionality of_ opportunistic listen_. the wireless terminals are exposed to information traversing the channel , and proposed a smart way to make the best of this inherent broadcast property of the wireless channel .particularly , each terminal operates in always - on mode , overhearing constantly the channel and storing all overheard packets .the reception of these packets is explicitly announced to an intermediate node , called the relay , which makes the encoding decisions .finally , the relay can arbitrarily combine packets of different flows as long as the recipients have the necessary keys for decoding .using the idea of opportunistic listen , an infinite wheel topology , where everyone listens to everyone except from the intended receiver , can benefit by an order of 2 in aggregate throughput by diminishing the downlink into a single transmission , see .the wheel is a particular symmetric topology that is expected to appear rarely in real settings .also , the above calculations take into account that all links have the same transmission rates , thus it takes the same amount of time to deliver a native ( non - coded ) packet or an encoded one .in addition , all possible flows are conveniently assumed to exist .this , however , is not expected to be a frequent setting in a real world network .a natural question reads : what is the expected throughput gain in an arbitrary wireless ad hoc network ?the maximum gain does not come at no cost either .deciding which packets to group together in an encoded packet is not a trivial matter as explained in , in er and in clone . in the latter case ,the medium is assumed to be lossy , and the goal is to find the optimal pattern of retransmissions in order to maximize throughput .in the first case , a queue - length based algorithm is proposed for end - to - end encoding of symmetric flows ( i.e. flows that one s sender is the other s destination and the other way around . ) .all these decision - making problems are formulated as follows .denote the set of nodes in need of a packet belonging to flow and the set of nodes having it .then the encoded combination of two packets belonging to flows and can be decoded successfully if and only if and .if this condition is true we draw an edge on the _ coding graph _ with vertices all the possible packets . then finding the optimal encoding schemeis reduced to finding a minimum clique partition of the coding graph , a commonly known np - hard problem , . moreover ,the same complexity appears when the relay node makes scheduling decisions , i.e. 
, selecting which packets to serve and with what combinations .work related to index coding has shown that this problem can be reduced to the boolean satifyability problem ( sat problem ) , .thus a second question arises : what is the loss in throughput gain if instead of searching over all possible encoded packet combinations , we restrict our search in combinations of size at most ? in this paper we are interested in showing that , for a real ad hoc wireless network , opportunities for large encoding combinations rarely appear . to show this, we consider regular topologies like grids as well as random ones . we calculate the maximum encoding number in these scenarios in the mean sense and we consider small as well as large networks . to capture the behaviour of large ( or dense ) networks ,we examine the scaling laws of maximum encoding number .scaling laws are of extreme interest for the network community in general .although they hold asymptotically , they provide valuable insights to the system designers . in this direction , the authors in study the wireless networks scaling capacity in a gupta - kumar way taking into account complex field nc . also examines the use of nc for scaling capacity of wireless networks .they find that nc can not improve the order of throughput , i.e. the law prevails . discusses the issue of scaling nc gain in terms of delays while identifies the energy benefits of nc both for single multicast session as well as for multiple unicast sessions . in ,nc is used instead of power control and the benefits are characterized . in a similar spirit , investigates the use of rate adaptation for wireless networks with intersession nc .utilizing rate adaptation , it is possible to change the connectivity and increase or decrease the number of neighbors per node .they identify domains of throughput benefits for such case .the most relevant work in the field is .the authors analyze the maximum coding number , i.e. , the maximum number of packets that can be encoded together such that the corresponding receivers are able to decode .they show that this number scales with the ratio where is a region outside the communication region and inside the interference region .note however , that this work does not yield any geometric property for the frequency of large combinations since it relies only on specific protocol characteristics . in networks with small , e.g. , whenever a hard decoding rule is applied , there is no bound for the maximum coding number . in this paperwe study the problem from a totally different point of view , showing that there exist inherent geometric properties bounding the maximum coding number below a number relative to the population or density of nodes .moreover , we apply the boolean connectivity model for which , and thus the previous result does not provide any bound at all .we show that the upper bound of the maximum coding number is related to a convexity property that any valid combination has .we start by considering a fixed separation distance network , like a square grid , and show that in such networks , the maximum coding number is and {n}) ] .the main objective of this paper is to find how the _ maximum coding number _ and the _ maximum network coding gain _ scale with the number of nodes .also we will provide bounds for the scaling constants which are useful for determining the behaviour in small networks .apart from the number of neighbors , the gain analysis depends also on the activated flows . 
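to make the coding - graph formulation above concrete , the following sketch ( python , with invented flows ; not code from any of the cited systems ) builds the coding graph from the pairwise decodability condition and then applies a greedy , hence generally suboptimal , clique partition ; each resulting clique corresponds to one xor - ed transmission .

```python
# Sketch of the coding-graph formulation.  Each flow carries a packet, a set
# "has" of nodes that already hold the packet (by ownership or overhearing)
# and a set "needs" of intended receivers.  Two flows i, j can be XORed
# together iff needs(i) is contained in has(j) and needs(j) in has(i).
# Finding the best grouping is a minimum clique partition (NP-hard); here we
# only do a greedy partition for illustration.  The example flows are invented.
from itertools import combinations

flows = {
    "f1": {"has": {"a", "c"}, "needs": {"b"}},
    "f2": {"has": {"b", "c"}, "needs": {"a"}},
    "f3": {"has": {"a", "b"}, "needs": {"d"}},
    "f4": {"has": {"d"},      "needs": {"c"}},
}

def decodable(i, j):
    return (flows[i]["needs"] <= flows[j]["has"] and
            flows[j]["needs"] <= flows[i]["has"])

coding_graph = {i: set() for i in flows}
for i, j in combinations(flows, 2):
    if decodable(i, j):
        coding_graph[i].add(j)
        coding_graph[j].add(i)

def greedy_clique_partition(graph):
    """Group flows into cliques greedily; each clique = one XORed transmission."""
    remaining, cliques = set(graph), []
    while remaining:
        seed = max(remaining, key=lambda v: len(graph[v] & remaining))
        clique = {seed}
        for v in sorted(remaining - {seed}):
            if all(v in graph[u] for u in clique):
                clique.add(v)
        cliques.append(clique)
        remaining -= clique
    return cliques

print(greedy_clique_partition(coding_graph))  # each set is sent as one coded packet
```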
in the simple alice relay bob topology , it is possible that only the flow going from alice to bob is activated , in which case the gain is zero . in this paperwe are interested in determining an upper bound for the efficiency loss when the relay is constrained on combinations of size ( e.g. if the system is constrained to pairwise xoring ) .for this reason , we consider the maximum gain scenario . for each node designated as a relay, we assume that all possible two hop flows traversing this relay are activated .this means that each node designated as a relay , has all possible different packets from which to select an xor combination to send to the neighbors . since not all of those combinations are valid , finding the maximum valid combination that corresponds to the maximum coding number is a non - trivial task and will be the goal of this paper .the resulting bound will help characterize the efficiency loss due to resorting to -wise encoding . in real systems, some flows might not be active in which case the resulting efficiency loss from -wise encoding will be even smaller . to make this more precise , similar to , we define source - destination pairs designating 2hop flows that cross the relay .each flow has a source , a destination , a set of nodes having it ( either by overhearing or ownership ) and a set of nodes needing it .we write because at least one node , the destination or the source , is not part of and correspondingly .two flows are called symmetric when they satisfy the property and . herewe summarize the previous subsection in the form of constraints .we will focus on network coding opportunities appearing in the aforementioned arbitrary network around the relay . _( valid node ) : _ a node is a _ valid node _ if . _( valid flow ) : _ a flow is a _ valid flow _ if and are valid nodes not neighboring with each other , i.e. , ._ ( valid combination ) : _ a subet of flows with , where , is a _ valid combination_ if * each flow is a valid flow , * every pair of flows satisfies and or equivalently , is connected with while is connected with .we define the maximum coding number as the greatest cardinality among all valid sets .if the positions of the network are random , is evidently a random variable .note that we could impose additional constraints .for example , if a flow can be routed more efficiently by a node other than , then this flow should be excluded from the set of valid flows .this would restrict further the set of valid combinations and thus by omitting this constraint we derive an upper bound for .next , we state some fundamental properties of the valid combinations . 
for each flow belonging to a valid combination we have * , * for all , which leads us to the following properties .the destination node of is different from the destination node of any other flow .the source node of is different from the source node of any other flow .next , we provide a result on the topology for a valid combination .let represent the set of locations of all nodes being the source or destination of a flow belonging to a combination .any valid combination of size 3 or larger corresponds to a convex polygon ( the polygon is formed using the set as edges ) .[ lemma : convex ] consider a valid combination defined by flows where .consider also the set of nodes that are sources and/or destinations in and the induced set of locations such that we have a bijective mapping for each element with an element .assume that there is a node which is an interior point of the convex hull is the minimal convex set containing . ] of .thus its location can be written as where and for all .on the other hand , there is a unique , which is the communicating pair ( source or destination ) of in at least one flow , so that all the other nodes ( destinations or sources ) in should be able to reach the node directly .thus , which is a contradiction to ( [ eq : proximity ] ) .consequently the node , as well as all other nodes of the combination , necessarily lie on the perimeter of the convex hull .thus , the nodes of a valid combination are the vertices of a convex polygon . when the set of sources is identical to the set of destinations , the combination consists of symmetric flows only and , .in order to calculate an upper bound of the network coding combination size , it is enough to resort to the case of symmetric flows . for any valid combination thereexists at least one combination of the same or larger size that contains only symmetric flows .we will show that for any flow we can add the symmetric one without invalidating the combination as long as it is not already counted . in a bipartite graph with all the nodes on one side and the destinations of on the other , consider a directional link , between the source of flow and its destination , for each .note now that the nodes having out - degree one , i.e. , the active sources in , may or may not be identical to one of the destination nodes .we can make a partition of the set of active sources by assigning those with the above property to the set and the rest to the complementary set .if , then the lemma is proved since is a valid combination with symmetric flows only . if not , then we can create a new combination which has more flows than the original one using the following process . for each transmitter in , say the transmitter of flow , add one extra flow with and .this flow does not belong to ( because ) and it does not invalidate the combination due to the bidirectional properties of the model . notethat is a valid flow because can not be connected to due to validity of .note also that is connected to for all since this is again required for the decoding of the original flows .thus , for any flow we can add the symmetric one without invalidating the combination .if a valid combination consists of symmetric flows only , its size must be even . 
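the definitions and lemmas above can be checked numerically on small instances . the sketch below ( python ; the communication radius and node placement are arbitrary choices , not the paper 's setup ) applies the boolean disk connectivity model , enumerates valid symmetric flows around a relay at the origin , and brute - forces the largest valid combination .

```python
# Sketch (not the authors' code) of the validity definitions above under the
# boolean disk connectivity model.  Nodes within distance R of the relay at
# the origin are valid; a symmetric flow (u, v) is valid if u and v are valid
# and out of range of each other; two symmetric flows can be combined only if
# every endpoint of one is in range of every endpoint of the other (the
# "clique minus a matching" structure).  Brute force, small instances only.
import random
from itertools import combinations
from math import dist

R = 1.0                                   # assumed communication radius
random.seed(1)
pts = [(random.uniform(-R, R), random.uniform(-R, R)) for _ in range(10)]
nodes = [p for p in pts if dist(p, (0.0, 0.0)) <= R]       # valid nodes only

def connected(i, j):
    return dist(nodes[i], nodes[j]) <= R

flows = [(u, v) for u, v in combinations(range(len(nodes)), 2)
         if not connected(u, v)]          # valid symmetric flows

def valid_combination(cand):
    for f, g in combinations(cand, 2):
        if not all(connected(a, b) for a in f for b in g):
            return False
    return True

best = []
for k in range(len(nodes) // 2, 0, -1):   # flows in a combination use disjoint nodes
    for cand in combinations(flows, k):
        used = [n for f in cand for n in f]
        if len(used) == len(set(used)) and valid_combination(cand):
            best = list(cand)
            break
    if best:
        break

print("maximum coding number (packets XORed together):", 2 * len(best))
print("flows in the combination:", best)
```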
in graph theory terms, a valid combination with symmetric flows can be thought of as a graph created by a clique of nodes , minus a matching with edges , with all symmetric flows defined by this matching activated .this graph is called in _ wheel topology_.in this section we focus on positioning the nodes on a grid .grid topologies often offer an insightful first step approach towards the random positioning behaviour . also , the investigation of grids answers the question whether it is possible to achieve high nc gain by arranging the locations of the nodes .we therefore assume a network with the additional property , for any pair of nodes .this condition pertains to regular grids such as the square , the triangular and the hexagonal grid as well as other grids with non - uniform geometry .we impose nevertheless the property that the node density is the same over all cells and thus the geometry should be somehow homogeneous. the number of nodes inside a disk or radius will be for these networks and the corresponding node density . _( upper bound ) _ the maximum coding number in fixed - separation networks is where is the number of nodes or equivalently . from lemma [ lemma : convex ] we know that the nodes belonging to the maximum combination form a convex polygon .any such polygon fitting inside the disk of radius must have perimeter smaller than . since the nodes on the perimeter should be at least away from each other, we conclude that the maximum coding number is this combined with or respectively yields the result .a particular case of the above bound is the square grid .the number of nodes inside the disk is where is an error decreasing linearly with .thus we obtain an upper bound far we have shown that any network with fixed separation distance and uniform density , will have maximum coding number of , where is the number of nodes connected to the relay .in particular , the constant can be determined for any given grid and for the square grid is .the simulations show that the actual maximum encoding number is approximately half of that calculated above .the reason for that is basically that the valid polygon is always smaller than the disk of radius and often close to the size of a disk of radius .it is interesting to bound the achievable maximum coding number from below as well . to obtain intuition about thisbound we start with a non - homogeneous topology , the _cyclic grid_. we construct concentric cyclic groups of radius , that fall inside the disk of radius .each cyclic group has as many nodes as possible such that the fixed separation distance condition is not violated .such a topology exhibits different behavior depending on the selected origin ( it is not homogeneous ) , nevertheless it helps identify a particular behavior of the achievable maximum coding number .the cyclic group at has nodes .thus the grid of radius will have a very good approximation is , _ ( lower bound ) _ [ th : lower ] in networks with nodes away from each other and cell radius , an achievable maximum coding number is 1 . for cyclic grid with , 2 .{n}) ] for the square grid of , where is the remainder of the division . by focusing on the cyclic group with ,note that each node is from the center and thus the desired connectivity properties are satisfied for all nodes on the cyclic group . 
in this case, we can calculate the number of nodes in the group as which for large is bounded from below by some linear function of .now each node is away from the center and thus we need to select those nodes satisfying the property of valid combination . for thisit is enough that we leave an empty angle such that if is a diameter and this angle , then . by solving this for the maximum number of points satisfying this property we get which for large bounded from below by some linear function of {n} ] .note that the sparseness of the combination is due to and a possible reasoning is that the bound is constructed to cover all the cases , thus also the case that the uncomfortable positioning of nodes matches the second case of the cyclic grid above . in , relative results on convex polygons in constrained sets guarantee the existence of convex polygons of size {n}) ] ( notice the logarithmic scale of the figure ) . nevertheless , the oscillating effect due to the interplay between the radius and the number of nodes is evident .next , we throw nodes uniformly inside the disk of radius . examples of maximum coding number are showcased in figure [ fig : random_examples ] .it is noted from these examples that large combinations tend to appear in a form where the inner side of the ring is a disk of radius and the outer side is a disk of radius . in figure[ fig : random_maxcomb ] , we present the mean maximum coding number , for different number of nodes . in each sample , the maximum coding number is calculated and the mean is obtained by averaging over random samples . the behavior is depicted in this picture .figure [ fig : random_prob ] shows the probability of existence of at least one coding combination of size in a network of uniformly thrown nodes . for example , the maximum component size for is either 4 or 6 in the majority of cases .the simulation results show that in real networks of moderate size the usual combination size is quite small .the multiplicative constant seems to be close to 1 . in this context, the focus should be on developing efficient algorithms that opportunistically exploit local network coding over a wide span of topologies using small xor combinations rather than attempting to solve complex combinatorial problems in order to find the best combinations available . in a network of uniformly thrown nodes .] next , we set up a realistic experiment to be run in simulation environment .a relay node is positioned at the origin , willing to forward any traffic required and apply nc if beneficial .then , we throw pairs of nodes randomly inside the disk defined by the relay and the communication distance ; note that all nodes are valid nodes .each pair constitutes a symmetric flow .each flow may be valid or invalid depending on the distance between the two nodes , see definition in section [ sec : model ] . whenever the flow is invalid, the nodes communicate directly by exchanging two packets over two slots ( one for each direction ) .if the flow is valid , the relay is utilized to form a 2-hop linear network .again , 2 packets are uploaded towards the relay using two slots while the downlink part is left for the end of the frame .finally , the relay has collected a number of packets which may be combined in several ways using nc . 
to identify the minimum number of slots required to transmit those packets to the intended receivers, we solve the problem of minimum clique partition with the constraint of using cliques of size up to ( equivalently , combining up to packets together ) . in the abovewe have assumed that all links have equal transmission rates and that the arrival rates of the flows are all equal ( symmetric fair point of operation ) .the network coding gain is calculated by dividing the number of slots used without nc by the number of slots using nc .figure [ fig : realistic ] depicts results from simulated random experiments .evidently , it is enough to combine up to two packets per time in order to enjoy approximately the maximum nc gain .this example supports the intuition that in practice the network coding gain from large combinations is expected to be negligible . .the relay node ( situated at the origin ) assists the flows that can not communicate directly .we restrict nc combinations to size where . ]by considering the boolean connectivity model and applying the basic properties required for correct decoding , we showed that for the local network coding there are certain geometric constraints bounding the maximum number of packets that can be encoded together .particularly , due to the convexity of any valid combination , the sizes of combinations are at most order of for all studied network topologies .the fact that the number of packets is limited gives rise to approximate algorithms for local coding . instead of attempting to solve the hard problem of calculating all possible coding combinations, we showed that an algorithm considering smaller combinations does not lose too much .e. ahmed , a. eryilmaz , m. medard , and a.e .ozdaglar . on the scaling law of network coding gains in wireless networks . in _ proceedings of ieee military communications conference , milcom _ ,pages 17 , 2007 .l. jilin , j.c.s .lui , and m.c .how many packets can we encode ? - an analysis of practical wireless network coding . in _ieee infocom 2008 .the 27th conference on computer communications _ , pages 371375 , apr 2008 .
this paper focuses on a particular transmission scheme called local network coding , which has been reported to provide significant performance gains in practical wireless networks . the performance of this scheme strongly depends on the network topology and thus on the locations of the wireless nodes . also , it has been shown previously that finding the encoding strategy , which achieves maximum performance , requires complex calculations to be undertaken by the wireless node in real - time . both deterministic and random point pattern are explored and using the boolean connectivity model we provide upper bounds for the maximum coding number , i.e. , the number of packets that can be combined such that the corresponding receivers are able to decode . for the models studied , this upper bound is of order of , where denotes the ( mean ) number of neighbors . moreover , achievable coding numbers are provided for grid - like networks . we also calculate the multiplicative constants that determine the gain in case of a small network . building on the above results , we provide an analytic expression for the upper bound of the efficiency of local network coding . the conveyed message is that it is favorable to reduce computational complexity by relying only on small encoding numbers since the resulting expected throughput loss is negligible . encoding number , network coding , random networks , wireless
the fluid dynamics video is http://ecommons.library.cornell.edu/bitstream/1813/17513/2/electroworming_ld.mpg[video1 ] + + we study the effect of electric field magnitude and frequency on c. elegans .low magnitude , dc electric fields have been used before to guide the motion of the wild - type nematode c. elegans . however , the worms appear to be oblivious to uniform electric fields when the field frequency exceeds several tens of hertz . in contrast , nonuniform , moderate intensity , high frequency ( khz ) ac fields trap worms at the location of the highest field intensity . with certain electrode arrangements ,only the worm s tail is immobilized . when the electric field intensity is moderate or high and the frequency is low ( about 1 - 100 khz ) , the worm is eventually injured , paralyzed , or electrified depending on the applied electric field magnitude .this is the first demonstration of dielectrophoretic trapping of an animal .the effects of the electric field intensity and frequency on the worm are recorded in a `` phase diagram . '' + the effect of the dc field on the worm is illustrated in a video featuring a conduit made in a pdms slab .the worm swims towards the cathode .when the electric field polarity is reversed , so is the worm s direction of motion .+ the effects of nonuniform electric fields on worms are studied with a pair of spiked electrodes patterned on a glass slide and set at various distances apart .the glass slide caps a trench molded in pdms to form a 118 m tall and 300 m wide conduit .the total length of the conduit is 17 mm .the conduit is initially filled with deionized ( di ) water with electric conductivity of s / m .worms from a synchronous culture are transferred from the culture dish , placed in the inlet of the microfluidic conduit , and propelled gently to the location of the electrodes with the aid of a syringe .the behavior of individual worms as a function of electric field intensity and frequency is monitored with an optical microscope . the flow field induced by the wormis monitored by seeding the liquid with fluorescent particles and tracking their positions as functions of time to construct the velocity vector field . when the worm is smaller than the gap between the electrodes , typically the worm becomes anchored to the electrode by its tail while its more energetic head moves vigorously .+ we trapped worms using electric field intensities and frequencies which appeared to leave the worms unharmed . trapped worms , after release ,tend to function as untrapped worms for many hours .the anchored worm appears to produce swimming motion similar to that of an untrapped worm in the absence of an electric field .since the trapped worm can not move , the liquid around it is propelled backwards by the worm s undulatory motion .thus , the anchored worm acts as a pump .the worm s motion induces vortices in the flow and is likely to provide effective stirring ; thus the anchored worm can also act as a stirrer .+ like untrapped worms , the anchored worms exhibit photophobicity - avoiding blue light .thus , blue light can be used to exert control on the worm s motion .furthermore , the blue - light avoidance indicates that ( at least ) the light - sensitive neurons are not adversely impacted by the electric fields . 
+ two worms anchored in close proximity synchronized their motion . although the synchronization mechanism is not fully understood , it is most likely that hydrodynamic stimuli played a role in transmitting information from one worm to the other . + dielectrophoresis can be used , among other things , to sort worms by size , to temporarily anchor worms to enable their characterization and study , and to use worms to induce fluid motion ( worm - pump ) and mixing ( worm - stirrer ) . 1 . p. rezai , a. siddiqui , p. r. selvaganapathy and b. p. gupta , appl . phys . lett . , 2010 , 96 , 153702 . 2 . c. v. gabel , h. gabel , d. pavlichin , a. kao , d. a. clark and a. d. t. samuel , j. neurosci . , 2007 , 27 , 7586 - 7596 .
the video showcases how c. elegans worms respond to dc and ac electrical stimulations . gabel et al ( 2007 ) demonstrated that in the presence of dc and low frequency ac fields , worms of stage l2 and larger propel themselves towards the cathode . rezai et al ( 2010 ) have demonstrated that this phenomenon , dubbed electrotaxis , can be used to control the motion of worms . in the video , we reproduce rezai s experimental results . furthermore , we show , for the first time , that worms can be trapped with high frequency , nonuniform electric fields . we studied the effect of the electric field on the nematode as a function of field intensity and frequency and identified a range of electric field intensities and frequencies that trap worms without apparent adverse effect on their viability . worms tethered by dielectrophoresis ( dep ) avoid blue light , indicating that at least some of the nervous system functions remain unimpaired in the presence of the electric field . dep is useful to dynamically confine nematodes for observations , sort them according to size , and separate dead worms from live ones .
one of the most important problem to attain high performance in distributed system with heterogeneous clients is to balance the total loads of all the processing jobs among the whole system , such that the processing burdens of all clients are almost same .this is the classical problem of load balancing in distributed system .a distributed system can be considered as a collection of computing devices connected with communication links by which resources are shared among active users . in , the authors describe the load balancing problem as follows _ given the initial job arrival rates at each computer in the system find an allocation of jobs among the computers so that the response time of the entire system over all jobs is minimized"_. this definition of load balancing problem can be viewed as the _ static load balancing _ , where the main assumption is that all information governing load - balancing decisions such as characteristics of jobs , the computing nodes and the communication links are knows at advance . hereload - balancing decisions are made deterministically or probabilistically at compile time and remains constant during runtime .a variant of classic load balancing problem is the _ dynamic load balancing _ , where load balancing decisions are made at run time , and according to the current load of different computing devices , loads are transferred from one computing device to another dynamically at run time . in ,the authors have shown that static load balancing algorithms are more stable in compare to dynamic load balancing algorithms and it is easy to predict the behavior of static algorithms . in this paper we model such static load balancing algorithm for distributed system using congestion game model .the problem of load balancing in distributed system can be handled in three ways : * _ global approach _ : a single decision maker optimizes the overall response time using some optimization techniques .this approach is also called as _ social optimum_.this is the most frequent literature in study and has been studied extensively using different techniques such as nonlinear optimization , polynomial optimization etc . * _ cooperative approach _ : this is based on classical cooperative game theory where the decision makers cooperate between themselves by sharing informations through message passing , and then take the decision using the utility and pay - off function . in , the authors have modeled the problem of load balancing in distributed system using cooperative game theoretic approach . * _ non - cooperative approach _ : here the decision is made using the pay - off and utility function , but the agents does not share any information between themselves . congestion game is a variant of non - cooperative games where each agent s strategy consists of a set of resources , and the cost of the strategy depends only on players using each resource .congestion games have attracted a good deal of attention , because they can model a large class of routing and resource allocation scenarios .another reason for using congestion games in resource allocation is that they possess _ pure nash equilibria _ . in general games ,nash equilibrium may involved mixed ( i.e. 
, randomized ) strategies for players , but congestion game always have a nash equilibrium in which each player sticks to a single strategy .but the problem with nash equilibrium in congestion game is that they are known to be __ pls - complete__ .so it is difficult to find a nash equilibrium in congestion games and so the convergence time is very high . in ,the authors describe a variant of congestion game , called -congestion game which is an approximate version of pure congestion games , and have been proved to possess a better convergence rate .the authors have proved that -nash dynamics in -congestion game converge to an -nash equilibrium within a finite number of steps .we have used the formulation of -nash equilibrium , as described in , to model the problem of load balancing in distributed system as a resource sharing problem .we have studied the formulation of the problem and the existence of -nash equilibrium for the problem .the problem of static load balancing for single class job distributed system has been studied extensively for global approach . in global approachthe focus is to minimize the overall response time . in ,the authors have formulated the load balancing problem as a nonlinear optimization problem and have given an algorithm to solve that nonlinear optimization problem . in and , kim andkameda derived a more efficient algorithm to introduce the problem .an comparison between several static load balancing algorithms with respect to job dispatching strategy has been studied by tang and chanson , in .the load balancing problem using game theoretic approaches also has been studied both for cooperative and non - cooperative approach . in ,altman , kameda and hosokawa modeled distributed system as collection of nodes connected by either simplex or duplex connected links , and then described the dynamic load balancing problem for this system using game theoretic approach .they have established the proof for a unique nash equilibrium for routing games with the above mentioned distributed system model , under quite general assumption on the costs .for this , they have considered two different architectures , in one the nodes are connected using duplex link , and in another they are connected via two one - way communication links . in ,the authors have modeled the system as an m / m/1 queue , and then proposed an algorithm , called _ cooperative static scheme"_(coop ) , using nash bargaining solution , an interesting variant of cooperative game theoretic concepts and the solution of the problem using first order kuhn - tuker conditions .they have also compared their nash bargaining solution algorithm with the existent static and dynamic load balancing algorithms.there are very few literatures in studying the non - cooperative models for load balancing problems in distributed systems . studied non - cooperative games and derived load balancing algorithms for both single class and multi class job distributed systems . for single class job, they have proposed an algorithm for computing wardrop equilibrium , a variant of nash equilibrium where number of agents participating in the game is infinite . in , the author has modeled the load balancing problem as a stackelberg game , where one player acts as a leader and rest as followers .he has showed that optimal stackelberg strategy computing is np - hard and hence he formulated a near optimal solution . 
in , the authors have proposed an uncoordinated load balancing algorithm for peer - to - peer systems . they have analyzed the nash equilibrium under general latency functions for a p2p system . in this paper we move away from these general game theoretic approaches and model the problem as a congestion game , the most suitable strategy for modeling resource allocation problems with non - cooperative game models . the work of shows that finding a pure nash equilibrium is pls - complete , and hence the convergence time for congestion games , which always possess a pure nash equilibrium , is very high . again , for some initial strategies the shortest path to an equilibrium in the nash dynamics is exponentially long in the number of players . in , the authors have modeled a selfish load balancing problem as an atomic congestion game , and have shown that the worst case ratio between a nash solution and a social optimum , often referred to as the price of anarchy , is at most 2.5 . a recent advance in the field of congestion game models is the -congestion game . in , the authors have studied these issues for congestion games , and proposed an approximation of the pure congestion game , which they named the -congestion game , where -nash dynamics converges to an -nash equilibrium within a finite number of steps under some bounded conditions . this work on approximating the pure congestion game is the primary motivation behind our work to check how well the load balancing problem fits the approximated congestion game , or -congestion game . we use this version of the congestion game to model our load balancing scheme . finally we show by simulation that , using a properly chosen value of , we can greatly reduce the number of iterations of the congestion game , and that at this point the load is well distributed among the processing nodes to minimize the processing time . in this paper we address the problem of load balancing in the general arena of _ congestion games _ , or more specifically an approximation of the classical congestion game , called the _ -congestion game _ . a congestion game can be formally described as a finite set of players , each of which is assigned a finite set of _ strategies _ and a cost function that he wishes to minimize . a _ state _ is any combination of strategies for the players . a state s is a _ pure nash equilibrium _ if for all players , for all . thus we can say that at a pure nash equilibrium , no player can improve his cost by unilaterally changing his strategy . it is well known that every finite game has a mixed nash equilibrium , but not necessarily a pure nash equilibrium , whereas a congestion game always has a pure nash equilibrium . in the case of congestion games , players ' costs are based on the shared usage of a common set of resources ( also called edges , in terms of network congestion games ) r = . a player s strategy set is an arbitrary collection of subsets of r ; his strategy will therefore be a subset of r. each resource has an associated _ nondecreasing _ delay function . if t players are using the resource r , they will each incur a cost of . as a result , in a state s = , the cost of player is , where is the number of players using resource r under s ; i.e. .
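a toy instance of the definitions above is sketched next ( python ; the resources , delay functions and strategy sets are invented ) . players repeatedly take improvement moves , here restricted to moves that improve the cost by more than a chosen factor , until no such move remains ; because the delay functions are nondecreasing , this dynamics is guaranteed to stop .

```python
# Toy congestion game matching the definitions above (a sketch, not the
# paper's model): resources with nondecreasing delay functions d_r(t), players
# whose strategies are subsets of resources, cost = sum of delays on the
# chosen subset given how many players share each resource.
from collections import Counter

delays = {            # delay d_r(t) when t players use resource r
    "r1": lambda t: 2 * t,
    "r2": lambda t: t * t,
    "r3": lambda t: 3 + t,
}
strategies = {        # each player may pick any of these subsets of resources
    "p1": [{"r1"}, {"r2"}, {"r1", "r3"}],
    "p2": [{"r1"}, {"r2", "r3"}],
    "p3": [{"r2"}, {"r3"}],
}

def cost(player, choice, state):
    load = Counter()
    for q, s in state.items():
        for r in (s if q != player else choice):
            load[r] += 1
    return sum(delays[r](load[r]) for r in choice)

def eps_dynamics(state, eps=0.1, max_rounds=100):
    for _ in range(max_rounds):
        moved = False
        for p in state:
            current = cost(p, state[p], state)
            best = min(strategies[p], key=lambda s: cost(p, s, state))
            if cost(p, best, state) * (1 + eps) < current:   # eps-improvement move
                state[p] = best
                moved = True
        if not moved:
            return state     # no eps-move left: an eps-Nash equilibrium
    return state

init = {"p1": {"r1"}, "p2": {"r1"}, "p3": {"r2"}}
print(eps_dynamics(init))
```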
in , the author has shown that in every congestion game , every sequence of improvement steps is finite .this proposition can be shown by a potential function argument , called the _rosenthal s potential function _ , defined as this function has an interesting property that if player shifts strategy from to , the change in exactly mirrors the change in the player s cost : i.e. , .the consequence from this observation is that , if we follow an iterative process here , such that , at each step one player changes its strategy to lower his cost ( a nash dynamics ) , then the potential function will decrease until it reaches a local minimum , which must be a pure nash equilibrium .but this does not provide a bound on the number of such player moves required to reach a pure nash equilibrium in a congestion game ( so the congestion game is pls - complete ) , and this problem leads to the concept of a approximate nash equilibrium , which is -nash equilibrium .in , the authors has studied and formulated the concept of -nash dynamics and -nash equilibrium .a state s is an -nash equilibrium if no player can improve his cost by more than a factor of by unilaterally changing his strategy .the -nash dynamics is a modification of pure nash dynamics , where players are permitted only -moves , i.e. moves that improve the cost of the player by a factor of more than . to make -nash dynamics concrete, the authors assume that among multiple players with -moves available , at each step a move is made by the player with the largest incentive to move ; i.e. the player who can make the largest relative improvement in cost .for this reason they have also introduced -bounded jump , where the delay function satisfies the condition for all and . in particular , a resource ( or edge ) with satisfies the -bounded jump condition .they show that in such a condition , -nash dynamics converge to an -nash equilibrium within a finite number of steps , in fact in steps , where c is an upper bound on the cost of any player .this is the main motivation behind our work to use this modified approximate congestion gaming model to solve the load balancing problem in distributed system.we have the following theorem about -nash equilibrium .[ theorem : epsilonnash ] for , a state s = is an -nash equilibrium if for all player , for all .in a distributed system , some nodes act as the processor nodes , who process the jobs , and some nodes act as the load generator , who generates processes or jobs in a certain rate .we call the processor nodes as the servers , and the load generators as the client .it should be noted that in a typical distributed system , same node can act both as server and clients .but for the simplicity of modeling the system , we consider server and clients as separate nodes without the loss of generality , and our target is to assign the clients to the servers in a balanced way , such that the jobs generated at the clients are distributed almost equally among the servers and the overall system response time is minimized .hence we need to generate a bipartite graph with n servers and m clients as shown in the figure [ fig : systemmodel ] .we use following parameters to model our system : + = the maximum processing rate of server i. + = the actual or effective processing rate of server i. + = load generation rate at client j. + = fraction of the load assigned to server i by client j. + = which is total job arrival rate in complete distributed system . + here , j=1,2,... ,m and i=1,2, ... 
,n .it is obvious that for each client j ; + = 1 ; where ; and i=1,2, ... ,m + our objective is to find the fraction for each client j ( j = 1, ... ,m ) such that the expected execution time of the jobs in the system is minimized .we have the following conditions ; + at each server i ; + ; which implies that actual processing rate at any instance at server i must be less than the processing rate capacity or the advertised processing rate of server i. + modeling each computer as an m / m/1 queuing system , + , where = average arrival rate of jobs at server i and denotes the expected execution time of jobs processed at server i. for m / m/1 queueing system , the above condition must be satisfied also which guarantees stability of the overall system . + in our case , + hence , + we can calculate the overall response time of client j as follows : + + our objective is to minimize the overall response time of the system .let for client j , the vector = be the load balancing strategy of client j. then the set s = corresponds to the strategy space or strategy profile of our load balancing game .the cost function of client i is , and the corresponding delay function at resource i is where t is the number of resources , that is number of servers in load balancing game .it is clear that the delay function depends on number of clients associated with the particular resource and the function is nondecreasing . at this point of instance, we have the following assumptions : a. _ each server i can tolerate up to a maximum job arrival rate . _ though in theory each server is capable to process jobs up to its processing rate , but in practice the performance of the system drops dramatically after a specific amount of load , because each server has to process some internal loads also .so this assumption is more likely to the real world scenario and required to proof the convergence of our -congestion game .b. _ there is a maximum job generation rate at the system . this assumption is also near to reality . the convergence property for our load balancing -congestion game can be shown using following theorem [ theorem : boundedjump ] . _ * ( -bounded jump condition ) * _ [ theorem : boundedjump ] the delay function for resource j , as given above is nondecreasing and satisfies the -bounded jump condition for -congestion game . for resource , + , and + + hence, +this can be simplified as , + + so , it is clear that , so the delay function is nondecreasing . according to our assumptions , for each server j, we can get as ; + + clearly , for all server ( resources ) j. hence the -bounded jump condition is satisfied .so in this game , the -nash dynamics will converge to -nash equilibrium within a finite number of steps .we define the potential function for our game which is similar to rosenthal s potential function , but with a multiplier in addition with the delay function .the multiplier comes because the overall response time of all the servers depends on a fraction of each client s load .the potential function with strategy set s of the system is as follows : + where m is the number of clients ( players ) and n is the number of servers ( resources ) .it is clear that the overall system response time depends only on those fractions for which the multiplication factor is nonzero .the cost function incurred at each client i with strategy set s is as given before ; + _ * ( existence of exact potential function ) * _ the potential function as defined above obeys the property of rosenthal s potential function , i.e. 
if player shifts strategy from to , the changes in exactly mirrors the change in the player s cost . as for each client j , so the total weighted sum of the multipliers at both the cost function and potential function is always 1 , which is a constant value , and thus it does not incurs any change in the relative deference between potential functions and costs with two strategies and . at each independent move ,the change in the player s cost is equal to the change in total delay at all resources , as the delay at each resources reflects directly at the player s cost .hence from the structure of both the functions it is easy to conclude that , , which follows the theorem .it can be also checked easily that the cost function satisfies theorem [ theorem : epsilonnash ] , that is the cost can be increased always above a threshold given by the value of .hence it is clear that our load balancing game model converges to -nash equilibrium within a finite steps .our objective is to minimize the potential function starting from a initial strategy , where each node independently tries to minimize its own cost function and that directly reflects the change in overall system response time which can be measured using the potential function as defined above .the optimization problem at each client i will be as follows ; + where is the current strategy profile for client j and s is the strategy set of the system .+ subjected to ; + + , i=1,2, ...,n + + , i=1,2, ...,m + from the above conditions it is clear that the servers with higher processing power should have higher fractions of jobs assign to it . if the computers are arranged in the decreasing order of their processing rates , then we get a partial order for client j as . now in real system , there will be some servers with low processing rates , where no load will be assigned .so after a index k , for i = k , k+1, ... ,n. now when a client j runs its own optimization problem , the algorithm is given in ( algorithm [ algo : optimal]).each client node runs this algorithm independently to find out its own local optimum solution .* algorithm optimal : * + arrange the computers in decreasing order of their processing rates , + let + for the overall system optimization , we assume that the clients enter the system one after another .this assumption follows directly from the properties of potential function .the initial strategy set of the system is , and according to the property of -nash equilibrium , the system converges within a finite number of steps with any initial strategy .one initial strategy can be that whole job of a node is assigned to its own server . with this initial strategythe system moves to a new state as ( algorithm [ algo : cong ] ) .the algorithm works as a greedy strategy where each node chooses the current best solution and according to the greedy property and reachability conditions of congestion games , the system guarantees to be terminated at -nash equilibrium within a finite number of steps . herethe system designer needs to choose according to the system design using practical simulation .* algorithm load_balance : * + exit when there is no change in cost , that is nash equilibrium has been reached .we have simulated the system using a program written in java programming language .each server has some maximum processing rates , and we use the random number generator to generate load at each clients . the unit of the processing rates of the server and the job generation rate at client is taken as number of jobs per second . 
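the formulas of algorithm optimal above were lost in extraction ; as a hedged reconstruction of the per - client subproblem , the sketch below ( python ) assumes the client splits its load among m / m/1 servers so as to minimize the summed expected number of jobs in the system , which yields the classical square - root allocation and the same `` sort by decreasing processing rate , drop overloaded slow servers '' structure . the exact objective used by the authors may differ , so this should be read as an illustration rather than their algorithm .

```python
# Sketch of a per-client allocation in the spirit of "algorithm optimal".
# Assumption: minimize sum_i lam_i / (mu_i - lam_i) subject to sum_i lam_i = Lam,
# 0 <= lam_i < mu_i.  The Lagrangian solution assigns, over the active set A,
#   lam_i = mu_i - sqrt(mu_i) * (sum_A mu_j - Lam) / sum_A sqrt(mu_j),
# dropping the slowest servers whenever they would receive a negative load.
from math import sqrt

def optimal_split(mu, Lam):
    assert Lam < sum(mu), "total load must stay below total capacity (stability)"
    servers = sorted(range(len(mu)), key=lambda i: mu[i], reverse=True)
    active = list(servers)
    while True:
        c = (sum(mu[i] for i in active) - Lam) / sum(sqrt(mu[i]) for i in active)
        lam = {i: mu[i] - sqrt(mu[i]) * c for i in active}
        drop = [i for i in active if lam[i] <= 0]   # infeasible (negative) shares
        if not drop:
            fractions = {i: lam[i] / Lam for i in active}
            return {i: fractions.get(i, 0.0) for i in servers}
        active = [i for i in active if i not in drop]

mu = [10.0, 6.0, 3.0, 1.0]         # server processing rates (jobs/second)
print(optimal_split(mu, Lam=8.0))  # fraction of the client's load per server
```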
for our simulation we have taken different situations with different numbers of servers , with processing rates varying from small to large , and similarly for clients . we have maintained the dynamic nature of the system as much as possible while setting up the simulation environment . to show that the load is balanced among the processing nodes , we have defined the metric * loadratio * . the metric is defined as follows , + + ; + where , + = load ratio at server i , + = total load assigned at server i , + = maximum processing rate of server i , + = current effective processing rate of server i. + in figure [ fig : converge ] we have shown two test cases with different sets of processing rates at servers and job generation rates at clients . we can see from the figure that after a certain number of rounds , the system converges to the equilibrium state , and at the equilibrium state the cost is stable and minimized . note that a congestion game always has a pure strategy nash equilibrium , and hence the nash equilibrium here is unique . in figure [ fig : cost ] and figure [ fig : rounds ] we have shown four different test cases where each test case has the same number of clients and servers , but with different sets of maximum processing rates and job generation rates . from figure [ fig : cost ] , we can see that at , all the systems converge to a state that gives the minimum cost , and decreasing the value of does not improve the system cost . with , we can see from figure [ fig : rounds ] that the number of rounds required to converge to the nash equilibrium state is smaller than the number of rounds required for the pure strategy congestion game , which is essentially with . thus we can see that with a properly chosen value , the -congestion game converges to nash equilibrium rapidly , but with the same cost as the pure congestion game . in figure [ fig : loadratio ] and figure [ fig : load ] we have shown the distribution of load among the server nodes . we have considered three cases : when the load is high , that is all clients generate jobs with high rates ; when the load is low , that is the job generation rate is low ; and when the rate is average , essentially following the median of a gaussian distribution . we can see from the figures that when load is high , all the server nodes have been assigned some load , but when load is low , only the servers with high processing rates get the load to make the system performance better . the servers with low processing rates are only considered when load is very high . hence we can have an effective distribution of loads among the processing nodes . in this paper we have proposed a completely new framework for load balancing using -congestion game . we have shown the existence of pure nash equilibrium in such a game and proposed a greedy algorithm to solve the problem .
instead of modeling the problem using the pure congestion game , which is pls - complete , we have used the approximation of the congestion game that converges to the equilibrium state within a finite number of steps . it should be noted that in , the authors have also proved that even for symmetric congestion games with the bounded jump condition on all edges , finding a nash equilibrium can still be pls - complete ; thus in this sense , bounded jumps are not a major restriction on the power of congestion games . a direction for future study is to check whether the processing overhead of the -congestion game is significantly larger than that of the symmetric congestion game or not . finally we have simulated the system to show that with a properly chosen value of , the system converges to the nash equilibrium more rapidly than the pure congestion game . we have also shown by simulation that the system distributes the load properly among the processing nodes , and hence load balancing is achieved in a distributed manner . daniel grosu , anthony t. chronopoulos and ming - ying leung . `` load balancing in distributed systems : an approach using cooperative games . '' in the proceedings of the international parallel and distributed processing symposium ( ipdps 02 ) , 2002 . subhas suri , csaba d. toth , yunhong zhou . `` selfish load balancing and atomic congestion games . '' in the proceedings of the sixteenth annual acm symposium on parallelism in algorithms and architectures , 2004 . c. kim and h. kameda . `` optimal static load balancing of multi - class jobs in a distributed computer system . '' in the proceedings of the 10th international conference on distributed computing systems , pages 562 - 569 , may 1990 .
the use of game theoretic models has been quite successful in describing various cooperative and non - cooperative optimization problems in networks and other domains of computer systems . in this paper , we study an application of game theoretic models in the domain of distributed system , where nodes play a game to balance the total processing loads among themselves . we have used congestion gaming model , a model of game theory where many agents compete for allocating resources , and studied the existence of nash equilibrium for such types of games . as the classical congestion game is known to be pls - complete , we use an approximation , called the -congestion game , which converges to -nash equilibrium within finite number of steps under selected conditions . our focus is to define the load balancing problem using the model of -congestion games , and finally provide a greedy algorithm for load balancing in distributed systems . we have simulated our proposed system to show the effect of -congestion game , and the distribution of load at equilibrium state .
researchers in many fields use markov chain monte carlo ( mcmc ) methods to translate data into inferences on high - dimensional parameter spaces . studies of the cosmic microwave background ( cmb ) anisotropy spectra are a perfect example of this practice .the concordance cosmology requires six or seven ( depending on whether or not one assumes spatial flatness ) parameters to theoretically describe the spectrum of cmb anisotropies .given the computational expense of converting just one point in this parameter space into an anisotropy spectrum ( 1.3 seconds using the boltzmann code camb on a 2.5 ghz dual - core machine ) and then comparing that spectrum to the data ( 2 seconds using the official wmap likelihood code ) , exhaustively exploring the parameter space in search of models that fit the data well is unfeasible .mcmc is an alternative to this costly process that randomly walks through parameter space such that exploration theoretically locates and confines itself to regions where the fit to the data is good , and thus integrates the distribution of parameter values only over that space where it has high support .integrations that might have taken months when done exhaustively can be accomplished in days ; those that might have taken days can be accomplished in hours .it is fair to say that mcmc has become an `` industry standard '' within some communities . in spite of this success ,serious questions remain .mcmc is a sampling method and its theoretical guarantees are mostly with respect to convergence rather than sampling efficiency .parameter fitting problems are search problems , not sampling problems .there is no fundamental reason that a sampling algorithm should be good for searching and certainly no guarantee that the performance of mcmc as a search algorithm with a finite number of samples will be particularly efficient .finally , mcmc methods generally force investigators to make bayesian assumptions which may or may not be desirable .we present an algorithm originally proposed and implemented by as a more efficient and flexible alternative to mcmc .we will refer to the algorithm as the active parameter search ( aps ) algorithm .we provide c++ code to implemement aps ( publically available at ` https://github.com/uwssg/aps ` ) and test it on the 7-year release of the wmap cmb data .section [ sec : mcmc ] presents an overview of both the mcmc and aps algorithms and discusses the shortcomings of the former and how the latter attempts to address them . 
section [ sec : user ] details the user - specified parameters necessary to aps and some of the attendant considerations .section [ sec : cartoons ] compares the performance of mcmc and aps on a toy model of a multi - modal likelihood function .section [ sec : wmap7 ] presents the results of the wmap 7 test .we find that aps achieves parameter constraints at comparable ( if not faster ) times to mcmc while simultaneously exploring the parameter space more broadly .we begin this section with a general discussion of the process of deriving parameter constraints from data .we then sketch an mcmc method called the metropolis - hastings algorithm .we will outline the aps algorithm with an aside discussing gaussian processes , a method of multi - dimensional function fitting which occupies some prominence in the algorithm .we end this section by directly comparing aps with mcmc on a toy model in a two - dimensional parameter space .the basic problem both mcmc and aps attempt to address is the following : suppose there is a theory described by some number of parameters .this theory makes a prediction about some function which is tested by an experiment resulting in data points . to quantify what these data say about the theory , an investigator wants to ask what combinations of values of result in predictions that are consistent with the data to some threshold probability .usually , this is quantified by assuming that the data points are independent and normally distributed so that the statistic ( where is the uncertainty associated with the datapoint ) is a random variable distributed according to the probability distribution with degrees of freedom . in that case , the confidence limit constraint on corresponds to the set of all values which result in values of such that more generally , one has a likelihood function ( above represented by ) quantifying the goodness - of - fit between data and theory .one uses mcmc or aps to find the values of corresponding to , where is some threshold set by statistics and the desired .bayesian inference approaches this problem indirectly by assuming that are random variables distributed according to the distribution function .constraining these parameters to is therefore a problem of integrating over until the integral contains of the total .mcmc assumes this approach , as will be shown below .a common criticism of this mode of thought is that , even if the theory is a poor description of the data , bayesian inference will still yield a constraint consisting of the least - poor region of parameter space .frequentist inference takes a more objective approach , defining the confidence limit to be all points in parameter space that satisfy .aps assumes this approach , as will also be shown below , and exploits it to derive parameter constraints with siginficantly fewer calculations of ( a presumably costly procedure ) than mcmc .the direct product of mcmc is a chain : a list of points in parameter space that mcmc sampled and the number of times it sampled them . *( 1 m ) begin by sampling a point from parameter space at random .record this point in the chain and evaluate the likelihood function for that combination of parameters .the value of this likelihood call will be . 
+ * ( 2 m ) select another point in parameter space .this point should be selected randomly , but according to some distribution that depends only on the step length and is tuned so that the algorithm explores away from the initial point but does not effectively jump to any other place in parameter space uniformly at random .evaluate the likelihood function at , giving you .+ * ( 3 m ) if ( i.e. , if the new point in parameter space is a better fit to the data than the old point ) , the step is accepted and the new point is recorded in the chain .set and . + * ( 4 m ) if , draw a random number between 0 and 1 .if , accept the step anyway .+ * ( 5 m ) if the step was ultimately rejected , increment the number of samples associated with by one .+ * ( 6 m ) return to step ( 2 m ) and repeat .there are heuristic statistics that can be performed on the chain to give an indication of whether it has adequately sampled parameter space , though they have issues we point out later .this is the most basic form an mcmc algorithm can take .in addition to chapter 1 of , one may wish to consult appendix a of for guidance writing a functional mcmc .the tests in step ( 3 m ) and ( 4 m ) mean that any mcmc run will gravitate towards the high - likelihood regions of parameter space and ignore regions of extremely low likelihood .one converts the chain into parameter inferences by gridding parameter space and treating the chain as a histogram of how many times the chain visited each point on the grid ( or its proximity ) .this histogram will closely match the bayesian posterior distribution over the parameters . confidence limits are derived by integrating this histogram from its peak out to whatever bounds contain of the sampled points .there are several shortcomings with this method that the aps addresses .the first shortcoming is efficiency . because mcmc seeks to take samples that match the distribution rather than learning specific information about the distribution , such as where the confidence boundaries are , it wastes samples in areas where it already has information .for example , once a high - likelihood parameter setting has been evaluated there is no need to evaluate there again .it is already known that this is a high - likelihood region of the space .however , mcmc specifically indicates that high - likelihood by putting more samples there . these are wasted evaluations .a second mcmc shortcoming , related to the first , involves the possibility that there are several disjoint regions of high likelihood in the parameter space under consideration. this can be problematic for mcmc because steps ( 2m)-(4 m ) ensure that , once the chain finds one of these high likelihood regions , it is unlikely to step out of it and find another ( unless the distribution in step ( 2 m ) is such that it allows very large steps , in which case the chain will take significantly longer to converge ) .an unexplored region of the space may remain unexplored for a long time ( though in theory not infinitely long ) .mcmc never determines whether it has explored the region and whether information could be learned by doing so . when initially testing aps , bryan _et al_. 
found a separate region of high - likelihood parameter space in the 1-year wmap data release which had been totally ignored by mcmc analyses .this second peak in disappeared as the wmap signal - to - noise improved in subsequent data releases ( as we will see in section [ sec : wmap7 ] ) , but it does serve as an illustration of this particular peril of mcmc . dunkley _ et al_. ( 2005 ) and attempt to address this problem and the general inefficiency of mcmc as a search algorithm by optimizing the proposal density in step ( 2 m ) .they find no clear solution that guards against multiple high likelihood regions .the final shortcoming of mcmc involves its unequivocally bayesian nature . for a number of data points bayesian and frequentist confidence limitsare approximately equivalent .they can , however , yield conflicting results when constraining a subset of parameter space .consider a six - dimensional parameter space like the space of cosmological parameters mentioned in section [ sec : intro ] and revisited in section [ sec : wmap7 ] .if one is really only interested in constraining , one still has to deal with the question of how to treat , since they presumably affect in a way that is non - trivially correlated with . in bayesian inference ,one deals with this question by integrating the probability distribution over the full range of the uninteresting parameters ( marginalizing over them ) , i.e. the confidence limit is the contour in space such that where represent the bounds of the confidence limit in the two - dimensional subspace of interest .this integration yields the two - dimensional ellipses of figures 4 and 6 of ( and most observational cosmology papers since ) . from a frequentist perspective, the integration in equation ( [ bayesint ] ) is problematic .it risks drawing a bound that is either too restrictive or too inclusive .an over - restrictive bound could arise because the integral ( [ bayesint ] ) weights points in space according to the density of corresponding points in space that are highly likely .if there are combinations of that are highly likely only for an extremely limited set of , those points will be excluded from the confidence limit even though they are strictly allowed according to the criterion .bayesian inference does not give much thought to these excluded points because , if we consider the parameters to be random ( which bayesians do ) , the excluded points correspond to highly improbable values of precisely because they only agree with the data for such a limited set .frequentist bounds , consisting of any and all points for which , do not care how many different points in the larger parameter space are acceptable for a given . if just one point satisfies , the corresponding point in subspace is within the bound .an over - inclusive bound could arise because the integral ( [ bayesint ] ) searches only for that bound which contains of the total probability .if the model chosen to describe the data is universally poor , mcmc will still return a consisting of the least - poor region in parameter space without any obvious warning that the model is probably wrong .a frequentist bound would , in that case , return no confidence bound , highlighting the model s deficiency .traditionally , bayesian assumptions are made because they do not require prior knowledge of the underlying likelihood function . 
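for concreteness , the metropolis - hastings loop outlined in steps ( 1 m)-(6 m ) above can be written in a few lines . the sketch below is illustrative only and is not the cosmomc implementation ; the two - dimensional gaussian `log_likelihood` , the step length , and the parameter bounds are placeholders invented for this example .

```python
import numpy as np

def log_likelihood(theta):
    # stand-in for an expensive likelihood call (e.g. camb plus the wmap code);
    # a simple two-dimensional gaussian is used purely for illustration
    return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

def metropolis_hastings(n_steps, step_length, bounds, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    # step (1m): draw a random starting point and evaluate its likelihood
    theta = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    logl = log_likelihood(theta)
    chain = [(theta.copy(), 1)]   # (point, number of samples accumulated there)
    for _ in range(n_steps):
        # step (2m): propose a nearby point with a symmetric gaussian proposal
        proposal = theta + step_length * rng.standard_normal(dim)
        logl_new = log_likelihood(proposal)
        # steps (3m)-(4m): accept if better, otherwise accept with probability
        # equal to the likelihood ratio
        if logl_new > logl or rng.uniform() < np.exp(logl_new - logl):
            theta, logl = proposal, logl_new
            chain.append((theta.copy(), 1))
        else:
            # step (5m): a rejected step increments the count of the current point
            point, count = chain[-1]
            chain[-1] = (point, count + 1)
    return chain

chain = metropolis_hastings(n_steps=5000, step_length=0.5,
                            bounds=[(-10.0, 10.0), (-10.0, 10.0)])
```

converting the resulting chain into bayesian bounds then amounts to gridding parameter space and histogramming the visited points , as described above .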
in order to draw a frequentist confidence limit, one must be able to calculate before sampling any points in parameter space .bayesian confidence limits only require knowledge of the relative likelihood between sampled points and thus can be revised with each new sample .this represents one advantage mcmc retains over aps . to address the shortcomings of mcmc , bryan _et al_. bryan _et al_. ( 2007b ) propose the following alternative algorithm for exploring parameter space .* ( 1a ) generate some number of starting points uniformly distributed across parameter space .evaluate at each of these points . store them in .+ * ( 2a ) use the points already evaluated to fit a surrogate function that estimates the likelihood and uncertainty in it for other , as yet unevaluated , candidate points .+ * ( 3a ) generate a large number of candidate points also uniformly distributed over parameter space .use the surrogate function to guess the value of at each of the candidate points .this guess will be . the uncertainty in your guess will be .section [ sec : gp ] will describe one means of finding and .the assumption is that estimating and is orders of magnitude less computationally expensive than finding the true . + * ( 4a ) select the candidate point with the maximum value of the statistic where is a parameter that has a similar effect as the length parameter in mcmc s proposal distribution .small values will make it take more samples around already good ( high ) points while large values will make it explore more agressively .evaluate at the selected point and add it to the list of evaluated .+ * ( 5a ) repeat steps ( 2a ) through ( 4a ) .confidence bounds are found by gridding the parameter space ( though no integration or other accumulation is required ) .convergence can be estimated heuristically by observing the size of changes in confidence bounds .maximizing in equation ( [ sstat ] ) implies , to some extent , minimizing .this algorithm therefore seeks out the boundary of the confidence limit , rather than its interior .this yields some efficiency improvements over mcmc . the dependence of on the uncertainty of the predicted candidate value forces the algorithm to simultaneously pay attention to unexplored regions of parameter space .we will see in section [ sec : gp ] that , if a region of parameter space is unexplored , the value of for a candidate point chosen from that region will be very high .this algorithm therefore guards against the second shortcoming of mcmc ( ignorance of disjoint regions of high likelihood ) by explicitly stepping away from regions that are already known to be near the confidence limit and sampling points from poorly - sampled regions of parameter space . empirically compared this dependence on to other information - theoretic dependences and found it to perform best . 
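a minimal sketch of one pass through steps ( 2a)-(4a ) follows . the exact statistic of equation ( [ sstat ] ) is not reproduced in this copy of the text , so the sketch assumes a straddle - style form , kappa * sigma - |mu - chi2_lim| , which rewards candidates that are either close to the target contour or poorly constrained ; `surrogate` stands for any predictor returning ( mu , sigma ) for a batch of candidates ( one such predictor is sketched after the gaussian - process discussion below ) , and `likelihood` is the expensive chi^2 evaluation .

```python
import numpy as np
from scipy import stats

def aps_step(sampled_pts, sampled_chi2, bounds, n_candidates, kappa,
             chi2_lim, surrogate, likelihood, rng):
    """one pass through steps (2a)-(4a): draw candidates, score them with the
    surrogate, and evaluate the true likelihood only at the best candidate."""
    # step (3a): candidates drawn uniformly over the allowed parameter box
    candidates = np.column_stack(
        [rng.uniform(lo, hi, n_candidates) for lo, hi in bounds])
    # surrogate prediction of chi^2 (mu) and its uncertainty (sigma)
    mu, sigma = surrogate(candidates)
    # step (4a): assumed straddle-style statistic; a large sigma (unexplored
    # region) or a mu near the target contour both score highly
    s = kappa * sigma - np.abs(mu - chi2_lim)
    best = candidates[np.argmax(s)]
    chi2_best = likelihood(best)   # the one expensive call per iteration
    return np.vstack([sampled_pts, best]), np.append(sampled_chi2, chi2_best)

# the frequentist threshold used throughout, e.g. for 1199 degrees of freedom
chi2_lim = stats.chi2.ppf(0.95, 1199)
```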
following that reference, we set in equation ( [ sstat ] ) .step ( 4a ) chooses points solely based on their own merit , rather than the ratio of likelihoods used in steps ( 3 m ) and ( 4 m ) of mcmc .this algorithm can therefore apply a purely frequentist test to parameter space .bounds are drawn in parameter space by examining the full set of points sampled and making a scatter - plot of those points which meet the criterion , rather than by integrating over the relative frequency with which the algorithm visited different points in parameter space .this guards against the final shortcoming of mcmc : the inherent subjectivity of bayesian confidence limits .step ( 3a ) of aps generates a random set of candidate points and uses data from the points in parameter space already sampled to predict the value of at each candidate point ( this prediction is in equation [ sstat ] ) and assign an uncertainty to that prediction .the algorithm is agnostic about how this prediction and uncertainty are derived .we follow and use the formalism of gaussian processes ( more specifically : kriging ) to make the predictions .there is a slight difference between our formalism and theirs .this will be explained below .the following discussion comes mostly from chapter 2 and appendix a of .gaussian processes use the sampled data to predict at the candidate point by assuming that the function represents a sample drawn from a random process distributed across parameter space . at each point in parameter space , is assumed to be distributed according to a normal distribution with mean and variance dictated by the covariance function . is the intrinsic variance in the value of at a single point . encodes how variations in at one point in parameter space affect variations in at other points in parameter space .rasmussen and williams ( 2006 ) treat the special case and find ( see their equations 2.19 , 2.25 and 2.26 ) where the sums over and are sums over the sampled points in parameter space . relates the sampled point to the candidate point . we do not wish to assume that the mean value of is zero everywhere .therefore , we modify the equation for to give where is the algebraic mean of the sampled . note the similarity to a multi - dimensional taylor series expansion with the covariance matrix playing the role of the derivatives .equation ( [ sig ] ) differs from equation ( 6 ) in because they used the semivariance \ ] ] in place of the covariance .
in practice ,the two assumptions result in equivalently valid and .the form of the covariance function must be assumed .we choose to use \ ] ] where is the distance in parameter space between the points and .the exponential form of quantifies the assumption that distant points should not be very correlated .the normalization constant ( known as the `` kriging parameter '' for the geophysicist who pioneered the overall method ) also must be assumed .this is somewhat problematic because , examining equation ( [ mu ] ) , one sees that the factors of and completely factor out of the prediction , so that the assumed value of has no effect on the accuracy of the prediction .if the opposite had been true , one could heuristically set to maximize the accuracy of .given that the function we are trying to model ( as a function of set data and a specified theory ) is not a random process , we find no a priori way to set and instead set it according to heuristics that we believe are consistent with the behaviors we desire from the aps algorithm .we discuss this in more detail in section [ sec : user ] .figure [ fig : gp ] applies the gaussian process of equations ( [ sig ] ) and ( [ mu ] ) with assumption ( [ covraw ] ) to a toy one - dimensional function .inspection shows many desirable behaviors in and . as approaches the sampled points , approaches and approaches zero .closer to a sampled point , the gaussian process knows more about the true behavior of the function .far from the , is larger , and the statistic in equation ( [ sstat ] ) will induce the aps algorithm to examine the true value of .uncertainty bounds . for the purposes of this illustration, we assumed that the kriging parameter was . ] figure [ fig : cartoon ] directly compares aps with mcmc by running both on a toy model in two - dimensional parameter space .the model likelihood function is a -distribution with two degrees of freedom .we are interested in the 95% confidence limit , which corresponds to a criterion of the model is constructed so that there are two regions of parameter space that meet this criterion .these are the two ellipses in figure [ fig : wide ] .figures [ fig : zoom_mcmc ] and [ fig : zoom_krig ] zoom in on one of these regions .the open circles correspond to the points sampled by mcmc .each color represents an independent chain ( four in all ) .each of these chains was allowed to run until it had sampled 100 points in parameter space .the blue crosses represent the points sampled by aps after it had sampled a total of 400 points in parameter space .note that no single mcmc chain found both regions of high likelihood .a user would have to run multiple independent chains to be at all certain that she had discovered all of the regions of interest .it is now standard to address this by running several mcmc chains and aggregating their outputs , but it is concerning to leave coverage of the search space to the luck of multiple random restarts .conversely , aps sampled points from distant regions of parameter space .this is how aps ultimately found both regions of high likelihood . 
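the kriging predictor of equations ( [ mu ] ) and ( [ sig ] ) can be sketched as follows . the squared - exponential covariance kappa * exp(-d^2 / (2 ell^2)) , the length scale ell , and the per - dimension normalisation by the allowed parameter ranges are assumptions of this sketch ( the text gives the exponential form but the constants are elided ) ; only the nearest sampled neighbours of each candidate are used , as discussed in the next section .

```python
import numpy as np

def gp_predict(candidates, sampled_pts, sampled_f, ranges,
               ell=1.0, kappa=1.0, n_neighbors=15):
    """predict mu and sigma at each candidate point from its nearest sampled
    neighbours, using the kriging formulas shifted by the sample mean."""
    # ranges: width of the allowed interval in each dimension, so that no
    # single parameter dominates the distance
    scaled_samples = sampled_pts / ranges
    scaled_candidates = candidates / ranges

    def cov(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return kappa * np.exp(-0.5 * d2 / ell ** 2)

    fbar = np.mean(sampled_f)
    mu = np.empty(len(candidates))
    sigma = np.empty(len(candidates))
    for i, x in enumerate(scaled_candidates):
        # keep only the nearest neighbours to keep the matrix inversion cheap
        nn = np.argsort(np.sum((scaled_samples - x) ** 2, axis=1))[:n_neighbors]
        K = cov(scaled_samples[nn], scaled_samples[nn]) + 1e-10 * np.eye(len(nn))
        k_star = cov(x[None, :], scaled_samples[nn])[0]
        mu[i] = fbar + k_star @ np.linalg.solve(K, sampled_f[nn] - fbar)
        sigma[i] = np.sqrt(max(kappa - k_star @ np.linalg.solve(K, k_star), 0.0))
    return mu, sigma
```

note that , as stated in the text , kappa cancels out of mu and only rescales sigma , which is why its value has to be set heuristically .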
indeed , after only 85 calls to the likelihood function , aps had sampled points from both regions of high likelihood .consider figures [ fig : zoom_krig ] and [ fig : zoom_mcmc ] .it is obvious that , even near the most densely - sampled high - likelihood region , aps is principally interested in points on the boundary of the confidence region , while mcmc samples the interior .this is another way in which aps is a more efficient allocation of computing resources than mcmc . finally we can observe the perils of common mcmc convergence heuristics .they generally compare statistics from all the chains in aggregate against individual chains . at convergence, these statistics should be similar .we observe that in the toy example , convergence has not happened because some chains are near one local maximum while some are at the other .however , from a practical point of view all the high likelihood areas have been identified .conversely , if the four chains had been chosen unluckily , they would have all converged to the same local maximum and never reached the other .the convergence tests would have reported success while the reality would have been failure to find both local maxima .we will present a more detailed comparison of mcmc and aps in section [ sec : cartoons ] .aps as presented thus far contains parameters which must be set by the user . we describe them in this section .a summary list is presented at the end of the section . is the number of candidate points considered at each iteration .one consideration that should go into choosing is speed .if is too large , the evaluation of all values of and in step ( 3a ) will become comparably expensive to evaluating the likelihood function and the algorithm will lose some of its efficiency advantage over mcmc . can also affect the algorithm s propensity to explore unknown regions of parameter space . a small adds additional randomness because the selection will be affected more by the luck of choosing candidates than the metric used to evaluate them . is the number of purely random points on which to sample before proceeding to iterate steps ( 2a)-(4a ) .this number need only be sufficiently large to make the initial gaussian process reasonable .it should not be so large as to be equivalent to a fine - gridding of parameter space , which would defeat the purpose of having an efficient search algorithm . in practice, one must specify bounds on each of the parameters beyond which it is not worth exploring .to prevent the relative sizes of each parameter s allowed range from affecting the performance of our gaussian process , we amend the covariance function ( [ covraw ] ) to read \ ] where and denote different points in parameter space and the sum over is a sum over the dimensionality of the parameter space .additionally , in order to prevent the gaussian process from becoming prohibitively slow as more points are added to the set of sampled points , we follow and only use the nearest sampled neighbors of each candidate point when predicting and . is another choice made by the user . to set the kriging parameter , we sample an additional uniform set of points after the initial sample but before proceeding to step ( 2a ) .
for each of these pointswe both predict using the gaussian process and sample the actual .we set equal to the value necessary that 68% of these points have .as the algorithm runs , we periodically adjust so that the search through parameter space strikes a balance between exploring unknown regions of parameter space and identifying regions of parameter space satisfying .note that one can just as easily assume the value of , this , however , runs the risk that the aps algorithm will either fail to explore outside of the discovered high - likelihood points ( if is too small and ) or will ignore the high - likelihood region altogether ( the opposite case ) .we make one final modification to the aps algorithm as currently presented . because of the absolute nature of frequentist parameter constraints , a point in parameter space with likelihood is still not within the confidence limit .if the code finds such a point and immediately resumes its random search through the broader parameter space , it has not learned anything about the allowed values of .we therefore modify the code so that , whenever it finds a point with , it pauses in its search and spends some number of evaluations trying to find a nearby point that is actually inside the confidence limit .to conduct this refined search , we borrow an idea from and note that equations ( [ mu ] ) and ( [ covariogram ] ) offer a straightforwardly differentiable function for in terms of our theoretical parameters .much as equation ( [ mu ] ) does a good job of characterizing the value of at an unknown point , the derivative of equation ( [ mu ] ) should do a good job of characterizing the derivative of at a known point .this allows us to use gradient descent to walk from a point near the likelihood threshold towards a point that is inside the likelihood threshold .the method is as follows . *( 1 g ) starting from the already sampled point which is near the likelihood threshold , assemble a list of the nearest neighbors from the set of sampled points , this time including itself as the absolute nearest neighbor . + * ( 2 g ) differentiate equation ( [ mu ] ) to get and differentiate equation ( [ covariogram ] ) to get + * ( 3 g ) use this derivative to select a point in parameter space a small step away from along the direction that will maximize the change in .sample the likelihood function at this point .this point is now .+ * ( 4 g ) repeat steps ( 1 g ) through ( 3 g ) until you find some fiducial maximum likelihood or you have iterated times , whichever comes first .this modification to aps is referred to as the `` lingering modification . '' to summarize , the aps parameters that must be tuned by the user are ( values in parentheses are the values used on the 6 dimensional parameter space in section [ sec : wmap7 ] ) * ( 1000 ) the number of random distributed starting points evaluated in step ( 1a ) of aps .+ * ( 1000 ) a number of randomly sampled points used to heuristically set the kriging parameter in equation ( [ covraw ] ) . + * ( 250 ) the number of candidate points randomly generated in step ( 3a ) of aps .+ * ( 15 ) the number of nearest neighbors used in the gaussian process . 
+ * ( 100 ) the maximum number of iterations to be spent using gradient descent in the lingering modification .+ * , ( ) the confidence limit threshold value aps is trying to find .+ * , ( ) the value used as a target by gradient descent in the lingering modification .+ * the dimensionality of the parameter space and the maximum and minimum values each parameter is allowed to take .in this section , we will test aps s performance against that of mcmc on a toy likelihood function with two regions of high likelihood .the function exists in four dimensions .the high likelihood regions are centered on the points the function itself is characterized by a statistic with 1199 degrees of freedom , which depends on according to + 1200\exp[-0.5d_2 ^ 2]\nonumber\end{aligned}\ ] ] where are the distances in parameter space from . in this case , the 95% confidence limit threshold corresponds to a value of .figure [ fig : toy ] shows a one - dimensional slice of this function .the red dashed line corresponds to the threshold limit .when testing aps , we set the tunable parameters thus distribution with 1199 degrees of freedom . ] before running either mcmc or aps , we generate three test grids each of 10,000 known points on this likelihood surface .we will use these grids to measure how long it takes mcmc and aps to learn the entire behavior of this likelihood surface . the first test grid is uniformly distributed across the entire likelihood surface ( all parameters are allowed to vary from to ) .the second grid is spherically distributed about the rightmost high likelihood region in figure [ fig : toy ] .the third grid is spherically distributed about the leftmost high likelihood region .the first grid contains no points whose values are below the threshold limit .the second and third grids are roughly half comprised of points that are below the threshold . to compare the efficacy of aps and mcmc at characterizing the likelihood surface, we run each algorithm 200 times . in the case ofmcmc an individual `` run '' consists of four independent chains run in parallel . 
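the calibration of the kriging parameter described in the previous section can be sketched with the `gp_predict` helper above . the condition is partially elided in this copy of the text , so the sketch assumes the natural reading that 68% of the extra calibration points should satisfy |chi^2 - mu| <= sigma .

```python
import numpy as np

def calibrate_kriging(calib_pts, calib_chi2, sampled_pts, sampled_chi2,
                      ranges, gp_predict):
    """choose kappa so that roughly 68% of the calibration points lie within
    one predicted sigma of their true chi^2 (an assumed reading of the text)."""
    # with kappa = 1 the predicted mean is unchanged; sigma scales as sqrt(kappa)
    mu, sigma_unit = gp_predict(calib_pts, sampled_pts, sampled_chi2,
                                ranges, kappa=1.0)
    resid = np.abs(calib_chi2 - mu)
    ratios = np.where(sigma_unit > 0, resid / sigma_unit, np.inf)
    # the smallest kappa for which 68% of points satisfy resid <= sqrt(kappa)*sigma
    return np.quantile(ratios, 0.68) ** 2
```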
during each run, we periodically stop and consider the points sampled by each algorithm thus far .we treat these points as the input to a gaussian process which we use to guess the values of the points on our three test grids .we quantify the efficacy of the algorithms in terms of the number of mischaracterizations made on each grid .we define a `` mischaracterization '' as a point on the test grid which satisfies but for which the gaussian process predicts or vice - versa .figure [ fig : toytest ] shows the performance of the algorithms on this test averaged over all the 200 instantiations of each algorithm .you will note that while both algorithms were essentially perfect at characterizing the widely distributed test grid 1 ( on which no points satisfied ; note the logarithmic vertical axis in figure [ fig : toyg1 ] ) , only aps with lingering successfully and reliably characterized both test grids centered on the high - likelihood regions in figure [ fig : toy ] .this is an illustration of the second shortcoming of mcmc identified in section [ sec : mcmcsketch ] : once an mcmc chain has identified a high likelihood region , it is unlikely to step out of that region and consider the possibility that other high likelihood regions exist .this problem persists even though each mcmc instantiation consists of four independent chains , each with its own opportunity to fall into one or the other high likelihood region .either the chains all fell into one high likelihood region and not the other , or the chains became trapped at the local minimum in equation ( [ toyfn ] ) at . in section [ sec : wmap7 ] we perform a similar test using actual data from the wmap cmb experiment and the 6-dimensional parameter space of the flat concordance cosmology .in this section , we test the aps algorithm on the 7-year data release of the wmap satellite , which measured the temperature and polarization anisotropy spectrum in the cosmic microwave background . for simplicity ,we only consider anisotropies in the temperature - temperature correlation function and modify the likelihood function to work in the space of anisotropy , rather than working directly in the pixel space .this results in a likelihood function sampled from a -distribution with 1199 degrees of freedom . in this case, the criterion for the 95% confidence limit corresponds to with .we take as our parameter space the six dimensional parameter space describing the set of spatially flat ( cosmological constant and cold dark matter ) concordance cosmologies .those parameters are \}$ ]. these parameters will be familiar to users of the mcmc code cosmomc as the relative density of baryons , the relative density of dark matter , the present day hubble parameter , the optical depth to last scattering , the spectral index controlling the scale - dependence of primordial density fluctuations in the universe , and a normalization parameter controlling the amplitude of primordial density fluctuations in the universe .we use the boltzmann code camb to convert from parameter values to anisotropy spectra . in this case , we test aps with the tunable parameter values listed in section [ sec : user ] ; our comparison mcmc is run by the publicly available software cosmomc .we compare the results of the two algorithms both in terms of derived constraints on the cosmological parameters and in terms of exploration of the full parameter space below .
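the grid test described above is a simple counting exercise once a surrogate prediction is available ; a sketch , reusing the `gp_predict` helper from the earlier sketch and the frequentist threshold chi2_lim :

```python
import numpy as np

def count_mischaracterizations(grid_pts, grid_chi2, sampled_pts, sampled_chi2,
                               ranges, chi2_lim, gp_predict):
    """count test-grid points whose predicted side of the chi^2 threshold
    disagrees with their true side (the 'mischaracterizations' of the text)."""
    mu, _ = gp_predict(grid_pts, sampled_pts, sampled_chi2, ranges)
    truly_good = grid_chi2 <= chi2_lim
    predicted_good = mu <= chi2_lim
    return int(np.sum(truly_good != predicted_good))
```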
as discussed in section [ sec : mcmc ] , mcmc determines parameter constraints by integrating over the posterior probability distribution on parameter space while the aps algorithm determines constraints by listing found points which satisfy and determining the region of parameter space spanned by those points .figure [ fig : contours ] compares these two approaches by tracking the development of the 95% confidence limit contour in one two - dimensional slice of our full six - dimensional parameter space as a function of the number of points sampled by each algorithm . in each frame , the solid contours represent contours drawn by considering all of the points found by each algorithm which satisfy the frequentist requirement .the blue contour represents points found by aps .the red contour represents points found by mcmc .the black contour represents points found by mcmc after sampling a total of 1.2 million points .note : for this comparison , we consider all of the points visited by mcmc , not just those points accepted in steps ( 3 m ) and ( 4 m ) from section [ sec : mcmcsketch ] .this is , in some sense , a more complete set of information about the likelihood surface than mcmc usually returns .the dashed black contour is the contour drawn using the usual bayesian inference on the points acccepted by mcmc after the full 1.2 million points have been sampled .the green crosses are the pixels in this 2-dimensional parameter space that bayesian inference believes are inside of the 95% confidence limit after the specified number of points have been sampled .the salient features of this figure are as follows .the solid black contour is larger than the dashed black contour .this means that mcmc visited points that met the frequentist threshold but not with enough frequency to satisfy the 95% bayesian confidence limit .this is an example of mcmc being too restrictive as discussed in section [ sec : mcmcsketch ] .a similar conclusion can be drawn from the fact that the green crosses do not fill the red contour until 400,000 points have been sampled .the green crosses congregate in the center of the contours because mcmc is principally interested in the deep interior of the high likelihood region .this is a manifestation of the inefficiency of mcmc discussed in section [ sec : mcmcsketch ] .the blue and red contours track quite well at all points in the algorithms histories .this shows that aps is at least as good as mcmc at deriving parameter constraints when you treat the points visited by mcmc from a frequentist perspective . comparing the blue contour to the green crosses , in figure [ fig : con100 ], one sees that aps derives accurate parameter constraints faster than mcmc treated from a bayesian perspective .figure [ fig : scatterquant ] recreates figure [ fig : con400 ] , except that the green crosses represent the 50% confidence limit according to bayesian mcmc after sampling 400,000 points . the fact that these pixels still occur outside of the solid black contour ( frequentist mcmc 95% confidence limit after sampling 1.2 million points ) indicates that the false positives in figure [ fig : con400 ] represent a significant fraction of the total posterior probability integrated by bayesian mcmc at this point in the algorithm . in contrast , the aps contour already covers 86% of the final area of the frequentist mcmc contour . 
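drawing the frequentist bounds described above amounts to keeping every sampled point that satisfies the threshold and reporting the region those points span ; a minimal sketch for the per - parameter limits ( the two - dimensional contours in figure [ fig : contours ] are drawn analogously from the same set of points ) :

```python
import numpy as np

def frequentist_limits(sampled_pts, sampled_chi2, chi2_lim):
    """per-parameter confidence limits: the minimum and maximum of each
    parameter over all sampled points satisfying chi^2 <= chi2_lim."""
    good = sampled_pts[sampled_chi2 <= chi2_lim]
    if len(good) == 0:
        return None   # no acceptable points: the model itself may be wrong
    return np.column_stack([good.min(axis=0), good.max(axis=0)])
```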
because of how they are drawn ( ), aps contours will not include false positives .figure [ fig : tracks ] plots the one - dimensional 95% confidence limits on our six cosmological parameters .again , the solid red lines consider all of the points visited by mcmc ( not just those accepted by the chain ) and set the limit according to the frequentist threshold .the blue lines represent the results from aps .note that aps only sampled 420,000 points before we stopped it .mcmc sampled 1.2 million points .the dashed black lines are the confidence limits inferred from bayesian analysis on only those points accepted by the mcmc chains . as in figure [ fig : contours ] , we see that aps converges to the same answers as frequentist mcmc in comparable time , and that both frequentist analyses find allowed parameter values that are missed by the bayesian analysis. cases where the dashed black lines stray outside of the solid red lines indicate bayesian analysis applying spurious weight to low - likelihood points . at this point, we have made the case that aps gives more accurate parameter constraints in less time than the usual , bayesian mcmc analysis .however , even if one were simply to modify their mcmc analyses to adopt a frequentist perspective ( the red contours and lines in figures [ fig : contours ] and [ fig : tracks ] ) , we show in section [ sec : explore ] below that aps exhibits superior performance characterizing the entire likelihood surface , not just the high likelihood subsurface . in the language of section[ sec : cartoons ] , aps gives constraints as accurate as mcmc with greater confidence that you have not ignored any additional regions of high likelihood . except that now the green crosses are the 50% bayesian mcmc confidence limit .the fact that these crosses still occur outside of the solid black contours indicates that false positives account for a large fraction of the total posterior probability integrated by mcmc , even after sampling 400,000 points in parameter space . ]section [ sec : constraints ] examined the performance of the aps algorithm within the high likelihood region of parameter space by comparing the derived parameter constraints to those found by mcmc .this section will consider the performance of the aps algorithm in the low likelihood region of parameter space , asking the question `` how certain can we be that the excluded regions of parameter space contain no points of interest ? '' recall the toy model in section [ sec : cartoons ] : mcmc is notoriously inefficient ( or even ineffective ) at exploring multi - modal likelihood functions . by selecting sample points based both on proximity to the confidence limit and on uncertainty in the gaussian process prediction , the aps algorithm ought to improve on that performance .
to test this hypothesis , we perform the test illustrated in figure [ fig : toytest ] on the wmap parameter space and likelihood surface .we generate two test grids each of 1,000,000 points .one grid is distributed uniformly across parameter space .this grid contains no good points that satisfy the criterion .the other grid is spherically distributed about the vicinity of the high likelihood region and contains 110,000 good points .we then take the points sampled by mcmc and aps ( again , we take all of the points sampled by mcmc ; not just those points accepted in the chains ) at different times in their history and use those points as the input for a gaussian process , which we use to predict the values of the points on our test grid .figure [ fig : grid ] shows the number of points that are thus misclassified ( points for which the gaussian process predicts and vice - versa ) as a function of the number of points fed into the gaussian process . because the wmap 7 likelihood function is so expensive , we only ran this test once . to find the confidence limit ( dashed ) curves , we consider the uncertainty implied by the gaussian process in equation ( [ sig ] ) .the dashed curves encompass points that are within of being misclassified ( i.e. points for which but and vice versa ) .as you can see from figure [ fig : grid1 ] , aps does a much better job at characterizing the uniform test grid than does mcmc .figure [ fig : grid2 ] shows that the two algorithms perform comparably poor on the high likelihood test grid .this is likely due to the compact nature of the high likelihood region of the wmap 7 likelihood surface .it is small both in terms of extent on parameter space and in terms of the difference in between likely and unlikely points .recall that the 95% confidence limit we are considering corresponds to .the smallest found by either algorithm was .this small difference between likely and unlikely points means that even a fraction of a percent error in the value of predicted by our test grid gaussian process will result in a mischaracterization .figure [ fig : err ] shows that this is indeed what happens .here we plot the fractional error in predicted as a function of actual for both algorithms .red curves are mcmc .blue curves are aps .dotted curves are results after sampling 50,000 points . dashed curves are results after sampling 100,000 points .solid curves are results after sampling 400,000 points .as you can see from figure [ fig : errzoom ] , there is indeed imprecision on the order of 1% when considering test points near the 95% confidence limit threshold . 
on a more forgiving likelihood surface with greater between likely and unlikely points, this would not result in the large number of mischaracterizations evident in figure [ fig : grid2 ] .the wmap 7 likelihood surface is anything but forgiving .readers interested in seeing how aps learns about the likelihood surface over time can consider figures [ fig : errwide ] , which recreates figure [ fig : errzoom ] for a broader expanse of , and figure [ fig : chi ] , which shows the fraction of mischaracterized points on both test grids as a function of values .as you can see , while aps learns rapidly about the unlikely regions of parameter space , mcmc remains largely oblivious to what is going on in the regions outside of its integral bounds .these results , combined with the parameter constraints illustrated in section [ sec : constraints ] , cause us to conclude that aps is at least as effective as mcmc at locating regions of high likelihood parameter space , and is significantly more robust against anomalies in regions of low likelihood parameter space .aps as demonstrated here has three principal advantages over mcmc when deriving parameter constraints . *( ia ) aps is more computationally efficient than mcmc in that it does not spend time exploring the interior of high likelihood regions when only their bounds are of interest . as a result it can yield comparable parameter constraints with significantly fewer calls made to expensive likelihood functions .+ * ( iia ) aps allows for simultaneous robust statements about both high and low likelihood regions of parameter space .mcmc is robust only in the high likelihood region it happens to discover . + * ( iiia ) aps allows investigators to apply frequentist assumptions to their parameter constraints .the shortcomings of aps are threefold . *( ib ) aps has no framework for exploring a likelihood function whose form is not known ( this is the corollary to ( iiia ) above ) .you must specify or before running aps .you can not use aps to discover .+ * ( iib ) there is no well - accepted stopping criterion for the algorithm equivalent to the convergence criteria usually applied to mcmc ( for example , ) . however , as we observed on the toy problem and as bryan et al .observed by finding a second high likelihood area in the wmap data , mcmc s convergence provides a false sense of security . + * ( iiib ) aps is much more complicated to implement than the most basic mcmc .we hope that by making our code publicly available , we can help the community to overcome this hurdle .aps may be used as a more efficient alternative to mcmc . they may also be combined .
often, the convergence of mcmc chains is dependent on the size of parameter space to be explored .the larger the region , the slower convergence .investigators can exploit advantage ( ia ) by pre - processing their data with aps and using the discovered high likelihood regions to set the prior bounds for their mcmc analyses .we make our code available at ` https://github.com/uwssg/aps ` .the code is presented as a series of c++ modules with directions indicating where the user can interface with her particular likelihood function .those with questions about the code or the algorithm should not hesitate to contact the authors .sfd would like to thank eric linder , arman shafieloo , and jacob vanderplas for useful conversations about gaussian processes .sfd would also like to acknowledge the hospitality of the institute for the early universe at ewha womans university in seoul , korea , who hosted him while some of this work was done .we acknowledge support from doe award grant number desc0002607 and nsf grant iis-0844580 .
we consider the problem of inferring constraints on a high - dimensional parameter space with a computationally expensive likelihood function . markov chain monte carlo ( mcmc ) methods offer significant improvements in efficiency over grid - based searches and are easy to implement in a wide range of cases . however , mcmcs offer few guarantees that all of the interesting regions of parameter space are explored . we propose a machine learning algorithm that improves upon the performance of mcmc by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high - likelihood regions . we compare our algorithm to mcmc on toy examples and the 7-year wmap cosmic microwave background data release . our algorithm finds comparable parameter constraints to mcmc in fewer calls to the likelihood function and with greater certainty that all of the interesting regions of parameter space have been explored . [ firstpage ]
this year marks the 500th anniversary of the copperplate engraving _melencolia i _ by the great artist - mathematician albrecht d " urer ( 14711528 ) .a good way for math lovers to celebrate this anniversary is to play a time - honored party game " that has attracted art historians and scientists for many years : guessing the nature and meaning of the composition s enigmatic stone polyhedron , illustrated in figure [ setup ] .published attempts at playing the game date back at least a century .in fact , it s a game that may go on indefinitely , because ( 1 ) we have no writings by d " urer definitively saying what the polyhedron is , ( 2 ) our ability to measure and analyze the polyhedron is subject to at least some small errors , and ( 3 ) the image itself has anomalous features , some of which we discuss in the appendix .nevertheless the polyhedron , sometimes known as d " urer s solid " , seems to be accurately drawn for the most part , and authors such as macgillavry have been able to get good fits of their models by allowing for a certain amount of human error in both drawing and measuring . in this articlewe discuss a possible model of the solid and compare it to the results of macgillavry and others .we will try to make the case that one of the most convenient and effective ways to score the game is to use the cross ratio .we show how the cross ratio works as a projectively invariant shape parameter " of the polyhedron , and how it can be used in analyzing various theories of this figure .a commonly accepted rule of the game is the assumption that the polyhedron is formed by starting with a _ trigonal trapezohedron _ or _trigonal deltohedron_a three - dimensional figure whose faces are six congruent rhombi ( a cube is such a figure ) .then , with the longest diagonal held vertically , congruent tetrahedrons are cropped off the top and bottom by horizontal planes , leaving congruent equilateral triangles as the top and bottom faces .the other six faces the truncated rhombi are thus congruent pentagons .the shape of the solid is completely determined by the shape of one of the pentagonal faces , hence many authors describe their model of the solid by describing the shape of one of these pentagons .in addition to rules , the game should have some method for scoring various attempts at it , at least in an informal way .first of all , it makes no sense to play the game if we assume that d " urer was such a poor draftsman that we can simply ignore the engraving and conjecture any shape we want .we therefore require that a proposed model of the solid should have a shape that is close to that derived by perspective analysis of the engraving .in addition to respecting d " urer as an artist , we must acknowledge his reputation as a mathematician . in the words of the respected mathematician morris kline , of all the renaissance artists ,the best mathematician was the german albrecht d " urer " . among other things , d" urer owned a copy of euclid and wrote well - respected treatises on proportion , including significantly for us the golden ratio .thus we assume that d " urer incorporated some kind of interesting mathematical relationships into the design of the solid , rather than just sketching until he found something that looked pleasing to him .a proposed model of the solid should include the description of such relationships .under the assumptions of the game , the shape of the solid is completely determined by two numbers , which we refer to as shape parameters " . 
as we mentioned earlier , the shape of the solid is determined by the shape of one of the six congruent pentagonal faces .the shape of a pentagonal face ( see figure [ generic - face ] ) can be specified by the acute angle of the rhombus and another parameter , which determines the level at which the rhombus is truncated .one obvious candidate for would be the ratio of distances in figure [ generic - face ] ; that is , the fraction of left after truncation . however , this parameter has one disadvantage ; it can not always be measured directly in a perspective drawing of the face , because perspective often distorts the ratios of lengths . and determine the shape of the truncated rhombic face and hence the shape of the solid . ] fortunately , given four collinear points , , , as in figure [ generic - face ] , there is a quantity whose value is not changed by perspective , namely the _ cross ratio _ , among other things . ] defined by for a truncated rhombic face like the one in figure [ generic - face ] , there is a convenient relationship between the cross ratio and the ratio . letting , we have solving ( [ lambda ] ) for gives from ( [ lambda ] ) and ( [ bc / ac ] ) we see that the truncation of the rhombus determines the value of the cross ratio , and the value of determines the truncation of the rhombus .since is projectively invariant that is , unchanged by the distortion of perspective we choose it as the second shape parameter .perhaps the most thorough and frequently cited perspective analysis of the solid is the one done by macgillavry .macgillavry estimated the shape parameters of the pentagonal faces by computing them several different ways , applying the rules of perspective to different parts of the engraving and averaging the results .macgillavry estimated and .figure [ m - face](a ) shows such a pentagon in solid gray , with macgillavry s minimum value of . from this value of and equation ( [ lambda ] )we see that macgillavry s work implies our own estimate of is close to that of macgillavry . in figure [ setup ]we have extended sides of the three visible pentagonal faces to locate the truncated vertices and of the rhombi . for , the lines are the centerlines of the rhombi , the points are the centers , and the points are the truncation points of the centerlines .if our assumptions about the polyhedron are correct , and if the drawing and our measurements were perfect , then the cross ratios would be exactly the same for . in an imperfect world , we of course expect some variation. one of the authors computed these cross ratios by importing a digital image of _ melencolia i _ into the program geogebra , and found to allow for imperfections as macgillavry did , we average these three values to get . the reader may have noticed that these estimates of and are suspiciously close to a famous number : the golden ratio , where following this clue , our model of the solid is based on the golden ratio , with parameters we denote by and .figure [ m - face](b ) shows our proposed model of a face in black outline .the rhombus is inscribed in a pair of golden rectangles , each with height and width .the truncation line is easily found by drawing a line from the center of the rhombus until it meets an edge of the rhombus as shown .it is straightforward to show that , resulting in a cross ratio , via ( [ lambda ] ) , of version of macgillavry s pentagonal face in solid gray . in ( b ) the golden " pentagon in black outline , with the rhombus inscribed in a pair of golden rectangles . 
in ( c ) macgillavry s pentagon in solid gray , and the golden pentagon in dashed black outline .when drawn at this size , they are practically identical . ] thus the cross ratio is the golden ratio .the acute angle is given by the values of and are very close to macgillavry s values of and .in fact , as shown in figure [ m - face](c ) , the two pentagons are nearly indistinguishable ( macgillavry s is solid gray and ours is the dashed black outline ) . for the above reasons we ( somewhat playfully ) refer to the pentagon in ( b ) as a _golden pentagon_. to analyze how close the two pentagons really are , observe that the half - width of the pentagon in figure [ m - face](a ) , divided by the half - width of the pentagon in ( b ) is .thus if the half - widths of the pentagons are about an inch ( they re actually a little less ) , then the difference in the half - widths is less than a thousandth of an inch .the ratios of the heights of each pentagon above the horizontal centerline is .thus if these heights are about a half inch ( again they re a little less ) , then the difference between them is about three thousandths of an inch .we conclude that from an intuitive , visual standpoint , macgillavry s minimum - angle solution of and is essentially the same as ours , minus the golden " formulation of the shape parameters : and .over the years many theories have been advanced as to the shape of d " urer s solid , or equivalently , its pentagonal faces . with so many possibilities , any particular one of them ( including this one ) probably stands only a small chance of being right .the advantage of this abundance of theories is that it can be as much fun testing old theories as it is cooking up new ones .following our methods of the previous section , we show how the cross ratio can be applied to this variation of the game .figure [ 4-grid ] depicts a model conjectured by lynch , who guessed that vertices of the solid project orthogonally onto the lattice points of a grid , thus linking it with a magic square that appears in _melencolia i_. however , if this were true , we would have ( assuming for simplicity that the squares are 1 unit on a side ) , which differs significantly ( about ) from our measured value of and macgillavry s estimate of .we conclude that lynch s idea is not in good agreement with d " urer s perspective rendering of the solid .figure [ 72 ] depicts a model by schreiber having an acute rhombus angle of and a truncation chosen so that the face is inscribed in a circle . buta straightforward computation shows that this requires a ratio of .when substituted into ( [ lambda ] ) , this gives a cross ratio value of , which differs from macgillavry s estimate and ours by more than .this can be checked approximately by using a ruler and protractor to draw the rhombus , and a circle template to choose a circle that passes through its left , right , and bottom vertices .the circle then determines the truncation . in either casethe cross ratio of the resulting pentagon contradicts the evidence of the engraving itself .in addition , the angle is less than macgillavry s minimum value of . ,circle - inscribed face conjectured by schreiber . 
]figure [ sketch ] shows a drawing from a d " urer sketchbook found by weitzel .weitzel conjectured that this drawing is a preliminary sketch of a face of the solid in _melencolia i _ , and estimated the rhombus angle to be , apparently based on the angle at the upper left in figure [ sketch](b ) .however , we noticed that the drawing is not perfectly symmetrical , as the other angle is about .the average of these values is close to macgillavry s lower bound of .a feature of the sketch that is _ not _ consistent with macgillavry s results is the truncation ratio of approximately indicated in figure [ sketch](b ) , apparently intended to allow for a circumscribed circle .this is significantly different from macgillavry s value of for the faces of d " urer s solid . alternatively , letting in equation ( [ lambda ] ) yields a cross ratio for weitzel s model of which is again significantly different from our measured value of and macgillavry s estimate of .this suggests that d " urer may have used an outer rhombus much like that in figure [ sketch ] for the face of his solid , but truncated it differently for some reason . assuming that the sketch really is for _ melencolia i _, our suggestion for the different truncation would be that the resulting face has a cross ratio equal to the golden ratio .under the usual assumptions , the shape of d " urer s solid can be specified by a pair of shape parameters , as illustrated in figure [ generic - face ] .of the two , the cross ratio is the easiest to investigate empirically , because it is projectively invariant .the two parameters , which are independent of one another , are equally important in determining the solid , but in the literature the angle seems to have attracted more attention .for example , ishizu provides a table of values of proposed by various authors going back to 1900 , but no corresponding table of values of , or any other parameter that would determine the truncation of the rhombus .we urge future players of the game to give equal consideration to both parameters , and we suggest the projectively invariant cross ratio as the parameter to determine the truncation .interestingly , ishizu s table shows that in the literature , the proposed values of cluster around either ( largely because it is consistent with the perspective of the engraving ) or because of its connection with the golden ratio a connection that might have attracted d " urer , namely but if that is a nice connection with the golden ratio one that would appeal to d " urer the mathematician then surely the double connection which we propose , must have some appeal as well , especially since it is compatible with the perspective of d " urer s rendering of the solid . fortunately for the anniversary celebration , neither this model nor any other will end the game of guessing the intended shape of the solid .any proposed values of and can agree only approximately with the engraving and , as we show in the appendix , the engraving itself has some anomalous features that make measurements of it even more uncertain .there s plenty of room for more ideas and more players .we invite the reader to play the game and invent a new and better theory ! 12 a. bogomolny , golden ratio in geometry , from interactive mathematics miscellany and puzzles , http://www.cut-the-knot.org/do_you_know/goldenratio.shtml. accessed 27 march 2014 .k. enomoto , d " urer no tamentai ni tsukarete , _ mizue _ * 891 * ( 1979 ) , 8895. h. 
ishizu , another solution to the polyhedron in d " urer s melencolia : a visual demonstration of the delian problem , _ aesthetics _ * 13 * ( 2009 ) , 179194 . m. kline , _ mathematical thought from ancient to modern times , vol . 1 _ , oxford university press , new york , 1972 .t. lynch , the geometric body in d " urer s engraving melencolia i , _ journal of the warburg and courtauld institutes _ * 45 * ( 1982 ) , 226232 .c. h. macgillavry , the polyhedron in a. d " urer s melencolia i : an over 450 years old puzzle solved ? , _ nederl .b _ * 84 * ( 1981 ) , 287294 .s. r " osch , die bedeutung des polyeders in d " urers kupferstich ` melancholia i ' ( 1514 ) , _ fortschritte der mineralogie _ * 48 * ( 1970 ) , 8385 .p. schreiber , a new hypothesis on d " urer s enigmatic polyhedron in his copper engraving melencolia i " , _ historia mathematica _ * 26 * ( 1999 ) , 369377 .h. weitzel , a further hypothesis on the polyhedron of a. d " urer s engraving ` melencolia i ' , _ historia mathematica _ * 31 * ( 2004 ) , 1114 .d " urer s rendering of the solid has certain anomalous features , a couple of which are depicted in figure [ anomaly ] .the solid line segments , , and are translations of one another and therefore parallel .observe that segments and very nearly coincide with edges of the solid , while noticeably diverges from the nearest edge .so let us turn our attention away from the lines in the image to the lines in the ( imagined ) actual solid object , sitting in space .if the solid is a trigonal trapezohedron as is usually assumed , the three edges that we are considering are parallel in space to one another , hence their images must be concurrent at a vanishing point .but the images nearest and are essentially parallel , hence their vanishing point is at infinity ( a so - called _ ideal point _ ) . on the other hand ,the image of the edge nearest is clearly not parallel to the images of the other two edges , so the images of the three edge lines are not concurrent at any point , ideal or ordinary .this feature is either an inconsistency in the drawing , or else d " urer s conception of the solid contradicts the usual assumptions . to illustrate another anomaly in figure [ anomaly ] ,we have also drawn the dashed line segments and parallel to one another .segment coincides with the image of a face diagonal , while is close to but deviates slightly from the image of a bottom edge . under the usual assumptions about the solid , the diagonal and the edge are parallel in space ,hence their images in the engraving should converge to a vanishing point on the horizon line , above and to the left of the picture in figure [ anomaly ] .but notice that as the image of the bottom edge goes to the left , it dips below line , hence it actually diverges from line ( or equivalently , it converges in the wrong direction ) .the bottom edge should in fact rise above line ( or the diagonal should dip below line ) as it goes to the left .the above errors in the drawing , if they are errors , are not drastic .indeed , most authors seem not to have noticed them . however , they highlight the importance of computing the parameters of the truncated rhomboid in multiple ways , using different parts of the image , perhaps even avoiding certain parts if they seem to be too inaccurately drawn .
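The perspective consistency test described in this appendix, namely whether the images of edges that are parallel in space meet in a common vanishing point, can be automated with homogeneous coordinates. A minimal sketch follows; the pixel coordinates below are invented for illustration and are not measurements of the engraving.

```python
import numpy as np

def image_line(p, q):
    """Homogeneous coordinates of the image line through two pixel points,
    normalized so that the first two components form a unit normal."""
    l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    return l / np.hypot(l[0], l[1])

def meet(l1, l2):
    """Intersection of two lines; a zero last coordinate means an ideal point
    (the lines are parallel and 'meet' at infinity)."""
    return np.cross(l1, l2)

# Hypothetical endpoints of the images of three edges that are parallel in
# space; in a consistent perspective drawing the three image lines must be
# concurrent at a single, possibly ideal, vanishing point.
l1 = image_line((10.0, 12.0), (40.0, 90.0))
l2 = image_line((60.0, 15.0), (90.0, 93.0))    # exactly parallel to l1
l3 = image_line((110.0, 20.0), (138.0, 97.0))  # slightly tilted, like the divergent edge described above

v = meet(l1, l2)                 # candidate vanishing point from the first two edges
residual = float(np.dot(l3, v))  # zero exactly when the third edge line passes through v
print("vanishing point (homogeneous):", v)
print("residual of third edge:", residual)
```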
On the 500th anniversary of Albrecht Dürer's copperplate engraving _Melencolia I_, we invite readers to join a time-honored "party game" that has attracted art historians and scientists for many years: guessing the nature and meaning of the composition's enigmatic stone polyhedron. Our main purpose is to demonstrate the usefulness of the cross ratio in the analysis of works in perspective. We show how the cross ratio works as a projectively invariant "shape parameter" of the polyhedron, and how it can be used in analyzing various theories of this figure. Keywords: Dürer; _Melencolia I_; engraving; solid; polyhedron; rhombohedron; perspective; cross ratio. MSC: 51N15; 51N05.
the kelly strategy is attractive to practitioners because of its robust and optimal properties , e.g. that it dominates any other strategy in the long run and minimizes the time to reach a target , as shown by , and among others by , , , , , .it also carries intuitive appeal because of its connection to information theory , which was part of the original motivation in , and in particular by the fact that optimal investing is one and the same thing as optimal application of the available information ; see and . on the other hand , the kelly strategy in its pure unconstrained form , which we will refer to as the ` free kelly ' strategy , generally leads to a too aggressive strategy with too high leverage and risk to be of practical applicability .real investors are subject to various constraints , and a practical version of the kelly principle can be formulated as : optimize the objectively expected compound growth rate of the portfolio subject to the relevant constraints .it is in this form that the kelly principle has become increasingly popular among practitioners over the last 20 years or so . in this paperwe are concerned with the optimal strategy in this sense , for the particular case where the constraint is in the form of a stop - level .unlike for most other forms of constraints , the effect of a stop - level on the optimal strategy varies with the portfolio value and with the time remaining until the stop - level is adjusted ( reset ) .the related problem of finding the growth optimal strategy for a portfolio subject to a drawdown constraint ( a trailing stop - loss rule ) was solved using a combination of hjb and martingale methods by , see also and section [ grossmanzhou ] below .the plan of the paper is as follows .section [ sec : moti ] provides some background , and section [ sec : hjb ] reviews the hjb for the investment problem and introduces notation . in section [ sec : nonlinpde ] we derive a non - linear pde for the optimal strategy , and in section [ sec : known ] we consider a few examples with known solutions , and discuss to what extent the equation can be applied . in section [ koswslr ]we provide a numerical solution for the kelly strategy subject to a periodically reset stop - loss rule . section [ sec : conclusion ] concludes .consider a discretionary portfolio manager running a trading book in a global macro hedge fund .such a trader is subject to various constraints when selecting the appropriate level of risk to run .apart from leverage , stress - test and concentration limits and maybe constraints on the asset selection , the book will typically be subject to a value - at - risk ( var ) limit . in practicemost traders will run their book significantly below these limits because there is another kind of constraint : the stop level .the stop may take several forms .it is for example common for a book with monthly liquidity to be subject to explicit soft and hard stop levels between liquidity dates . a book with a var limit at 1 day , could for example have a soft stop at mtd ( month - to - date ) where the var limit drops to and a hard stop at mtd where the var limit drops to zero for the remainder of the month . in any given monththe stop - level is a fixed number of percentage points below the portfolio value at the beginning of the month .any triggered stop will be released and reset after the first coming month - end , where investors will be given a chance to withdraw their funds . 
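As a concrete illustration of this kind of rule, the current risk limit can be written as a simple function of month-to-date performance. The thresholds in the sketch below are placeholders chosen for the example only; the specific limits quoted in the text are not reproduced here.

```python
def var_limit(mtd_pnl, base_limit=0.01, soft_stop=-0.03, hard_stop=-0.05,
              reduced_limit=0.005):
    """1-day VaR limit as a function of month-to-date P&L, both expressed as
    fractions of the capital at the start of the month.  All threshold values
    are hypothetical placeholders."""
    if mtd_pnl <= hard_stop:
        return 0.0             # hard stop: no risk for the remainder of the month
    if mtd_pnl <= soft_stop:
        return reduced_limit   # soft stop: the VaR limit is cut
    return base_limit

print(var_limit(-0.01), var_limit(-0.035), var_limit(-0.06))  # 0.01 0.005 0.0
```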
hired ( non - partner )traders managing a relatively small sub - book as part of a larger hedge fund are typically also subject to an explicit terminal stop at around ytd ( year - to - date ) , at which point the book is closed and they lose their job . even founders and senior partners , who may not be subject to any explicit stop levels , will in effect operate under an implicit stop level .this is due to the fact that if the drawdown from the high - water mark becomes too large , then there will be no performance fees for the foreseeable future to pay for the operation , and the fund is likely to find itself in a downward spiral of brain - drain , investor withdrawals , and lower returns .the desire to avoid this situation will in effect mean that the fund can only operate at full risk when it is near its high - water mark .indeed , it has happened more than once that a hedge fund has simply closed after a steep drawdown , only for the managers to open a new fund in a different setting .history has of course also shown what can happen when a trading operation is run without stops or with unenforceable stops due to illiquidity or management failure .leaving the moral aspects aside , it is clear that stop - levels , implicit or explicit , are a major part of the constraints limiting the risk in most funds , and that the constraining effect of the stop level scales with the portfolio value and the time remaining until the stop level is reset .the penalty for hitting the stop is lost opportunity . for a terminal stopthe opportunity cost is potentially infinite since the loss is all future earnings .a strategy can try to avoid hitting the stop by reducing the risk as the portfolio value approaches the stop level , but in practice it is impossible to guarantee that the stop will not be hit unless the risk is zero . in a model world where the risky assets are modelled using brownian motion , it is possible to avoid hitting the stop by reducing the risk , but this does not mean that there is no opportunity loss . scaling down the risk also scales down potential returns , and increases the amount of time the portfolio end up spending in the vicinity of the stop . thus there is a kind of ` dead zone ' just above the stop level , where the portfolio has very little risk , and it can become virtually stuck . the optimal strategy must find the right balance between scaling the risk down too quickly versus too slowly , foregoing potential returns in both cases . with a terminal stop , or an infinitely long time to reset, the solution is to effectively ignore the capital below the stop level , and invest in the kelly optimal way as if the capital above the stop level was the total capital . doing this , the trader is only risking the fraction of the capital he can survive to lose .this strategy which is independent of time , but a function of the portfolio value is discussed in section [ sec : kwts ] , and the related optimal strategy for a portfolio subject to a trailing maximum drawdown limit is discussed in [ grossmanzhou ] . a stop level which resets at the end of each period is more interesting from a practical point of view . 
with a periodically reset stop - levelthe penalty for hitting the stop , or becoming stuck in the dead zone , is less severe , since it is only a temporary loss of opportunity .if there is only a few days left before the reset , the portfolio value must be very near the stop - level for this to have any effect , but in the other limit where there is a very long time to the reset , the optimal strategy must approach the same strategy as the one for a terminal stop / infinite horizon .it is therefore clear that an optimal strategy will scale the risk up and down , not only according to how far away the stop is from the current portfolio value , but also according to the time remaining until the stop - level reset .it is the kelly growth optimal strategy for this problem , discussed in detail in section [ koswslr ] , which is our main objective .a realistic modelling of the problem would be very complicated , depend on idiosyncratic details , and is beyond our ambitions in this paper .instead our goal is to study the optimal strategy in the simplest non - trivial model with a stop - rule .as it turns out , the resulting strategy , discussed in section [ koswslr ] , corresponds quite closely to common trader intuition , and can provide a rough guide to the appropriate risk level for practical portfolios .we use the standard minimal portfolio model consisting of a risky asset displaying geometric brownian motion and an exponentially growing risk - free asset , respectively defined by the evolution with the growth rate and the volatility of the risky asset , the risk - free rate , and the forward looking increment of a wiener process .a self - financing trading strategy , which invests the fraction in the risky asset and the fraction in the risk - free asset , will result in a portfolio value which evolves according to the equation + \alpha\sigma dw_t.\ ] ] the analysis can be simplified by introducing the discounted relative portfolio value we can consider to be an index for the portfolio with and with the total return at time as measured in time zero dollars .alternatively , if is the rate of inflation , it can be considered the real return .this definition eliminates the constant risk - free drift term from the evolution equation for the discounted portfolio value ,\ ] ] where we have indicated that the strategy may depend on the value of the portfolio as well as time .we are concerned here with problems for which the objective is the maximization or minimization of a terminal ` reward ' function . for a kelly growth optimal portfoliothis would be the period growth rate .the value function is defined as the expectation of the terminal reward over paths starting from the intermediate point at time , and evolving according to ( [ eq : hjb : dpi ] ) with the optimal control , i.e..\ ] ] it is significant that we have excluded a consumption type term in the objective , ( a running cost / reward ) , as it would complicate the analysis considerably , but using the methods of appendix [ app : legendre ] such terms can be handled at least in simple cases ( by adding the equivalent terms to ( [ eq : lt06 ] ) ) .the value function and the control satisfies the hamilton - jacobi - bellman equation which must be solved subject to the final time boundary condition the equations ( [ eq : hjb2])-([eq : hjb2cond ] ) can also be used to find the strategy minimizing the expectation of a terminal objective by replacing by . 
in either casethe objective function must have a form that makes the problem well - defined . as it standsthe hjb problem defined by ( [ eq : hjb2 ] ) and ( [ eq : hjb2cond ] ) is rather complicated as it involves two unknown functions and linked by a combined pde / maximization equation .the trick which renders the problem solvable is that under certain conditions we can perform the optimization over the control before solving for , ( see e.g. ) .the condition is essentially that must have a shape which makes the optimization problem in ( [ eq : hjb2 ] ) have a well - defined solution .this must be confirmed a posteriori for the actual solution , ( but in many applications it is rather obvious ) .in essence must be concave for a maximization problem and convex for a minimization problem . according to bellman s principle, the optimal control must also be optimal over each sub - interval . in the present contextthis means that must optimize ( [ eq : hjb2 ] ) at each instant of time .this is effected by setting , which gives the formal solution substituting this back into ( [ eq : hjb2 ] ) , the control is eliminated resulting in the hjb equation for the value function alone the problem is then reduced to solving ( [ eq : hjb : value ] ) subject to the terminal time boundary condition ( [ eq : hjb2cond ] ) together with any additional boundary conditions ( for example conditions specified on the edges , which could be asymptotic ) .the standard procedure is thus to start by solving for the value function , and then as the second step to calculate the optimal strategy from ( [ eq : hjb : value : control ] ) .a hint that ( [ eq : hjb : value ] ) hides a simpler underlying structure is provided by the fact that a legendre transform turns it into a linear second order pde as discussed in appendix [ app : legendre ] .it is natural to ask , if there is a pde for the optimal strategy without any reference to the value function .such an equation would not only be interesting in its own , but would potentially allow us to circumvent the value function , and could have considerable practical benefits . in this sectionwe show how it is possible to eliminate the value function , and derive a non - linear partial differential equation which is obeyed by the optimal control itself .the derivation requires that the functions involved are sufficiently smooth in the interior of the domain , but in practice this is not necessarily a great limitation .the starting point is the hjb for the value function ( [ eq : hjb : value ] ) and the equation for the optimal control ( [ eq : hjb : value : control ] ) .if we take the hjb equation ( [ eq : hjb : value ] ) together with the equations which follow by operating on it with and and the equation for the optimal control ( [ eq : hjb : value : control ] ) together with the equations which follow by operating on both sides by , and , then we have a system of seven equations for the unknowns , , , , , , , , , , .our aim is to derive a single equation only involving the stochastic control and its derivatives , , , . 
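Before completing that elimination, the pointwise optimization over the control quoted earlier in this section can be checked symbolically. This is a sketch under the assumption that the control-dependent part of the HJB generator has the standard Merton form, alpha*pi*(mu - r)*J_pi + (1/2)*alpha^2*pi^2*sigma^2*J_pipi, with the value-function derivatives treated as opaque symbols.

```python
import sympy as sp

alpha, pi, mu, r, sigma = sp.symbols('alpha pi mu r sigma', positive=True)
J_p, J_pp = sp.symbols('J_pi J_pipi')   # first and second derivatives of J w.r.t. pi

# Control-dependent part of the HJB generator (assumed Merton-type form).
H = alpha * pi * (mu - r) * J_p + sp.Rational(1, 2) * alpha**2 * pi**2 * sigma**2 * J_pp

alpha_star = sp.solve(sp.diff(H, alpha), alpha)[0]
print(alpha_star)
# expected: -(mu - r)*J_pi / (pi*sigma**2*J_pipi), i.e. proportional to J_pi/J_pipi
```

The result depends on the value function only through the ratio of its first and second derivatives, which is exactly the property exploited in the elimination that follows.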
on the surfaceit would look like we would need eight equations in order to be able to eliminate the seven derivatives of the value function and still be left with a single equation for the control , but note that the stochastic control as given by ( [ eq : hjb : value : control ] ) only depend on the value function through the ratio between the two derivatives , which means that only six equations are needed to eliminate all the s .if we start by using four equations to eliminate , , , , the three remaining equations can be put in the form ,\\\partial_{\pi}^2 j & = \frac{(\mu - r)^2}{2\sigma^2 } \frac{(\partial_{\pi } j)^2}{\partial_t j}.\end{aligned}\ ] ] this gives us two equations for the ratio , respectively .\end{aligned}\ ] ] it follows that the optimal strategy must obey the equation this is a non - linear pde which is second order in the portfolio value and first order in time .the non - linearity is manifested in the factor on the right hand side . like the hjb for the value function, it must be solved backwards in time from a final time boundary condition . in general the solutionmust also obey boundary conditions in the -direction which may be asymptotic .it is reminiscent of a non - linear diffusion equation in reverse time , but it does not strictly have the required structure ( see appendix [ app : legendre ] ) . the equation is independent of the effective drift rate , but dependency on this quantity may enter via the boundary conditions . since any constant will solve ( [ eq : alpha : pde ] ) , it is of little use for problems with such a solution .rather ( [ eq : alpha : pde ] ) is applicable to problems requiring a non - trivial solution in the sense of an optimal strategy which is a non - constant function of the portfolio value and time .the second caveat is that we must be able to establish the boundary conditions for the control for example from limit or asymptotic properties which the solution must obey .for some problems it may not be obvious how to find these boundary conditions in which case it will be necessary to revert to the hjb for the value function .if , on the other hand , we have a strategy solving ( [ eq : alpha : pde ] ) , then the corresponding value function can be found from the first order equation ( [ eq : alpha2value ] ) from which it is determined up to a constant .the equation ( [ eq : alpha : pde ] ) can be transformed into different forms which may be more or less convenient for a particular investment problem .some examples are discussed in appendix [ app : transf ] .an explanation for the form of the equation , and the nonlinearity in particular , can be found as follows .if we describe the optimal strategy by the amount invested in the risky asset rather than by the fraction of the portfolio value , i.e. using , then the equation takes the form equation ( [ eq : gamma : pde ] ) can also be obtained directly from the legendre transformed hjb problem as discussed in appendix [ app : legendre ] .let denote the discounted risky asset price and the discounted bond price as per ( [ eq : model01])-([eq : model02 ] ) and ( [ eq : disc ] ) .then and , and the discounted portfolio value evolves according to the prescription the process ( [ eq : port : gamma ] ) is self - financing and is a special case of the general self - financing feedback strategy with the self - financing condition , with , where is the number of risky asset shares and the number of risk - free bonds in the portfolio at time . 
since , not contribute to the dynamics , but is a slave keeping track of the accounting of the cash generated / needed for the re - balancing . in our case the portfolio ( [ eq : port : gamma ] ) can be decomposed as , with the value of the risky assets , and the value of the risk - free investment . whereas the portfolio is self - financing by construction , the risky asset investment , ,does not alone represent a self - financing quantity because a general self - financing strategy can redistribute the portfolio freely between and as long as it keeps the sum unchanged . because we are focussing on smooth strategies here , we can link the increment to the increments in the portfolio value , , and time , , via the expansion + ( \partial_{\pi}\gamma)d\pi_t.\end{aligned}\ ] ] it follows that when represents an optimal trading strategy obeying ( [ eq : gamma : pde ] ) , this reduces to the expression this means that although is not self - financing for a general sub - optimal trading strategy , it will , for the optimal trading strategy , behave as a self - financing quantity with the total increment in ( from market evolution plus re - balancing ) directly proportional to the increment in the portfolio ( which because the portfolio is self - financing is given by market evolution alone ) .the same holds for the risk - free part of the portfolio which for the optimal strategy obeys .we note in particular that the nonlinearity of ( [ eq : alpha : pde ] ) and ( [ eq : gamma : pde ] ) is a consequence of the feedback nature of the portfolio evolution as specified by equation ( [ eq : port : gamma ] ) , and that the optimal strategy displays a certain balance between the risky and risk - free parts of the portfolio by only redistributing between these in proportion to the total increment in the portfolio .in this section we comment on the application of the strategy equation ( [ eq : alpha : pde ] ) to some standard investment problems , where there is a known analytic solution to the hjb equation .the purpose is to investigate to what degree ( [ eq : alpha : pde ] ) can be applied with particular focus on the boundary conditions needed .also [ sec : conststrat ] and [ sec : kwts ] establishes solutions which the optimal strategy in section [ koswslr ] must approach asymptotically .the examples have been chosen to illustrate problems where the optimal strategy is respectively constant , [ sec : conststrat ] , value dependent , [ sec : kwts ] , and dependent on both value and time , [ sec : brownestrat ] .the free kelly strategy is the unconstrained strategy maximizing the period growth rate and can also be interpreted as the strategy maximizing the terminal logarithmic utility , i.e..\ ] ] the well - known optimal strategy , , is to invest a constant fraction , of the portfolio in the risky asset .the corresponding value function is given by being constant ( [ eq : kelly : alpha ] ) is a trivial solution to ( [ eq : alpha : pde ] ) .a slight generalization of ( [ eq : kellyutil ] ) is the power function utility , with , which is also referred to by the acronym crra for constant relative risk aversion .the value function for this utility is given by and the corresponding optimal strategy is the constant the free kelly strategy corresponds to the special case , and a fractional kelly strategy , , investing a constant fraction of the free kelly strategy , corresponds to optimizing a power function utility with .a rational investor would never invest more than the free kelly strategy unless forced to 
do so by some constraint .this means that would normally be less than one and therefore negative corresponding to a utility which is capped to the upside .benchmarked fractional kelly strategies have recently been investigated in detail by .if an investor has a critical level below which his wealth must never fall , and therefore can only put at risk the part of his capital exceeding this level , then the capital below the critical level is in effect irrelevant to his utility .a kelly investor will therefore optimize the growth rate of the capital exceeding the critical level .this goal is expressed in the value function ,\ ] ] with the hjb solution given by the corresponding optimal strategy is this is the unconditional free kelly strategy multiplied by the fraction that the excess capital constitutes out of the total capital . we observe that ( [ eq : kf03 ] ) is a solution to ( [ eq : alpha : pde ] ) because it is independent of time and obeys .it is therefore completely defined as the time independent solution to ( [ eq : alpha : pde ] ) subject to the boundary conditions and for .we note that ( [ eq : kf03 ] ) is a ` constant proportion portfolio insurance ' ( cppi ) strategy with the multiplier equal to the free kelly strategy , i.e. the amount invested in the risky asset is .more generally we conclude that any time independent optimal strategy must take the form for some constants and . , ( see also ) , have solved the problem of finding the optimal strategy for an investor subject to a maximum drawdown rule for a general class of utility functions .the essence of their solution can in the present context be stated as follows .investors who in accordance with the kelly principle are seeking to maximize the long term real / discounted capital growth rate ,\ ] ] subject to the maximum drawdown constraint ( with a given constant ) where is the high - water mark should follow the strategy of investing an amount equal to the free kelly strategy times the excess over the moving stop level in the risky asset , i.e. , or in other words , follow the strategy we note the obvious similarity between ( [ eq : kf03 ] ) and ( [ eq : gz04 ] ) , but refer to the references above for more information . has shown that the problem of finding the strategy which maximizes the probability of reaching a specified real return target , , in a finite time period , , has the following solution .as argued by browne , the probability that the target is hit at any time is equivalent to the probability that the terminal wealth hits the target , because once the target is reached the probability maximizing strategy would be to invest exclusively in the risk - free asset .the value function for this problem is thus defined by with our notation browne s solution to the problem is given by the value function ( with ) and the corresponding optimal strategy is given by with , and with the standard normal density and the corresponding cumulated probability function .straight forward differentiation will show that ( [ eq : browne:03 ] ) is indeed a solution to ( [ eq : alpha : pde ] ) . as shown in appendix[ app : transf ] , ( [ eq : browne:03 ] ) is an example of a solution which can be found from ( [ eq : alpha : pde ] ) via separation of the variables . it would nevertheless have been difficult to arrive at ( [ eq : browne:03 ] ) directly from ( [ eq : alpha : pde ] ) , because although the boundary condition is obvious , this is not the case for other boundary conditions . 
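For later reference, the simpler benchmarks of this section (the free Kelly fraction, a fractional Kelly, and the Kelly rule that only risks capital above a floor) are collected below. This is a sketch assuming the usual expression alpha_K = (mu - r)/sigma^2 for the free Kelly fraction; Browne's time-dependent strategy is omitted, and the parameter values are illustrative only.

```python
def free_kelly(mu, r, sigma):
    """Unconstrained (free) Kelly fraction of wealth held in the risky asset."""
    return (mu - r) / sigma**2

def fractional_kelly(mu, r, sigma, c):
    """Invest a constant fraction 0 < c < 1 of the free Kelly fraction."""
    return c * free_kelly(mu, r, sigma)

def kelly_with_floor(pi, pi_c, mu, r, sigma):
    """Kelly strategy protecting a critical level pi_c: the free Kelly fraction
    is applied to the excess capital only, so the fraction of total wealth is
    alpha_K * (pi - pi_c) / pi and the amount at risk is alpha_K * (pi - pi_c),
    a CPPI-type rule with multiplier alpha_K and floor pi_c."""
    if pi <= pi_c:
        return 0.0
    return free_kelly(mu, r, sigma) * (pi - pi_c) / pi

mu, r, sigma = 0.08, 0.02, 0.20
print(free_kelly(mu, r, sigma))                   # 1.5 : leveraged free Kelly
print(fractional_kelly(mu, r, sigma, 0.5))        # 0.75: half Kelly
print(kelly_with_floor(1.0, 0.9, mu, r, sigma))   # 0.15: only the 10% excess is at risk
```

Browne's probability-maximizing strategy, to which the discussion now returns, depends on time as well and behaves very differently near the horizon.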
the optimal strategy ( [ eq : browne:03 ] )diverges in the limit to such a degree that the probability of hitting the target level approaches a finite value as the remaining time goes to zero , i.e. for .a strategy with finitely capped leverage would have for .we consider the investment problem of finding the strategy which optimizes the long term growth rate when investment is subject to a periodically reset stop - loss rule . to study this problemwe consider the minimal model introduced in ( [ eq : model01])-([eq : hjb : dpi ] ) with an investment universe consisting of one risky and one risk - free asset .the stop - loss rule means that if in a given period the value of the portfolio drops to a certain level ( the stop - level ) , then the risk limit drops to zero , and the portfolio must be invested exclusively in the risk - free asset for the remainder of that period , and therefore the portfolio value will remain at . at the beginning of a new period of duration the stop - loss rule is reset , and the level is set a fixed number of percentage points below the opening portfolio value . in this modelconsecutive investment periods are therefore both independent and identical in distribution .we seek a strategy defined on the domain \times [ 0,t] ] .finally it is clear that sufficiently near the final time boundary as with a portfolio value strictly above the stop - level , , the optimal strategy will approach the free kelly strategy with the limit for .this follows from the fact that the underlying process is continuous in time , and therefore in the limit the stop - rule becomes irrelevant for a portfolio with finite leverage when the portfolio value is strictly above the stop - level , because there is not time enough left for the portfolio value to drop to the stop - level . to summarize we have the three dirichlet boundary conditions , \\\alpha(\pi_c , t)=0 , & \;\;\text { for } t\in [ 0,t],\\\ \alpha(\pi , t)=\alpha_k , & \;\;\text { for } \pi>\pi_c.\end{aligned}\ ] ] to solve the problem we employ the strategy laid out in ( [ eq : transf01])-([eq : transf04a ] ) in appendix [ app : transf ] , i.e. change variables from to and use the scaled optimal strategy .the new variables are respectively the scaled reciprocal portfolio value and the scaled time to period end with .the domain of definition for the variables is \times [ 0,\infty] ] .the ` dead zone ' is indicated in red and the free kelly limit in purple .( b ) a specific example for an ex - ante sharpe ratio of and a period of one month .the scaled strategy is plotted as function of time to period end for four different portfolio values specified by the distance to the stop level in percent . from the bottomthe four curves correspond to stop levels which are respectively , , , below the current portfolio value .time to period end is indicated on the abscissa in weeks.,title="fig : " ] 0.35 for \times [ 0,1] ] ) . to derive the corresponding equation for we have to remember that there is an implicit time - dependence in when is kept fixed and therefore .otherwise the manipulations are straight forward and the result is ( [ eq : gamma : pde ] ) .the non - linear pde for the optimal strategy ( [ eq : alpha : pde ] ) can be transformed in many ways . 
herewe focus on a few with potential applications .let denote the free kelly strategy , and let denote the scaled control if we introduce the scaled dimensionless time variable then measures the time remaining in units of the characteristic time , and the equation for the optimal strategy takes the form let denote a portfolio value of particular interest .depending on the setting this could for example be the starting value , a stop - loss , or a target level .we can then define the scaled reciprocal portfolio value by the optimal strategy then obeys the equation this form is convenient for investment problems with a stop level , because translates to ] , this means that for , with , the respective boundary conditions . for an investment problem with an upper target level the reciprocal portfolio value is conveniently defined as , and the domain then translates to . in this casethe asymptotic behavior as for a finite value of would be to approach a constant , , which equals the boundary value of the control at the target level .the equation ( [ eq : transf04a ] ) for the optimal strategy has separable solutions of the form with constants and , and where the function must solve the ode browne s probability maximizing strategy ( [ eq : browne:03 ] ) belongs to this class .the transformed solution resulting from the change of variables and scaling can be written as , with , where is the target level .the function , which solves ( [ eq : transf08 ] ) with , is given by where and refer to respectively the pdf and cdf of the standard normal distribution . alternatively , ( [ eq : transf03 ] ) can be simplified in analogy with ( [ eq : gamma : pde ] ) by the transformation , which gives the pde for the scaled strategy variable .the equation ( [ eq : transf06 ] ) has self - similar solutions of the form , where for some constants , and , and where the similarity function must solve the ode [ lastpage ]
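As a closing remark on the numerical side, the expected period growth rate of any candidate strategy under the periodically reset stop-loss rule of section [koswslr] can be estimated by direct Monte Carlo, which is a useful sanity check on the PDE solution. The sketch below uses a simple Euler step for the discounted portfolio and freezes the portfolio at the stop level once it is hit, as the rule prescribes; all parameter values are illustrative.

```python
import numpy as np

def period_growth_rate(alpha, pi_c=0.9, mu=0.08, r=0.02, sigma=0.20,
                       period=1.0 / 12.0, n_steps=21, n_paths=20000, seed=0):
    """Monte Carlo estimate of E[log(pi_T)] over one stop-loss period, starting
    from pi_0 = 1.  Once the discounted value reaches the stop level pi_c the
    book is moved entirely to the risk-free asset, so pi stays at pi_c for the
    rest of the period.  alpha(pi, t) must accept numpy arrays for pi."""
    rng = np.random.default_rng(seed)
    dt = period / n_steps
    pi = np.ones(n_paths)
    stopped = np.zeros(n_paths, dtype=bool)
    for i in range(n_steps):
        a = np.where(stopped, 0.0, alpha(pi, i * dt))
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        pi = pi * (1.0 + a * ((mu - r) * dt + sigma * dw))
        hit = pi <= pi_c
        pi = np.where(hit & ~stopped, pi_c, pi)   # freeze at the stop level
        stopped |= hit
    return float(np.mean(np.log(pi)))

alpha_k = (0.08 - 0.02) / 0.20**2                 # free Kelly fraction
print("free Kelly  :", period_growth_rate(lambda p, t: np.full_like(p, alpha_k)))
print("floor-aware :", period_growth_rate(lambda p, t: alpha_k * np.maximum(p - 0.9, 0.0) / p))
```

Comparing such estimates for a family of candidate strategies, including the numerically computed optimum, confirms within the model that the solution of the strategy equation dominates the simpler benchmarks.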
From the Hamilton-Jacobi-Bellman equation for the value function we derive a non-linear partial differential equation for the optimal portfolio strategy (the dynamic control). The equation is general in the sense that it does not depend on the terminal utility, and it provides additional analytical insight for some optimal investment problems with known solutions. Furthermore, when boundary conditions for the optimal strategy can be established independently, it is considerably simpler than the HJB equation to solve numerically. Using this method we calculate the Kelly growth-optimal strategy subject to a periodically reset stop-loss rule.
searching is undoubtedly one of the most basic problems in computer science and computational physics . in this context ,searching is not just restricted to a physical database but could also be searching through a state space for an entry which fulfills a specific clause such as the constraint satisfiability problem ( -sat ) .the classical complexity of such a task scales linearly with the size of the dataset , , to be searched .intuitively , it is easy to see this must be the case as every item must be checked in turn until the specific item is found . on average , half the items will have to be checked before the correct one is located .this leads to the best classical scaling which can be achieved , .one of the most important quantum algorithms discovered thus far is the searching algorithm of grover .grover showed that an item could be found from a set of in a time quadratically faster than the classical case , .grover s algorithm has been shown to be both optimal and also one of the few quantum algorithms which is provably faster than any possible classical algorithm .several years after the introduction of this algorithm , shenvi , kempe and whaley gave a quantum search algorithm based instead on the discrete time quantum walk , which was first introduced with algorithmic applications in mind by aharonov et al . and ambainis et al .this quantum walk approach to the search problem is able to match the quadratic speed up of grover s algorithm .the quantum walk search algorithm has been studied in detail and many improvements have been made since its introduction .in fact , due to the many uses of searching in algorithms , the quantum walk search algorithm has become a standard tool in developing new quantum algorithms .the quantum walk has also recently been shown to be universal for quantum computation and hence a computational primitive , again showing it is a powerful tool . in ,the items of the dataset are laid out as the vertices of an undirected graph , specifically a hypercube of dimension , on which the quantum walk can be solved analytically . other recent work by potoek et al . has improved the original algorithm by adding an additional coin dimension , allowing the probability of the marked state to approach unity after just one run of the algorithm .this brings the running time of the quantum walk search algorithm very close to the optimal for searching an unsorted dataset , .zalka has previously shown that , for a probability of finding the marked state to be one , this is the best that can be achieved .however , the hypercube studied in is a highly connected but non - physical structure . in order to make the algorithm more physical ,the study of the search algorithm on lower dimensional structures was started by benioff .he considered the additional cost of the time it would take a robot searcher to move between different spatially separated data points on -dimensional lattices , stating that in two spatial dimensions , , no speedup was apparent .subsequently , aaronson and ambainis ( aa03 ) introduced an algorithm based on a divide and conquer approach , contradicting this claim with a run time of in dimensions and when ..summary of runtimes of quantum search algorithms in various dimensions . [ cols="<,^,^,^,^",options="header " , ] [ percprobs ] it is fairly obvious that at this critical percolation threshold , the properties of the lattice change significantly . 
for lattices with a percolation probability below the percolation threshold , it is clear that many of the sites in the lattice will be unreachable , whereas above the threshold the opposite is true ( though perhaps through a less direct route than in a fully connected lattice ) . due to their transport properties , percolation latticesare widely used to model various phenomena including forest fires , disease spread and the size and movement of oil deposits . for a good introduction to both the theory and use of percolation lattices ,see stauffer and aharony . we are using the percolation lattices as a description for the database arrangement that we wish to run the quantum walk search algorithm upon . asthe disorder introduced by using percolation lattices is random , we ran the search algorithm on many different percolation lattices ( 5000 ) , and averaged over the results .it is obvious that at low probabilities of vertices ( or edges ) existing , that there may be sections of the graph that the quantum walk is unable to reach .in fact , at very low probabilities , it is likely that the marked state will be in a small , unconnected region of the lattice where it will never be ` found ' . in these cases , this means the marked state will only ever be able to attain a small portion of the total probability .we set the condition on the algorithm that the probability of the marked state must reach at least twice the value of the initial superposition in order for it to succeed .similarly , the time to find this maximum probability is artificially smaller than it should be if the entire lattice was connected .this is due to the walker only having to coalesce on the marked state over a small piece of the lattice . in order to combat this, we set the time to find the marked state as zero if the algorithm failed .if it succeeded , we took the reciprocal of the time to find the marked state .after averaging over many different percolation lattices , we again took the reciprocal of this averaged time in order to give a clearer view on how the algorithm scaled with time .we also set the probability of the marked state to be zero if the algorithm failed . in order to run the quantum walk search algorithm on percolation lattices, we have to deal with the fact that the lattice is not -regular . in thissetting , we can not just add self loops to make the lattice regular as in as we want to know exactly how the disorder affects the algorithm .instead , we take the grover coin for the degree of the vertex in question and ` pad ' it out with the identity operator for the edges that are missing .for example if we have a vertex with just edge 3 missing , the operator would be where represents the grover coin with edges 1 , 2 and 4 present . in the case of a two dimensional percolation lattice , there are 16 combinations of edges that can be present / missing . for a three dimensional percolation lattice ,this increases to 64 combinations . in order to deal with this, we maintain the labelling of the edges as previously and assign a binary number to each edge , depending on whether an edge is present or not .the example above , eq . 
( [ percgrover ] ) , would therefore be .this creates the combinations we require .there is then a fixed mapping between each binary number and the correct coin for each vertex .in addition to the coin operator changing , we must also modify the initial state to account for the missing vertices or edges .this could be done in several ways .we try to stick as closely to the initial state of the basic quantum walk search algorithm by just splitting the state into an equal superposition over all the possible edges present .we now show our initial results for the quantum walk search algorithm on two dimensional site percolation lattices .we firstly show , fig .[ twodprobperc ] , how the maximum probability of the marked state varies with both the size of the dataset and the percolation probability .we see , as we would think intuitively , that as the percolation probability drops and the structure becomes less connected , the maximum probability of the marked state decreases . , varies with the size of the dataset and the percolation probability for site percolation in two dimensions.,scaledwidth=75.0% ] we note that the scaling of the maximum probability initially maintains the logarithmic scaling of the basic 2d lattice before eventually reverting to the scaling of the line , , at lower percolation probabilities . in the case of site percolation, this change in scaling seems to occur at roughly probabilities below , not significantly higher than the critical percolation threshold .this is expected as at the critical threshold , the structure has in general a single path from one side to the other , effectively a 1d lattice .our numerical results match this behaviour , with the scaling of the probability of the marked state matching that of the line at this point . atpercolation probabilities higher than the critical threshold , we see a change in the prefactor to the scaling of the maximum probability of the marked state .we show this prefactor to the logarithmic scaling in fig .[ twodpercprobscaling ] .it is easy to see that as soon as the percolation probability passes the critical threshold , , the scaling increases in a linear fashion .we also note here , after investigation on a finer scale , that there is a gradual change in this prefactor scaling around the critical percolation threshold .the time to find the marked state follows a similar behaviour , gradually changing from the quadratic scaling of the 2d lattice to a classical linear scaling as reduces .we show the time to find the marked state for site percolation in fig .[ twodtimesite ] . , to the scaling of the time to find the maximum probability of the marked state , from the data in fig .[ twodtimesite ] , varies with the size of the dataset and the percolation probability for site percolation in two dimensions .also shown is to indicate the lower bound of the algorithm ( dashed line).,scaledwidth=75.0% ] we see that when , the scaling of the time to find the marked state is very similar to the classical run time , .the kinks in this scaling ( and the other percolation probabilities ) are just from averaging over many percolation lattices . given more time , a higher number could be run and thus a smoother scaling obtained .it can be seen that the time to find the marked state seems to retain the quadratic quantum speed up , even in the presence of a non - trivial level of disorder . 
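The two ingredients described above, the random site-percolation substrate and the degree-dependent "padded" coin, can be sketched in a few lines. This is a minimal illustration, assuming a 2D Cartesian lattice with the four edge directions ordered (left, right, up, down); checking whether the marked site lies in a connected cluster (the failure criterion mentioned earlier) can then be done with standard labelling routines such as scipy.ndimage.label.

```python
import numpy as np

def site_percolation_2d(n, p, rng):
    """n x n site-percolation lattice: True marks an occupied vertex."""
    return rng.random((n, n)) < p

def present_edges(lattice, i, j):
    """Boolean mask over the four edge directions (left, right, up, down) of
    vertex (i, j): an edge is present when the neighbouring site is occupied."""
    n = lattice.shape[0]
    nbrs = [(i, j - 1), (i, j + 1), (i - 1, j), (i + 1, j)]
    return np.array([0 <= a < n and 0 <= b < n and lattice[a, b] for a, b in nbrs])

def padded_grover_coin(present):
    """Coin for a single vertex: the Grover diffusion operator on the subspace
    of present edges, padded with the identity on the missing ones."""
    present = np.asarray(present, dtype=bool)
    coin = np.eye(present.size)
    k = int(present.sum())
    if k > 0:
        idx = np.flatnonzero(present)
        coin[np.ix_(idx, idx)] = 2.0 / k * np.ones((k, k)) - np.eye(k)
    return coin

rng = np.random.default_rng(1)
lat = site_percolation_2d(16, 0.7, rng)
mask = present_edges(lat, 8, 8)
c = padded_grover_coin(mask)
print("edge mask:", mask.astype(int), " coin is unitary:", np.allclose(c @ c.T, np.eye(4)))
```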
as in the work of leung , it seems as though the scaling of the time to find the marked state may follow a fractional scaling from quadratic back to linear as , where is the time to find the marked state and is the size of the dataset .we follow the analysis in to establish how the scaling of the time to find the marked state varies with the percolation probability .we show , in fig .[ twodperctimescaling ] , how the value of the coefficient varies as the level of disorder is increased .we can see the quadratic speedup is maintained , , for percolation probabilities of roughly .below this probability , the quantum speed up disappears gradually to end at the classical run time when .this is for the same reason as in the scaling of the maximum probability of the marked state , at the critical threshold the structure is effectively a line . below the critical threshold ,the algorithm fails ( the marked state is probably in a disconnected region ) .we note here that the coefficient , , is not exactly 0.5 as we expect for the quadratic speed up .this is most probably due to the fact that percolation lattices are random in nature , and we only average over a specific number . if we averaged over more , then we would see a more constant scaling of the coefficient at , i.e. a full quadratic speed up .we now turn our attention to three dimensional site percolation lattices .we follow the same analysis as in the two dimensional case .we firstly show , fig .[ threedprobperc ] , how the maximum probability of the marked state varies as the percolation probability is decreased .we see , as in the two dimensional case , that the basic scaling of the maximum probability matches that of the three dimensional lattice until the percolation probability drops to roughly the critical percolation threshold , .we show in fig .[ threedpercprobscaling ] , how the prefactor to this scaling of the maximum probability varies with the percolation probability . in the same way as the two dimensional case , we see an almost linear scaling of the prefactor once the percolation probability has passed the critical threshold. the scaling here does nt seem to be as close as in the two dimensional case .this is probably because in the case of three dimensional percolation lattices , there are many more combinations of lattice which can be created . averaging over more of these lattices would most probably give a smoother fit ., varies with the size of the dataset and the percolation probability for site percolation in three dimensions.,scaledwidth=75.0% ] the time to find the marked state , in the three dimensional case , follows the same behaviour as in the two dimensional percolation lattices .we show in fig .[ threedtimesite ] , how the time to find the marked state varies with the percolation probability .we see , fig . [ threedperctimescaling ] , as in the two dimensional case , that the scaling coefficient , , gradually changes from the quadratic speed up to the classical run time .again , we note that the quadratic speed up is maintained for a non - trivial amount of disorder before gradually changing to the classical run time at the point . 
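The coefficient discussed here can be extracted from the averaged run times with an ordinary least-squares fit on a log-log scale. A minimal sketch with synthetic data follows; in practice the (N, T) pairs are the run times averaged over the ensemble of percolation lattices.

```python
import numpy as np

def fit_scaling_exponent(N, T):
    """Fit T = c * N**m by linear regression of log T on log N; returns (m, c)."""
    m, log_c = np.polyfit(np.log(np.asarray(N, float)), np.log(np.asarray(T, float)), 1)
    return m, np.exp(log_c)

# Synthetic illustration only: times growing roughly like sqrt(N) with 5% noise.
rng = np.random.default_rng(0)
N = np.array([64, 128, 256, 512, 1024, 2048])
T = 2.0 * np.sqrt(N) * (1.0 + 0.05 * rng.normal(size=N.size))
m, c = fit_scaling_exponent(N, T)
print(f"fitted exponent m = {m:.3f}  (0.5 = full quadratic speed-up, 1.0 = classical)")
```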
, to the scaling of the time to find the maximum probability of the marked state , from the data in fig .[ threedtimesite ] , varies with the size of the dataset and the percolation probability for site percolation in three dimensions .also shown is to indicate the lower bound of the algorithm ( dashed line).,scaledwidth=75.0% ] we do note , as in the two dimensional case , that the coefficient is not exactly 0.5 .this can be explained in the same way as the two dimensional percolation lattices , and averaging over more lattices should give a constant value of the coefficient .in this paper , we have discussed various factors which affect the efficiency of the quantum walk search algorithm .we introduce a simple form of tunnelling which allows us to modify the substrate we use as the database arrangement , and use this to interpolate between structures with varying dimensionality and degree .we find that although the dependence on the spatial dimension of the underlying substrate is strong , it is not the only factor which affects the efficiency of the algorithm .we also find secondary dependencies on the connectivity and symmetry of the structure .in addition , we use percolation lattices to model disorder in the lattice in a simple way . in this casewe find , counter - intuitively , that the algorithm is able to maintain the quantum speed up even in the presence of non trivial levels of disorder .we now discuss our findings for each factor in turn .we have shown two different ways in which we can interpolate between structures of differing spatial dimension .firstly , we use a tunnelling operator to vary specific edges of a lattice enabling us to gradually change the spatial dimension of the lattice . in this case, we find a sudden change in the scaling of the maximum probability of the marked state as soon as there is even a very small probability of the edges existing .this seems to indicate that the ` strength ' of the edges in the lattice is of little importance , with the dependence on the specific spatial dimension taking precedence .however , we find that the prefactor to the scaling of this probability varies with the strength of the tunnelling edges , increasing as the tunnelling strength increases .the basic scaling of the time to find the marked state is not affected by the change in dimensionality , we note though that the prefactor to the scaling decreases as the tunnelling strength increases , hence the algorithm becomes more efficient .the other case we consider is the case of lattices with varying height or depth , for example , a 3d lattice with fixed width and height but of varying depth .although this structure is still strictly three dimensional , when the depth is very low and the width ( height ) is large , the quantum walker will see the structure as almost a basic 2d cartesian lattice .suprisingly , in this case we see a gradual change in scaling in the maximum probability of the marked state . at low depths of the lattice ,the scaling is almost the same as the lower spatial dimensional structure gradually changing to the higher dimensional structure scaling as the depth increases to become equal to that of the other dimensions .this highlights the importance of full symmetry in the quantum walk search algorithm .we show how the search algorithm is affected by varying connectivity in regular lattices .we use our simple model of tunnelling to allow us to interpolate between structures such as the square lattice ( ) and the triangular lattice ( ) . 
with this model , we are able to identify how the prefactors to the scaling of both the maximum probability of the marked state and the time to find the marked state vary with the connectivity of the structure .the basic scaling of the time to find the marked state , , is not affected by the increase in connectivity but we find the prefactor to this scaling reduces as the connectivity of the structure being searched increases .this is due to the additional paths the walker can take to coalesce on the marked state , thus increasing the efficiency of the algorithm in both two and three dimensions .the maximum probability of the marked state is also affected by the connectivity of the underlying structure .we find that the additional connectivity does not affect the basic scaling of in the two dimensional case . only moving to three spatial dimensionsallows the walker to find the marked state with a constant probability , .however , we do note that in both two and three dimensions the prefactors to this scaling , in general , increase as the connectivity of the structure increases .again , this increases the efficiency of the algorithm as it may not have to be repeated so many times .we also find that the probability of the marked state does not increase uniformly with the additional connectivity .we see the prefactor in the scaling drop and then recover itself before increasing as the tunnelling strength increases .this is due to the dynamics of the quantum walk on a structure with some broken symmetry , i.e. low tunnelling strength between vertices .we briefly investigated the dynamics of the walk by starting the walker in a single location and monitoring how quickly it spread outwards with varying tunnelling strengths .this confirmed our results for the search algorithm as we found that the spread of the quantum walk also dropped for lower tunnelling strengths before recovering and eventually increasing at higher tunnelling probabilities .however , this work on the spreading of the walk compared to tunnelling strength is by no means exhaustive and it would be interesting to look more deeply into this in the future .we studied both two and three dimensional percolation lattices as a way to model disorder in the quantum walk search algorithm .we are interested in how the algorithm performs with increasing disorder .we use percolation lattices as a random substrate for the database arrangement we wish to search .we find , in both the two and three dimensional cases , that as the level of disorder increases , the maximum probability of the marked state decreases . whilst the percolation probability is higher than the critical percolation threshold , the basic scaling of the maximum probability of the marked state matches that of the basic lattice ( in that spatial dimension ) .once the percolation probability drop to the critical threshold , this scaling changes to that of the line , . 
thisis expected as at this point the structure is effectively a line .we also note the prefactor to the scaling of the maximum probability of the marked state increases linearly once the percolation probability is greater than the critical threshold .the time to find the marked state follows a similar behaviour .we find that as the disorder increases , the time to find the marked state also increases .surprisingly though , we note that the quadratic speed up is maintained for a non - trivial level of disorder , before gradually reverting to the classical run time , , as the disorder reaches the critical percolation threshold .this seems to match the results of , which show a fractional scaling for the spreading of the quantum walk from a maximal quantum spreading to a classical spreading at and below the critical threshold .however , this is in contrast to the work of krovi and brun who highlight the effect of localisation on the quantum walk when defects are introduced into the substrate .both these factors indicate that the quantum walk search algorithm seems to be more robust to the effects of disorder and symmetry than the basic spreading of the quantum walk .this could be due to the fact that the initial state of the walker is spread across the whole lattice .we have seen that the algorithm becomes less efficient as the disorder increases , but at percolation probabilities greater than the critical threshold , the algorithm still seems to be viable , although more amplification of the result may be required .+ + _ acknowledgments _ the authors would like to thank jiannis pachos and noah linden for useful and interesting discussions .nl was funded by the uk engineering and physical sciences research council .vk is funded by a uk royal society university research fellowship .f. magniez , a. nayak , p. c. richter , and m. santha . .in _ proceedings of the 20th annual acm - siam symposium on discrete algorithms ( soda ) _ , pages 8695 .society for industrial and applied mathematics , 2009 .
We numerically study the quantum walk search algorithm of Shenvi, Kempe and Whaley [PRA *67* 052307] and the factors which affect its efficiency in finding an individual state from an unsorted set. Previous work has focused purely on the effects of the dimensionality of the dataset to be searched. Here, we consider the effects of interpolating between dimensions, the connectivity of the dataset, and the possibility of disorder in the underlying substrate: all these factors affect the efficiency of the search algorithm. We show that, as well as the strong dependence on the spatial dimension of the structure to be searched, there are also secondary dependencies on the connectivity and symmetry of the lattice, with greater connectivity providing a more efficient algorithm. In addition, we show that the algorithm can tolerate a non-trivial level of disorder in the underlying substrate.
_ extreme events _ ( also called critical transitions , disasters , catastrophes and crises ) are a most important yet least understood feature of many natural and human - made processes . among examples are destructive earthquakes , el - nios , economic depressions , stock - market crashes , and major terrorist acts .extreme events are relatively rare , and at the same time they inflict a lion s share of the damage to population , economy , and environment . accordingly , studying the extreme events is pivotal both for fundamental predictive understanding of complex systems and for disaster preparedness ( see and references therein ) . in this paperwe work within a framework that emphasizes mechanisms underlying formation of extreme events .prominent among such mechanisms is _ direct cascading _ or _fragmentation_. among other applications , this mechanism is at the heart of the study of 3d turbulence . a statistical model of direct cascade is conveniently given by the branching processes ; they describe populations in which each individual can produce descendants ( offsprings ) according to some probability distribution .a branching process may incorporate _ spatial _ dynamics , several types of particles ( multi - type processes ) , age - dependence ( random lifetimes of particles ) , and immigration due to external driving forces .in many real - world systems , observations are only possible within a specific domain of the phase space of a system .accordingly , we consider here a system with an _ unobservable _ source of external driving ultimately responsible for extreme events .we assume that observations can only be made on a _subspace _ of the phase space .the direct cascade ( branching ) within a system starts with injection of the largest particles into the source .these particles are divided into smaller and smaller ones , while spreading away from the source and eventually reaching the subspace of observations .an important observer s goal is to locate the external driving source .the distance between the observation subspace and the source thus becomes a natural control parameter .an extreme event in this system can be defined as emergence of a large particle in the observation subspace .clearly , as the source approaches the subspace of observation , the total number of observed particles increases , the bigger particles become relatively more frequent , and the probability of an extreme event increases . in this paper, we give a complete quantitative description of this phenomenon for an age - dependent multi - type branching diffusion process with immigration in .it turns out that our model closely reproduces the major premonitory patterns of extreme events observed in hierarchical complex systems .extreme events in such systems are preceded by transformation of size distribution in the permanent background activity ( see _ e.g. , _ ) . in particular , general activity increases , in favor of relatively strong although sub - extreme events . 
that was established first by analysis of multiplefracturing and seismicity , and later generalized to socio - economic processes .our results suggest a simple universal mechanism of such premonitory patterns .the system consists of particles indexed by their _ generation _ .particles of zero generation ( _ immigrants _ ) are injected into the system by an external forcing .particles of any generation are produced as a result of splitting of particles of generation .immigrants ( ) are born at the origin according to a homogeneous poisson process with intensity .each particle lives for some random time and then transforms ( splits ) into a random number of particles of the next generation . the probability laws of the lifetime and branching are rank- , time- , and space - independent .new particles are born at the location of their parent at the moment of splitting .the lifetime distribution is exponential : .the conditional probability that a particle transforms into new particles ( 0 means that it disappears ) given that the transformation took place is denoted by .the probability generating function for the number of new particles is thus [ branching_pgf ] h(s ) = _the expected number of offsprings ( also called the _ branching number _ ) is ( see _ e.g. _ , ) .each particle diffuses in independently of other particles .this means that the density of a particle that was born at instant at point solves the equation = d(_i ) p d _ p [ diffusion ] with the initial condition .the solution of ( [ diffusion ] ) is given by p(*x , y*,t)= ( 4dt)^-n/2 \{- } , [ sol ] where .it is convenient to introduce particle _ rank _ for an arbitrary integer and thus consider particles of ranks .this reflects our focus on direct cascading , which often assumes that particles with larger size ( _ e.g. _ , length , volume , mass , energy , momentum , _ etc ._ ) split into smaller ones according to an appropriate conservation law .figure [ fig_example ] illustrates the model dynamics . 
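To make the model above concrete, the following is a minimal Monte Carlo sketch of the branching diffusion with immigration: immigrants appear at the origin as a Poisson process, each particle lives an exponentially distributed time, diffuses, and at death splits into a random number of next-generation particles born at the death point. The Poisson offspring law, all parameter values, and the function names are illustrative assumptions; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching_diffusion(T=50.0, mu=1.0, lam=1.0, b=0.8, D=0.5, dim=2):
    """Monte Carlo sketch of the branching diffusion with immigration.

    Immigrants (generation 0) appear at the origin as a Poisson process of
    intensity mu on [0, T]; every particle lives an Exp(lam) time, diffuses
    with coefficient D, and at death splits into Poisson(b) offspring born at
    the death point (the Poisson offspring law is an illustrative assumption).
    Returns positions and generations of the particles alive at time T.
    """
    positions, generations = [], []
    n_immigrants = rng.poisson(mu * T)
    stack = [(rng.uniform(0.0, T), np.zeros(dim), 0) for _ in range(n_immigrants)]
    while stack:
        t_birth, x_birth, k = stack.pop()
        lifetime = rng.exponential(1.0 / lam)
        if t_birth + lifetime >= T:
            # still alive at the observation time: diffuse up to T and record it
            dt = T - t_birth
            positions.append(x_birth + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=dim))
            generations.append(k)
            continue
        # dies before T: diffuse to the death point and split into the next generation
        x_death = x_birth + rng.normal(0.0, np.sqrt(2.0 * D * lifetime), size=dim)
        for _ in range(rng.poisson(b)):
            stack.append((t_birth + lifetime, x_death, k + 1))
    return np.array(positions), np.array(generations)

if __name__ == "__main__":
    pos, gen = simulate_branching_diffusion()
    print("particles alive at T:", len(gen))
    for k in range(int(gen.max()) + 1 if len(gen) else 0):
        print(f"generation {k}: {int(np.sum(gen == k))} particles")
```

A single run returns the positions and generations of the particles alive at the observation time; these arrays are reused in the rank-distribution check given further below.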
to the origin. model parameters are , , . circle size is proportional to the particle rank. different shades correspond to populations from different immigrants; the descendants of earlier immigrants have a lighter shade. the clustering of particles is explained by the splitting histories. note that, as the origin approaches, the particle activity changes significantly, indicating the increased probability of an extreme event., title="fig:"] the model described above is a superposition of independent branching processes generated by individual immigrants. we consider first the case of a single immigrant; then we expand these results to the case of multiple immigrants. finally, we analyze the rank distribution of particles. proofs of all statements will be published in a forthcoming paper. let be the conditional probability that at time there exist particles of generation within spatial region given that at time 0 a single immigrant was injected at point . the corresponding generating function is f_k(g,*y*,t;s) = \sum_i p_{k,i}(g,*y*,t) s^i . the generating functions solve the following recursive system of non-linear partial differential equations: \partial f_k/\partial t = -d \Delta_{*y*} f_k - f_k + h(f_{k-1}) , k \ge 1 , [pgf] with the initial conditions , , and f_0(g,*y*,t;s) = (1-p) + p s , [pgf_ini] where [prop_f] next, consider the expected number of generation- particles at instant within the region produced by a single immigrant injected at point at time . it is given by the following partial derivative (see _e.g._, ) that satisfies, for any , \bar{a}_k(g,*y*,t) = \int_g a_k(*x*,*y*,t) d*x* .
[ gs ] the expectation densities solve the following recursive system of linear partial differential equations : = d_*x * a_k - a_k+ba_k-1,k1 , [ exp_ave ] with the initial conditions the solution to this system is given by [ col ] the system has a transparent intuitive meaning .the rate of change of the expectation density is affected by the three processes : diffusion of the existing particles of generation in ( first term in the rhs of ) , splitting of the existing particles of generation at the rate ( second term ) , and splitting of generation particles that produce on average new particles of generation ( third term ) . herewe expand the results of the previous section to the case of multiple immigrants that appear at the origin according to a homogeneous poisson process with intensity .the expectation of the number of particles of generation is given , according to the properties of expectations , by the steady - state spatial distribution corresponds to the limit and is given by _k(z)=()^k ( ) ^-n/2 z^k_(z ) .[ az ] here , and is the modified bessel function of the second kind .recall that the particle rank is defined as .the spatially averaged steady - state rank distribution is a pure exponential law with index : to analyze deviations from the pure exponent , we consider the ratio between the number of particles of two consecutive generations : _ k():=. [ gamma ] for the purely exponential rank distribution , , the value of is independent of and ; while deviations from the pure exponent will cause to vary as a function of and/or . combining and we find _k()= , [ gamma1 ] where , as before , and .the asymptotic behavior of the function is given by [ gammalim ] proposition [ gammalim ] allows one to describe all deviations of the particle rank distribution from the pure exponential law .figure [ fig_ak ] illustrates our findings .. implies that at any spatial point , the distribution asymptotically approaches the exponential form as rank decreases ( generation increases ) .thus the deviations can only be observed at the largest ranks ( small generation numbers ) .analysis of the large - rank distribution is done using eqs . , .near the origin , where the immigrants enter the system , eq .implies that for .hence , one observes the _ upward deviations _ from the pure exponent : for the same number of rank particles , the number of rank particles is larger than predicted by .the same behavior is in fact observed for ( the details will be published elsewhere ) .in addition , for the ratios do not merely deviate from , but diverge to infinity at the origin .away from the origin , according to eq . , we have , which implies _ downward deviations _ from the pure exponent : for the same number of rank particles , the number of rank particles is smaller than predicted by . of generation particles at distance from the origin ( cf .proposition [ gammalim ] ) .the distance is increasing ( from top to bottom line in each panel ) as .model dimension is ( panel a ) , ( panel b ) , ( panel c ) , and ( panel d ) .other model parameters : , , , .one can clearly see the transition from downward to upward deviation of the rank distributions from the pure exponential form as we approach the origin . ]motivation for this work is the problem of prediction of extreme events in complex systems .our point of departure is a classical model of spatially distributed population of particles of different ranks governed by direct cascade of branching and external driving . 
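As a numerical cross-check of the generation-ratio statistic discussed above, one can pool many simulated populations and form the ratio of generation-k to generation-(k+1) counts in radial bins around the source; near the origin the ratio should deviate upward from its spatially averaged value, and far from it downward. The sketch below reuses simulate_branching_diffusion() from the earlier snippet (assumed to be in scope); the bin edges and run counts are arbitrary illustrative choices.

```python
import numpy as np

def binned_counts(positions, generations, r_edges, k):
    """Count generation-k particles in radial bins around the source."""
    if len(generations) == 0:
        return np.zeros(len(r_edges) - 1)
    r = np.linalg.norm(positions, axis=1)
    counts, _ = np.histogram(r[generations == k], bins=r_edges)
    return counts

if __name__ == "__main__":
    # assumes simulate_branching_diffusion() from the sketch above is defined in the same file
    edges = np.linspace(0.0, 6.0, 7)
    k = 1
    n_k = np.zeros(len(edges) - 1)
    n_k1 = np.zeros(len(edges) - 1)
    for _ in range(500):
        pos, gen = simulate_branching_diffusion(T=50.0)
        n_k += binned_counts(pos, gen, edges, k)
        n_k1 += binned_counts(pos, gen, edges, k + 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        print("gamma_k(r) per radial bin:", n_k / n_k1)
```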
in the probability theory this modelis known as the age - dependent multi - type branching diffusion process with immigration .we introduce here a new approach to the study of this process .we assume that observations are only possible on a subspace of the system phase space while the source of external driving remains unobservable .the natural question under this approach is the dependence of size - distributions of particles on the distance to the source .the complete analytical solution to this problem is given by the proposition [ prop_f ] .it is natural to consider rank as a logarithmic measure of the particle size .if we assume a size - conservation law in the model , the exponential rank distriburtion derived in corresponds to a self - similar , power - law distribution of particle sizes , characteristic for many complex systems .thus , the proposition [ gammalim ] describes space - dependent deviations from the self - similarity ( see also fig .[ fig_ak ] ) ; in particular , deviations premonitory to an extreme event .the numerical experiments ( that will be published elsewhere ) confirm the validity of our analytical results and asymptotics in a finite model .the model studied here exhibits very rich and intriguing premonitory behavior .figure [ fig_example ] shows several 2d snapshots of a 3d model at different distances from the source .one can see that , as the source approaches , the following changes in the background activity emerge : a ) the intensity ( total number of particles ) increases ; b ) particles of larger size become relatively more numerous ; c ) particle clustering becomes more prominent ; d ) the correlation radius increases ; e ) coherent structures emerge .in other words , the model exhibits a broad set of premonitory phenomena previously observed heuristically in real and modeled systems : multiple fracturing , seismicity , socio - economics , percolation , hydrodynamics , hierarchical models of extreme event development .these phenomena are at the heart of earthquake prediction algorithms well validated during 20 years of forward world - wide tests ( see _ e.g. , _ ) . in this paperwe analyse only the first - moment properties of the system ; such properties can explain the premonitory intensity increase ( item a above ) and transformation of the particle rank distribution ( item b ) . at the same time , the framework developed here allows one to quantitatively analyze other premonitory phenomena ; this can be readily done by considering the higher - moment properties .v. i. keilis - borok and a. a. soloviev , a. a. ( eds ) , _ nonlinear dynamics of the lithosphere and earthquake prediction ._ ( springer , heidelberg , 2003 ) .d. sornette , _ critical phenomena in natural sciences ._ 2-nd ed .( springer - verlag , heidelberg , 2004 ) .s. albeverio , v. jentsch , and h. kantz ( eds ) , _ extreme events in nature and society _ ( springer , heidelberg , 2005 ) .p. embrechts , c. kluppelberg , and t. mikosch , _ modelling extremal events for insurance and finance _( springer , 2004 ) .u. frisch , _ turbulence : the legacy of a. n. kolmogorov _( cambridge university press , 1996 ) . v.i. keilis - borok , proc .usa , * 93 * , 3748 - 3755 ( 1996 ) . v.i. keilis - borok , ann .earth planet ., * 30 * , 1 - 33 ( 2002 ) .j. rundle , d. turcotte , and w. klein ( eds ) , _ geocomplexity and the physics of earthquakes ._ ( agu , washington dc , 2000 ) .d. l. turcotte , ann .earth planet ., * 19 * , 263 - 281 ( 1991 ) .s. c. jaume and l. r. sykes , pure appl .geophys . 
,* 155 * ( 2 - 4 ) : 279 - 305 ( 1999 ) .
we propose a framework for studying the predictability of extreme events in complex systems. the major conceptual elements, _direct cascading_ or _fragmentation_, _spatial dynamics_, and _external driving_, are combined in a classical age-dependent multi-type branching diffusion process with immigration. a complete analytic description of the size- and space-dependent distributions of particles is derived. we then formulate an extreme event prediction problem and determine characteristic patterns of the system behavior as an extreme event approaches. in particular, our results imply specific premonitory deviations from self-similarity, which have been observed heuristically in real-world and modeled complex systems. our results suggest a simple universal mechanism for such premonitory patterns and a natural framework for their analytic study.
it has been suggested that the use of collective systems like a magnet can reduce the intrinsic switching energy ( that is dissipated throughout switching ) significantly compared to that required for individual spins .there is also a lot of experimental effort at this time to implement switching circuits based on magnets .there has been some work on modeling magnetic circuits like mqca s in the atomic scale using quantum density matrix equation but most of the work is in the classical regime using the well known micromagnetic simulators ( oommf ) based on the landau - lifshitz - gilbert ( llg ) equation .this paper too is based on the llg equation , but our focus is not on obtaining the energy requirement of any specific device in a particular simulation .rather it is to obtain generic results that can guide the design of magnet based switching circuits as well as providing a basis for comparison with alternative technologies .+ the results we present are obtained by analyzing the cascadable switching scheme illustrated in fig.1 where the magnet to be switched ( magnet 2 ) is first placed along its hard axis by a magnetic pulse ( see ` mid state ' in fig.1 ) .on removing the pulse , it falls back into one of its low energy states ( up or down ) determined by the ` bias ' provided by magnet 1 .what makes this scheme specifically suited for logic operations is that it puts magnet 2 into a state determined by magnet 1 ( thereby transferring information ) , but the energy needed to switch magnet 2 comes largely from the external pulse _ and not from magnet 1_. this is similar to conventional electronic circuits where the energy needed to charge a capacitor comes from the power supply , although the information comes from the previous capacitors .this feature seems to be an essential ingredient needed to _ cascade logic units_. to our knowledge , the switching scheme shown in fig.1 was first discussed by bennett and is very similar to the schemes described in many recent publications ( see e.g likharev et.al , kummamuru et.al and csaba et.al ) .+ ) where a small bias field due to magnet 1 can tilt it upwards or downwards thereby dictating its final state on removing the pulse.,title="fig:",width=226,height=226 ] + this paper uses the llg equation to establish two central results .one is that the switching energy drops significantly as the ramp time of the magnetic pulse exceeds a critical time given by equation .this is similar to the drop in the switching energy of an rc circuit when .but the analogy is only approximate since the switching energy for magnets drops far more abruptly with increasing .the significance of is that it tells us how slow a pulse needs to be in order to qualify as `` adiabatic '' and thereby reduce dissipation significantly .considering typical magnets used in the magnetic storage industry , and using ramp times of a few , intrinsic switching frequency of 100 mhz to 1 ghz can easily be in the adiabatic regime of switching where dissipation is very small .+ interestingly , we find that the switching energy for the trapezoidal pulses investigated in this paper in both the ` fast ' and ` slow ' limits can be described by a single equation which is the other central result of this paper . later in this paper ( [ s6 ] ) we will discuss how equations and can be used to guide scaling and increase switching speeds . 
furthermore these equations can be used to compare magnet based switching circuits with alternative technologies .+ it has to be emphasized that dissipation of the external circuitry also has to be evaluated for any new technology .a careful evaluation would require a consideration of actual circuitry to be used ( see e.g. , ) and is beyond the scope of this paper .however following nikonov et.al . , if a wire coil is used to produce the pulse , we can estimate the energy dissipated in creating the field as in cgs system of units . is the quality factor of the circuit and is the volume over which the field extends .depending on q , v and the dissipated energy can be much larger , comparable to or much smaller than which sets the energy scale for the effects considered here in this paper .+ _ overview of the paper _ : as mentioned before our results are based on direct numerical simulation of the llg equation .however we find that in two limiting cases , it is possible to calculate switching energy simply using the energetics of magnetization and these limiting results are described in sections [ s3 ] ( dissipation with fast pulse ) and [ s5 ] ( dissipation with adiabatic pulse ) which are related to equation . in [ s6 ]we use the llg equation to show that the switching energy drops sharply for ramp times larger than the critical time given by equation . in section [ s7 ] using coupled llg equations we analyze a chain of inverters to show that the total dissipation increases linearly with the number of nanomagnets thus making it reasonable to use the one - magnet results in our paper to evaluate complex circuits , at least approximately . finally in section [ s8 ] practical issues such as dissipation versus speed , increasing the switching speed and scaling are qualitatively discussed in the light of these results . +before we get into the discussion of switching energy , let us briefly review the energetics of a magnet .the energy of a magnet with an effective second order uniaxial anisotropy can be described by where measures the deflection from the easy axis which we take as the axis .all isotropic terms have been omitted because they do not affect dynamics and hence dissipation of the magnet .there are two magnetic fields that control the switching ( see fig.[front ] ) : the external pulse and the bias field due to the neighboring magnet . including the internal energy and the interaction energy of magnetic moment with external fields, the energy equation reads is the saturation magnetization .if the unit volume is magnetized to saturation , is equivalent to the magnetic moment per unit volume . is a unit vector in the direction of magnetization .v is the volume of the magnet and is the second order anisotropy constant with dimensions of energy per unit volume .the applied field is along the hard axis , the bias field is along the easy axis so the energy equation becomes where is defined as in a standard spherical coordinate system .using equation [ energy ] we will show that dissipation with a fast pulse ( small ramp time ) can be written as [ disspulsenohdc ] for reasons to be explained , under the condition of equation [ lower ] , logic device will not work .nevertheless it is useful for determining dissipation in the adiabatic limit . in the equations above, is the minimum field necessary to put the magnet along its hard axis .notice that the bias field is a dc field coming from the neighboring magnet . 
in practice ,whether the bias field is a dc field or not , its magnitude has to be bigger than noise such that when the magnet is put along its hard axis as in fig.[front ] , the bias field can deterministically tilt the magnet towards its direction. we will show in [ s34 ] that for , dissipation can still be calculated using equation [ disspulsenohdc ] .+ to derive equations [ disspulsenohdc ] we find the initial and final state energies under various conditions and evaluate the difference .we have to emphasize that all these states essentially pertain to the energy minima ( equilibrium states ) i.e. they are either the minimum of energy ; or they represent a non - equilibrium state instantaneously after the equilibrium state ( minimum of energy ) has changed . since all the fields considered here are in the plane and no out - of - plane field is considered , the equilibrium states ( the energy minima ) will always lie in the plane for which . , dissipation is equal to the barrier height ( magnet relaxes from point 1 ( or 4 ) to point 2 ) . when the field is turned off fast , magnet relaxes from point 3 to point 4 or 1 depending on any infinitesimal bias again dissipating an amount equal to the barrier height.,title="fig:",width=302,height=226 ] + fig.[energy landscape with no dc field ] is plotted using equation [ energy ] with and which is the first case to be discussed .the different contours correspond to different values of .+ _ derivation of equation [ 2kv ] _ : let s start with equation [ 2kv ] which is the most important and also easiest .dissipation occurs both during turn - on and turn - off of the pulse and the overall switching energy is sum of the two in general .the dashed contour in fig.[energy landscape with no dc field ] corresponds to which is the minimum value needed to make ( point 2 ) the energy minimum . for a pulse with fast ( ) _ turn - on _, dissipation can be calculated using equation [ energy ] as the difference between the initial and the final energies which are given by point 1 ( or 4 ) and point 2 on the dashed contour .this value is for a pulse with fast ( ) _ turn - off _ , the energy contour immediately changes from the dashed one to the uppermost one in fig.[energy landscape with no dc field ] . under any infinitesimal bias ,magnetization falls down the barrier to the left ( relaxing to point 1 ) or to the right ( relaxing to point 4 ) giving a dissipation of equal to the turn - on dissipation . the switching energy ( _ total dissipationis sum of the values for turn - on and turn - off which gives us equation [ 2kv ] .+ _ derivation of equation [ higher ] _ : this is the case with . the bottom most energy contour in fig.[energy landscape with no dc field ] shows such a situation as an example .the minimum of energy is still at ( point 5 ) however now the energy well is deeper . for a pulse with fast ( ) _ turn - on _ , dissipation is the difference between the initial and final state energies ( where is used as a generic notation for the bottom of any well with ) . for a pulse with fast ( ) _ turn - off _, the energy contour immediately changes from the bottom most curve to the uppermost curve in fig.[energy landscape with no dc field ] . 
depending on any infinitesimal bias magnetwill relax from point 3 to either point 1 or 4 dissipating the difference the switching energy is sum of the values for turn - on and turn - off which with straightforward algebra gives us equation [ higher ] .+ _ derivation of equation [ lower ] _ : with , magnetization will not align along its hard axis ( ) .this can be seen in fig.[energy landscape with no dc field ] where for a pulse lower than there are two minima of energy not located along the hard axis .the logic device will not work in this regime because it needs to be close to its hard axis so that the field of another magnet can tilt it towards one minima deterministically .nevertheless we derive dissipation for these pulses because we use the results in section [ s51 ] to show switching energy in the adiabatic limit . for a pulse with fast ( )_ turn - on _ , dissipation is the difference between the initial and final state energies for a pulse with fast ( ) _ turn - off _ , the energy contour suddenly becomes the uppermost one in fig.[energy landscape with no dc field ] . at that momentmagnetization is still at the same ( point 7 ) .it follows down the barrier with the dissipation given by the _ total dissipation _ is sum of the values for turn - on and turn - off which gives us equation [ lower ] . in this sectionwe show that for , so long as switching energy can be calculated fairly accurately using equation [ 2kv ] considering only the effect of . for effect of is even less pronounced as compared to and equation [ higher ] can be used to calculate dissipation .again we are interested in initial and final state energies which can be calculated using equation [ energy ] with . in the direction for two values of the pulse : 0 and . upon _ turn - on _ , if magnetization starts from ( _ case 1 _ ) , it drops from point 1 ( ) to point 2 dissipating the difference .if it starts from , it drops from point 4 ( ) to point 2 dissipating the difference . upon _ turn - off_ , both cases 1 and 2 drop from point 3 to point 4 dissipating the difference.,title="fig:",width=302,height=226 ] + can be positive ( along ) or negative ( along ) . 
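The fast-pulse energy bookkeeping above is easy to verify numerically. The sketch below assumes the normalized in-plane energy E/(KV) = sin^2(theta) - 2*h_p*sin(theta) - 2*h_b*cos(theta), with fields measured in units of the switching field H_K = 2K/M_s; this normalization and the downhill-relaxation shortcut are assumptions made for illustration, not the paper's own code.

```python
import numpy as np

def energy(theta, hp, hb):
    """Normalized energy E/(K V) for in-plane angle theta measured from the easy axis.

    hp is the hard-axis pulse and hb the easy-axis bias, both in units of
    Hk = 2K/Ms (assumed normalization of the energy equation in the text)."""
    return np.sin(theta) ** 2 - 2.0 * hp * np.sin(theta) - 2.0 * hb * np.cos(theta)

def relax(theta0, hp, hb, n=20001):
    """Walk downhill on a fine angular grid from theta0 to the nearest local minimum."""
    grid = np.linspace(-np.pi, np.pi, n)
    e = energy(grid, hp, hb)
    i = int(np.argmin(np.abs(grid - theta0)))
    while True:
        left = e[i - 1] if i > 0 else np.inf
        right = e[i + 1] if i < n - 1 else np.inf
        if left < e[i] and left <= right:
            i -= 1
        elif right < e[i]:
            i += 1
        else:
            return grid[i]

def fast_pulse_dissipation(hp, hb=0.0, theta_init=0.0):
    """Sudden turn-on plus sudden turn-off dissipation, in units of K V."""
    # turn-on: drop from the initial orientation to the new minimum under the pulse
    th_on = relax(theta_init, hp, hb)
    d_on = energy(theta_init, hp, hb) - energy(th_on, hp, hb)
    # turn-off: the pulse vanishes; the magnet relaxes from th_on under the bias alone
    th_off = relax(th_on, 0.0, hb)
    d_off = energy(th_on, 0.0, hb) - energy(th_off, 0.0, hb)
    return d_on + d_off

if __name__ == "__main__":
    print(fast_pulse_dissipation(hp=1.0))   # pulse equal to the switching field: ~2 KV, cf. eq. [2kv]
    print(fast_pulse_dissipation(hp=1.5))   # larger pulse: larger dissipation, cf. eq. [higher]
    print(fast_pulse_dissipation(hp=0.7))   # below the switching field (device would not switch), cf. eq. [lower]
```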
fig.[energy landscape dc ] shows the energy landscape with an in the direction .if then the up and down states ( points 1 and 4 ) of the magnet have different initial energies which result in two different cases to be analyzed ._ case 1 _ designates the situation where initial magnetization ( point 1 ) and are in the _ opposite _ direction ._ case 2 _ designates the situation where initial magnetization ( point 4 ) and are in the _ same _ direction .+ for a pulse with fast ( ) _ turn - on _ , _ case 1 _ dissipates the difference between points 1 and 2 and _ case 2 _ dissipates the difference between points 4 and 2 .when the pulse is suddenly turned off , in both cases magnetization finds itself at point 3 , drops down to point 4 and dissipates the difference .it is not possible to give an exact closed form expression for the value of dissipation with non - zero bias .instead based on numerical calculations , we show figures that provide useful insight to conclude that for pulses with fast ramp time the effect of bias on switching energy is negligible .+ the energy of point 2 ( and subsequently point 3 ) depicted in fig.[energy landscape dc ] changes as the relative magnitude of and are changed .we like to know how dissipation changes as a function of the ratio .the numerical results are plotted in fig.[dbi ] using equation [ energy ] .+ fig.[dbi]a shows that for a pulse with fast _ turn - on _ and small values of , both cases dissipate about . as this ratiois increased , the energy separation between points 1 and 2 ( see fig.[energy landscape dc ] ) increases and that of points 4 and 2 decreases which results in higher dissipation of _ case 1 _ and lower dissipation of _ case 2_. fig.[dbi]b shows the dissipation for a pulse with fast _ turn - off _ which is less than the barrier height and is expected because under the presence of , after turn - on , magnetization ends up closer to the final state ( see fig.[energy landscape dc ] ) as compared to the case where ( see fig.[energy landscape with no dc field ] ) .the switching energy is sum of the dissipation values for turn - on and turn - off plotted in fig.[dbi]c .for the bias field alone can switch the magnet and it is completely an unwanted situation .note that for practical purposes , values of are small compared to ( for instance ) and the switching energy is more or less about which gives us equation [ 2kv ] . for the effect of biasis even less pronounced and switching energy can be calculated using equation [ higher ] .we have seen in section [ s3 ] that for pulses with fast ramp times , the effect of bias ( ) is negligible for and switching energy is obtained fairly accurately even if we set .by contrast for pulses with slow ramp time , switching energy can be made arbitrarily small for and the actual switching energy is determined entirely by the that is used . in this section we will first show why the switching energy can be arbitrarily small for and then show that for it will saturate in _ case 1 _ but can be made arbitrarily small in _ _ case 2__ .two points are in order .first , the analysis presented here is exact in the absence of noise .if thermal noise is present the analysis may not be true in general and needs to be modified accordingly .second , if in the process of switching , a bit of information is destroyed as in two inputs and one output gates ( e.g. and/or ) , then there will be a finite switching energy even for adiabatic switching . 
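Using the same energy() and relax() helpers from the sketch above (assumed to be in scope), one can reproduce the trend described for fig. [dbi]: for a fast pulse, a bias of up to roughly a tenth of the switching field barely changes the total switching energy, while case 1 (initial magnetization opposite to the bias) dissipates slightly more on turn-on and case 2 slightly less.

```python
import numpy as np

# case 1: initial magnetization opposite to the bias; case 2: along the bias
for ratio in (0.01, 0.05, 0.1, 0.2):
    d1 = fast_pulse_dissipation(hp=1.0, hb=ratio, theta_init=np.pi)
    d2 = fast_pulse_dissipation(hp=1.0, hb=ratio, theta_init=0.0)
    print(f"hb/H = {ratio:4.2f}:  case 1 -> {d1:5.3f} KV,  case 2 -> {d2:5.3f} KV")
```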
gradual _ turn - on _ of the pulse corresponds to increasing the pulse in many small steps .fig.[adiabatic progression]a shows the energy landscape . as the field is gradually turned - on the energy contours change little by little from top to bottom .the minimum of energy gradually shifts from point 1 ( or 4 ) to point 2 .magnetization hops from one minimum of energy to the other .but why is it that gradual turn - on of the pulse dissipates less than sudden turn - on ? +if the external pulse is turned on to in equal steps , we show that there is equal amount of dissipation at each step .then total dissipation is times that of each step .we show that dissipation of each step is proportional to ; hence as the number of steps increases , dissipation decreases as and in the limit of , ( this is not unlike a similar argument that has been given for charging up a capacitor adiabatically ) . at each stepwhen the pulse is increased by , the dissipated energy is the difference between initial and final state energies .+ such a situation is illustrated in fig.[adiabatic progression]a where denotes a minimum on an energy contour corresponding to ( magnitude of the pulse after steps ) . when the pulse is stepped up to ,magnetization suddenly finds itself at point ( initial state ) and falls down to ( final state ) .note that dissipation is and not .this is because when the field suddenly changes from to , magnet has not had time to relax and dissipate energy .here we use and as generic notations for initial and final energy of any step . can be found by finding the which corresponds to point ( the minimum of energy with ) and substituting it in equation [ energy ] with . with straightforward algebrawe get .equation [ low field dissipation on ] can be used to calculate . using the identities and , the dissipated energy per stepis obtained as for gradual _ turn - off _ consider points , and .when , magnetization is at and after the pulse is decreased by one step to , it finds itself at , falls down to dissipating the difference . can be found by finding the which corresponds to point ( the minimum of energy with ) and substituting it in equation [ energy ] with .we get .again equation [ low field dissipation on ] can be used to give . using the identities and , we obtain for the dissipated energy per step the switching energy is sum of the dissipation values for _ turn - on _ : and _ turn - off _ : which in the limit of , tends to 0 ( ) . for _ turn - on _let s consider _ case 1 _ first where initial magnetization and are in opposite directions ( point in fig.[adiabatic progression]b ) . as the field is gradually turned - on, magnetization starts from point and hops from one minimum of energy to the next .increasing the number of steps brings the minima closer to each other so that magnetization stays in its ground state while being switched .however when magnetization gets to point , situation changes . at that pointthe energy barrier which formerly separated the two minima on the two sides disappears .magnetization falls down from point a to b and dissipates the energy difference .this sudden change in the minimum of energy occurs no matter how slow the pulse is turned on and causes the switching energy to saturate so long as .quantitatively this can be seen by plotting vs. ( fig.[adiabatic progression]c ) using equation [ energy ] . 
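The step-by-step argument can also be checked directly: raise the hard-axis pulse in n equal increments, let the magnet settle into the nearest energy minimum after each step, and sum the energy drops. With no bias the turn-on dissipation falls off roughly as 1/n; with a bias and a case-1 starting state it saturates because of the jump that occurs when the metastable minimum disappears (discussed in the next paragraph). This again reuses energy() and relax() from the landscape sketch above and is only an illustrative numerical check.

```python
import numpy as np

def stepped_turn_on(n_steps, hb=0.0, theta_init=0.0, hp_max=1.0):
    """Total turn-on dissipation (units of K V) when the hard-axis pulse is raised
    to hp_max in n_steps equal increments, the magnet re-equilibrating after each step."""
    theta = theta_init
    total = 0.0
    for step in range(1, n_steps + 1):
        hp = hp_max * step / n_steps
        new_theta = relax(theta, hp, hb)          # settle into the nearest minimum
        total += energy(theta, hp, hb) - energy(new_theta, hp, hb)
        theta = new_theta
    return total

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        d0 = stepped_turn_on(n, hb=0.0)                      # no bias: tends to 0 as n grows
        d1 = stepped_turn_on(n, hb=0.05, theta_init=np.pi)   # case 1: saturates at the jump
        print(f"n = {n:4d}:  hb=0 -> {d0:.4f} KV,   case 1 -> {d1:.4f} KV")
```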
when the left solid curve is traced from ,it is evident that there is a discontinuous jump in the values which minimize energy when the pulse is increases from to in infinitesimal steps .this discontinuity goes away only when ( right solid curve ) . in _ case 2_ , magnetization starts from point , i.e. ( see fig.[adiabatic progression]b and c ) , gets to point b at which there is _ no _ sudden change of minimum and as the pulse is increased further to , it gradually moves to point . during _ turn - off _ in both cases 1 and 2 , magnetization gradually moves from ( see fig.[adiabatic progression]c ) point to b and then finally to point all along staying in its minimum of energy with no discontinuity .dissipation tends to zero as the pulse is turned off in infinitesimal steps .+ is less than the barrier height .the dashed line is plotted using equation [ adis].,title="fig:",width=302,height=226 ] + in the slow limit the entire dissipation is determined by the energy difference between points and , in fig.[adiabatic progression]b . for a given , one has to find that particular value of for which the local energy maximum in the middle disappears which means that the second derivative of energy with respect to must be zero ( no curvature ) .since magnetization has been in the minimum of energy while getting to point , first derivative of energy with respect to must also be equal to zero . under these conditions, the value of at and subsequently can be found using equation [ energy ] . can be found as the true minimum of energy from equation [ energy ] where the first derivative of energy with respect to is zero but the second derivative is not .what affects is the relative magnitude of and .it is not possible to give an analytical closed form expression for this saturating value of dissipation . instead we venumerically plotted dissipation versus ( solid curve in fig.[adiabatic dissipation ] ) . for small values of , dissipation can be written as where the value of is obtained by an almost perfect fit to the solid curve for .the dashed curve is plotted using equation [ adis ] . as is evident from fig.[adiabatic dissipation ] , this equation is fairly accurate .there is some digression from the actual value of dissipation for large values of which are not of practical interest especially for which alone can switch the magnet and is completely an unwanted situation .+ it is important to note that the switching energy in the adiabatic limit is case dependent . for _ case 1 _, it is given by equation [ adis ] and it is not zero as it might have been expected for dissipation in the adiabatic limit . interestingly if was equal to 1 , the dissipation would be equal to the energy difference between initial and final states ( see points and in fig.[adiabatic progression]b ) .however the actual value is significantly smaller .+ dissipation in both the fast and slow limits can be casted into a single equation in the fast limit , is the magnitude of the pulse while in the slow limit , is related to the magnitude of the small bias field as states above . is the height of the anisotropy energy barrier separating the two stable states of the magnet , and has to be large enough so that the magnet retains its state while computation is performed without thermal fluctuations being able to flip it .the retention time for a given can be calculated using where is the attempt frequency with the range which depends in a nontrivial fashion on variables like anisotropy , magnetization and damping . 
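The retention-time estimate quoted above is a simple Neel-Arrhenius relation, retention ~ exp(KV / k_B T) / f_0. The short snippet below evaluates it for a few barrier heights, taking an attempt frequency of 1 GHz (the value assumed later in the text) and room temperature; these are illustrative inputs only.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def retention_time(barrier_ev, attempt_frequency_hz=1e9, temperature_k=300.0):
    """Neel-Arrhenius estimate of the retention time for an energy barrier KV given in eV."""
    return np.exp(barrier_ev / (K_B_EV * temperature_k)) / attempt_frequency_hz

if __name__ == "__main__":
    for barrier in (0.4, 0.5, 0.6, 0.8):
        print(f"KV = {barrier:.1f} eV  ->  retention ~ {retention_time(barrier):.2e} s")
```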
+thus far we ve shown switching energy in the two limiting cases of and . to understand how switching energy changes in between and also how fast it decreases we need to start from the llg equation which in the gilbert form reads : and in the standard formreads : is the gyromagnetic ratio of electron and its magnitude is equal to in si and in cgs system of units . is the phenomenological dimensionless gilbert damping constant . is the magnetization . here where . in general can be derived as the overall effective field : .+ the following expressions are all equivalent statements of dissipated power : the dissipated power has to be integrated over time to give the total dissipation . in general ,llg can be solved numerically using the method . to obtain generic results that are the same for various parameters, we recast llg and the dissipation rate into a dimensionless form .this will also show the significance of and demonstrate why for ramp times exceeding , there is a significant drop in dissipation .+ using scaled variables and equation [ llg2 ] in dimensionless form can be written as + where with given by equation [ tau_c importance ] .the energy dissipation normalized to can be written as to estimate the time constant involved in switching a magnet it is instructive to plot the integrand appearing above in equation [ dissipated energy ] assuming a step function for and obtaining the corresponding from equation [ llgn ] .note that the integrands die out exponentially for a wide range of s from to .in other words , all the curves ( ignoring the oscillations ) can be approximately described by thus suggesting that the approximate time constant is + this is more evident from fig.[ramp time ] where we show the energy dissipation for pulses with different ramp times .the dissipated energy drops when exceeds as we might expect , but the drop is sharper than an rc circuit . needless to say ,the dissipation values calculated from llg equation for the two limits of fast pulse and adiabatic pulse are consistent with the values calculated using energetics previously .fig.[ramp time]a shows the _ turn - on _ dissipation where _ case 1 _ has saturated and _ case 2 _ goes down as ramp time is increased .the curve in the middle is the case with infinitesimal bias and it is just provided for reference .fig.[ramp time]b shows the _ turn - off _ dissipation where both cases 1 and 2 dissipate arbitrarily small amounts as the ramp time is increased . with slow pulses , overall switching energy of _ case2 _ is very small and the entire switching energy of _ case 1 _ essentially occurs during _ turn - on _ which is illustrated in fig.[ramp time]c .this dissipation was discussed in section [ s52 ] ; and it is associated with the sudden fall down from point a to b ( see fig.[adiabatic progression]b , c ) .it has a saturating nature and will never become zero . as is applied more and more gradually ,the dissipated power in fig.[ramp time]c becomes narrower and taller . in the true adiabatic limitit will become a delta function occurring for one particular value of .fig.[invchaindiss]a shows an array of spherical nanomagnets ( mqca ) that interact with each other via dipole - dipole coupling .the objective is to determine the switching energy if we are to switch magnet 2 according to the state of magnet 1 . in section [ s71 ] we will show a clocking scheme under which propagation of information can be achieved and basically shows how magnets can be used as _ cascadable logic _ building blocks . 
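Before moving to coupled magnets, here is a self-contained sketch of the single-magnet calculation just described: the dimensionless LLG equation is integrated for a trapezoidal hard-axis pulse and the dissipated energy, 2*alpha * integral of |m x h|^2 dtau in units of KV, is accumulated along the trajectory. The normalization (time in units of (1+alpha^2)/(gamma*H_K), fields in units of H_K), the damping constant, the bias value, and the pulse timing are assumptions chosen for illustration; the qualitative expectation is a sharp drop in dissipation once the ramp time exceeds the critical time.

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA = 0.1      # Gilbert damping (illustrative value)
HB = 0.02        # easy-axis bias in units of Hk (illustrative value)

def pulse(tau, ramp, hold, hp_max=1.0):
    """Trapezoidal hard-axis pulse: linear ramp up, hold, linear ramp down."""
    if tau < ramp:
        return hp_max * tau / ramp
    if tau < ramp + hold:
        return hp_max
    if tau < 2 * ramp + hold:
        return hp_max * (2 * ramp + hold - tau) / ramp
    return 0.0

def rhs(tau, y, ramp, hold):
    """Dimensionless LLG plus a running dissipation integral.

    Assumed normalization: time in units of (1+alpha^2)/(gamma*Hk), fields in
    units of Hk, energy in units of K*V; easy axis along z, hard-axis pulse along x."""
    m = y[:3]
    h = np.array([pulse(tau, ramp, hold), 0.0, m[2] + HB])   # anisotropy field mz, plus bias
    mxh = np.cross(m, h)
    dm = -mxh - ALPHA * np.cross(m, mxh)
    dW = 2.0 * ALPHA * np.dot(mxh, mxh)                      # dW/dtau in units of K*V
    return np.concatenate([dm, [dW]])

def switching_energy(ramp, hold=20.0, tail=200.0, m0=(0.0, 1e-3, 1.0)):
    """Dissipated energy (units of K*V) and final mz for one trapezoidal pulse."""
    m0 = np.asarray(m0, dtype=float)
    m0 = m0 / np.linalg.norm(m0)
    t_end = 2 * ramp + hold + tail
    sol = solve_ivp(rhs, (0.0, t_end), np.append(m0, 0.0), args=(ramp, hold),
                    rtol=1e-8, atol=1e-10, max_step=min(ramp, 1.0) / 5)
    return sol.y[3, -1], sol.y[2, -1]

if __name__ == "__main__":
    for ramp in (0.1, 1.0, 10.0, 100.0):
        w1, mz1 = switching_energy(ramp, m0=(0.0, 1e-3, -1.0))   # case 1: starts against the bias
        w2, _ = switching_energy(ramp, m0=(0.0, 1e-3, +1.0))     # case 2: starts along the bias
        print(f"ramp = {ramp:6.1f}: case 1 -> {w1:5.3f} KV (mz = {mz1:+.2f}), case 2 -> {w2:5.3f} KV")
```

The sketch relies on tight integration tolerances instead of explicitly renormalizing the magnetization; for a qualitative scan of the ramp-time dependence this is sufficient.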
in section [ s72 ] , we briefly go over the method and equations used to simulate the dynamics and dissipation of the coupled magnets . in section [ s73 ]we analyze the dissipation of the chain of inverters where we show that after cascading the magnetic bits , dissipation changes linearly with the number of magnets that the pulse is exerted on .this shows that the switching energy of larger more complicated circuits can be calculated using the one - magnet results presented in this paper at least approximately .+ + in the introduction we mentioned that in the clocking scheme the role of the clock field is to provide energy whereas field of another magnet acts as a guiding input . using a clock we can operate an array of exactly similar magnets as a chain of inverters .fig.[invchaindiss]a shows a 3 phase inverter chain where the unit cell is composed of 3 magnets .each magnet has two stable states showed as _ up _ and _ down _ in the figure .we want to switch magnet 2 according to the state of magnet 1 .first consider only magnets 1 and 2 .we ve already explained ( see section [ s1 ] ) how magnet 1 can determine the final state of magnet 2 .but what happens if more magnets are present ?+ consider magnets 1 , 2 and 3 .just like magnet 1 , magnet 3 also exerts a field on magnet 2 and if it is in the opposite direction can cancel out the field of magnet 1 . to overcome this, we apply the pulse to magnet 3 as well thereby diminishing the exerted field of magnet 3 on magnet 2 so that magnet 1 becomes the sole decider of the final state of magnet 2 . in the processthe data in magnet 3 has been destroyed ( it will end up wherever magnet 4 decides ) .it takes 3 pulses to transfer the bit ( in an inverted manner ) in magnet 1 to magnet 4 .magnet 4 has been included because it affects the dissipation of magnet 3 through affecting its dynamics .inclusion of more magnets to the right or left of the array will not change the quantitative or qualitative results of this paper .next we ll briefly go over the method used to simulate the chain of inverters .equations [ llgn ] ( with ) and [ dissipated energy ] are used to simulate the dynamics and dissipation of each magnet respectively .the overall scaled ( divided by ) magnetic field of equation [ llgn ] for each magnet at each instant of time is modified to composed of the applied pulse : the anisotropy ( internal ) field of each magnet : and exerted dipolar fields of other magnets which in general in cgs system of units reads all field values are time dependent . here denotes any one magnet and runs over magnetic moments of the other magnets . though this equation can be simplified for an array of magnets along the same line , in this form it can be used for more complicated arrangement of magnets .fig.[invchaindiss]b shows the llg simulations of the chain of inverters where magnet 2 is switched solely according to the state of magnet 1 irrespective of its history or the state of magnets 3 and 4 .fig.[invchaindiss]c shows dissipation of the entire array after one application of the pulse as a function of ramp time .the pulse is exerted on magnets 2 and 3 which accounts for the value in the fast limit .this essentially points out that after cascading these logic building blocks , dissipation changes linearly with the number of magnets .+ in the slow limit , depending on the initial configuration , dissipation will be affected .the 4 magnet array can initially be in any of its 16 possible states .some configurations saturate and some do nt . 
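For reference, the dipolar coupling used in the chain simulation is just the point-dipole field. A sketch in CGS units follows; the numerical values for the moment and the spacing are invented for illustration, and a finite magnet would need corrections to the point-dipole form.

```python
import numpy as np

def dipole_field(moment, r_vec):
    """Field (CGS, Oe) of a point dipole `moment` (emu) at displacement r_vec (cm):
    H = (3 (m . r_hat) r_hat - m) / |r|^3."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (3.0 * np.dot(moment, r_hat) * r_hat - moment) / r ** 3

if __name__ == "__main__":
    # illustrative numbers: Ms ~ 800 emu/cc, a (10 nm)^3 magnet, neighbour 30 nm away along the chain
    m = 800.0 * (10e-7) ** 3 * np.array([0.0, 0.0, 1.0])     # moment in emu
    print(dipole_field(m, np.array([30e-7, 0.0, 0.0])), "Oe")
```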
herethe field of magnet 1 plays the role of the bias field for magnet 2 and the field of magnet 4 is like another bias field on magnet 3 which accounts for the 3 groups of curves in fig.[invchaindiss]c .the upper curves correspond to the situation where initial magnetization of both magnets 2 and 3 are opposite to the fields exerted from magnets 1 and 4 respectively .the middle curves correspond to only one of magnets 2 or 3 initially being opposite to the exerted fields of magnet 1 or 4 respectively .the lower curves correspond to both magnets 1 and 3 initially being in the same direction as the exerted fields from magnets 2 and 4 respectively . + an added complication is the field of the other neighbor ( magnet 3 ) which is diminished in the direction but has a non - negligible component exerted on magnet 2 .all this directed field does is to wash away a tiny bit the effect of the field of magnet 1 which has little bearing on the qualitative or quantitative results as illustrated in fig.[invchaindiss]c .the speed of switching can be increased by increasing the magnitude of the external pulse above .larger fields will dissipate more energy but have the advantage of aligning the magnet faster during the turn - on segment but are of no use for increasing the speed of the turn - off segment because the magnet relaxes to its stable state under its own internal field . if can be altered , then it is a better idea to increase and always set . this waythe speed of switching is increased by shortening the time of both turn - on and turn - off segments .+ consider equation ( [ tau_c importance ] ) .increasing shortens the switching time constant ( note that is usually less than 1 ) ; however this parameter is not very controllable in experiments . is a physical constant and can not be altered .so to increase the switching speed , one has to increase .thermal stability of a magnet requires to be larger than a certain amount for the desired retention time .for instance with an attempt frequency of about 1ghz ( see the discussion at the end of [ s5 ] ) and of about 0.5 ev , magnet is stable for about 0.5 seconds which is large enough because switching takes place in the nano - second scale .a higher retention time requires higher .once is set because of stability requirements , the only way to increase is to decrease .assuming that volume is magnetized to saturation , is the magnetic moment of the magnet . is the number of spins giving rise to the magnetization and is bohr magneton .so decreasing translates to making the magnet smaller or decreasing its saturation magnetization .+ the discussion just presented is similar to the theory of scaling in cmos technology where decreasing the capacitance causes an increase in the switching speed by decreasing the time constant . with the same operating voltage , smaller capacitance results in lower number of charges stored on the capacitor . in the case of cmos , as decreases , energy dissipated i.e. also decreases . in the case of magnet however , energy dissipation is fixed around so for a lower , dissipation of the ferro - magnetic logic element ( already very small ) is _ not _ altered ; however one might be able to reduce the dissipated energy in the external circuitry since it needs to provide the energy for a shorter period of time .again we should emphasize that a thorough analysis of external dissipation also has to be done .this has to do with generating the external source of energy for switching . 
in the case of mqca circuitsthis is done by running currents through wires and generating magnetic fields . in principle , spin transfer torque phenomena or electrically controlled multi - ferroicity could also be used to provide the source of energy .these methods would also have energy dissipation associated with them .a complete lay - out circuit is necessary to properly evaluate the integration density of logic circuits made of magnets .for example fringing fields and unwanted cross talks have to be taken into account .external circuitry will take up space .efficient methods have to be developed to porperly address these issues .one component of the lay - out is the magnetic logic bit itself which we discuss here .the barrier height between the stable states of a magnet can be engineered by adjusting ( anisotropy constant ) and ( volume ) . increasing the anisotropy constant is of great interest for the magnetic storage industry because it allows stable magnets of smaller volume that translates to higher densities .many experiments report values on the order of a few .this results in stable magnets with volumes of only 10s of ; which means that stable magnets can be made as small as a few in each dimension . even though a complete lay - out is necessary , nevertheless these numbers are very promising and could potentially result in very high integration densities .in this paper we analyzed the switching energy of single domain nanomagnets used as cascadable logic building blocks .a magnetic pulse was used to provide the energy for switching and a bias field was used as an input to guide the switching .the following conclusions can be drawn from this study .+ ( 1 ) through analyzing the complete dependence of the switching energy on ramp time of the pulse , it was concluded that there is a significant and sharp drop in dissipation for ramp times that exceed a critical time given by equation [ tau_c importance ] whose significance is separating the energy dissipation characteristic of a fast pulse ( small ramp time ) and energy dissipation characteristic of a slow pulse ( big ramp time ) .+ ( 2 ) the switching energy can be described by a single equation ( equation [ general ] ) in both fast and slow limits for trapezoidal pulses analyzed in this paper . in the fast limitthe effect of the bias field or equivalently the field of neighboring magnet in mqca systems is negligible so long as the bias field is less than 10th of the switching field of the magnet . in the slow limithowever , dissipation is largely determined by the value of the bias field .+ ( 3 ) by evaluating switching energy of both one magnet and a chain of inverters for mqca systems , it was shown that the switching energy increases linearly with the number of magnets so that the one magnet results provided in this paper can be used to calculate the switching energy of larger more complicated circuits , at least approximately .+ ( 4 ) practical issues such as dissipation versus speed , increasing the switching speed and scaling were discussed qualitatively .it was concluded that by proper designing , ferromagnetic logic bits can have scaling laws similar to the cmos technology . 
+ noise was not directly included in the models ; however we took it into account indirectly : thermal noise is the limiting factor on the anisotropy energy ( that determines the magnet s thermal stability ) of a magnet which we discussed thoroughly .thermal noise also limits the lowest possible magnitude of the bias field ( or equivalently coupling between magnets in mqca systems ) .we ve provided the results for a wide range of bias values .more thorough discussions of dissipation in the external circuitry can be found in references , .100 s. salahuddin and s. datta,``interacting systems for self correcting low power systems '' , appl .lett . , vol.90 , pp.093503.1 - 093503.3 , feb .cowburn and m.e .welland,``room temperature magnetic quantum cellular automata '' , science , vol .287 , pp.1466 - 1468 , feb . 2000 .cowburn , a.o .adeyeye and m.e .welland,``controlling magnetic ordering in coupled nanomagnet arrays '' , new j. of phys . 1 , pp.16.1 - 16.9 , nov .a.imre , g. csaba , l. ji , a. orlove , g. h. bernstein and w. porod,``majority logic gate for magnetic quantum - dot cellular automata'',science , vol .311 , pp.205 - 208 , jan . 2006 .a. ney , c. pampuch , r. koch and k.h .ploog,``programmable computing with a single magnetoresistivevelement '' , nature , vol .425 , pp.485 - 487 , oct . 2003 .d. a. allwood , gang xiong , m. d. cooke , c. c. faulkner , d. atkinson , n. vernier and r. p. cowburn,``submicrometer ferromagnetic not gate and shift register '' , science , vol .296 , pp.2003 - 2004 , jun . 2002 .allwood , g. xiong , c. c. faulkner , d. atkinson , d. petit and r. p. cowburn , `` magnetic domain - wall logic '' , science , vol .309 , pp.1688 - 1692 , sep . 2002 .nikonov , g.i .bourianoff , p.a .gargini , `` simulation of highly idealized , atomic scale mqca logic circuits '' , http://arxiv.org/ , arxiv:0711.2246v1 [ cond-mat.mes-hall ] , nov . 2007 .g. csaba , w. porod and a. i , csurgay,``a computing architecture composed of field - coupled single domain nanomagnets clocked by magnetic field '' , int . j. circ .appl . , vol .31 , pp.67 - 82 , jan . 2003 .g. csaba , a. imre , g. h. bernstein , w. porad and v. metlushko,``nanocomputing by field - coupled nanomagnets '' , ieee trans . on nanotech . ,vol.1 , pp.209 - 213 , dec .g. csaba , p. lugli and w.porod,``power dissipation in nanomagnetic logic devices '' , 4th ieee conference on nanotechnology , pp.346 - 348 , augg. csaba , p. lugli , a. csurgay and w.porod,``simulation of power gain and gissipation in field - coupled nanomagnets '' , j. comp ., vol . 4 , pp.105 - 110 , aug .m. niemier , m. alam , x. s. hu , g. bernstein , w. porod , m. putney and j. deangelis , `` clocking structures and power analysis for nanomagnet - based logic devices '' , proceedings of the 2007 international symposium on low power electronics and design ( islped ) , new york , acm , 2007 .nikonov , g.i .bourianoff and p.a .gargini , `` power dissipation in spintronic devices out of thermodynamic equilibrium '' , j. super . novel .magn . , vol .19 , no.6 , pp .497 - 513 , aug . 2006 .l. landau and e. lifshitz,``on the theory of the dispersion of magnetic permeability in ferromagnetic bodies '' , phys .z. sowjetunion , vol .8 , pp.153 - 169 , 1935 .gilbert , `` a phenomenological theory of damping in ferromagnetic materials '' , ieee trans . magn .40 , pp.3443 - 3449 , nov . 2004 .b. hillebrands ( editor ) and k. ounadjela ( editor),``spin dynamics in confined magnetic structures i , ii and iii '' , new york , springer , 2001 - 2007 . 
c.h .bennet,``the thermodynamics of computation - a review '' , intern .905 - 940 , dec .likharev and a.n .korotkov,``single - electron parametron : reversible computation in a discrete - state system '' , science , vol .763 - 765 , aug .kummamuru , a.o .orlov , r. ramasubramaniam , c.s .lent , g.h .bernstein , and g. snider , `` operation of a quantum - dot cellular automata ( qca ) shift register and analysis of errors '' , ieee tran . elec . devi . ,vol.50 , pp.1906 - 1913 , sep .r. street and j.c .woolley , `` a study of magnetic viscosity '' , proc ., sec . a , vol .62 , pp.562 - 572 , sep . 1949l. neel , `` thermoremanent magnetization of fine powders '' , rev .293 - 295 , jan .brown,``thermal fluctuations of a single - domain particle '' , phys .130 , pp.1677 - 1686 , jun . 1963 .gaunt,``the frequency constant for thermal activitation of a ferromagnetic domain wall '' , j. appl .48 , pp.3470 - 3474 , aug .the isotropic terms , like the demagnetizing field for a sphere or the weiss field have not been taken into account because they can be expressed as where is the magnetization and is an scalar .based on the llg equation , fields of this form will _ not _ alter the dymanics of magnetization and have no bearing on dissipation . in principle magnet a can switch magnet b unidirectionally with no need for an external pulse given that : holds .this entails designing circuits with magnets of different parameters ( e.g. volume ) so no two magnets in the circuit can have the same parameters ; not to mention the complexities caused by the fields exerted from other neighbors .this essentially states that it is not a matter of principle that interacting magnets must have saturating dissipation as was previously suggested .this is also shown by analyzing the dissipation of a chain of coupled magnets where for some configurations dissipation arbitrarily goes down whereas for some other configurations it saturates .the concepts behind such effects can easily be traced back to the two cases ( 1 and 2 ) discussed in the context of one magnet and a bias field .cavin iii , v.v .zhirnov , j. a. hutchby , and g. i. bourianoff , `` energy barriers , demons , and minimum energy operation of electronic devices ( plenary paper ) '' , proceedings of spie , vol .5844 , pp.1 - 9 , may 2005 .sun and x.r .wang , `` fast magnetization switching of stoner particles : a nonlinear dynamics picture '' , phys .174430 - 1 to 174430 - 9 , may 2005 .note that the the term should not have been included in equation 4 of reference 1 .however this will not change the results of that paper since this term was assumed to be zero anyways .shouheng sun , c. b. murray , d. weller , l. folks , and a. moser , `` monodisperse fept nanoparticles and ferromagnetic fept nanocrystal superlattices '' , science , vol .287 , pp.1989 - 1992 , mar .wu , c. liu , l.li , p. jones , r.w .chantrell and d. weller , `` nonmagneti shell in surfactant - coated fept nanoparticles '' , j. appl .95 , no.11 , pp.6810 - 6812 , jun . 2004 .a. perumal , h.s .ko and s.c .shin , `` magnetic properties of carbon - doped fept nanogranular films '' , appl .83 , no.16 , pp.3326 - 3328 , oct . 2003 .k. elkins , d. li , n. poudyal , v. nandwana , z. jin , k. chen and j.p .liu , `` monodisperse face - centered tetragonal fept nanoparticles with giant coercivity '' , j. phys .38 , pp.2306 - 2309 , juljackson , `` classical electrodynamics '' , new york , wiley , 1999 , pp.198 - 200 .
power dissipation in switching devices is believed to be the single most important roadblock to the continued downscaling of electronic circuits. there is currently a lot of experimental effort to implement switching circuits based on magnets, and it is important to establish the power requirements of such circuits and their dependence on various parameters. this paper analyzes the switching energy dissipated in the switching process of single-domain ferromagnets used as _cascadable logic_ bits. we obtain generic results that can be used for comparison with alternative technologies or to guide the design of magnet-based switching circuits. two central results are established. one is that the switching energy drops significantly when the ramp time of the external pulse exceeds a critical time; this drop is more abrupt than what is normally expected of adiabatic switching of a capacitor. the other is that, under the switching scheme that allows for logic operations, the switching energy can be described by a single equation in both the fast and slow limits. furthermore, these generic results are used to examine quantitatively the possible operation frequencies and integration densities of such logic bits, showing that nanomagnets can have scaling laws similar to cmos technology.
integrity constraints provide means for ensuring that database evolution does not result in a loss of consistency or in a discrepancy with the intended model of the application domain .a relational database that do not satisfy some of these constraints is said to be inconsistent . in practiceit is not unusual that one has to deal with inconsistent data , and when a conjunctive query ( cq ) is posed to an inconsistent database , a natural problem arises that can be formulated as : _ how to deal with inconsistencies to answer the input query in a consistent way ? _this is a classical problem in database research and different approaches have been proposed in the literature .one possibility is to clean the database and work on one of the possible coherent states ; another possibility is to be tolerant of inconsistencies by leaving intact the database and computing answers that are `` consistent with the integrity constraints '' . in this paper, we adopt the second approach which has been proposed by under the name of _ consistent query answering _( cqa ) and focus on the relevant class of _ primary key _ constraints . formally , in our setting : a database is _ inconsistent _ if there are at least two tuples of the same relation that agree on their primary key ; a _ repair _ of is any maximal consistent subset of ; and a tuple of constants is in the _ consistent answer _ to a cq over if and only if , for each repair of , tuple is in the ( classical ) answer to over .intuitively , the original database is ( virtually ) repaired by applying a minimal number of corrections ( deletion of tuples with the same primary key ) , while the consistent answer collects the tuples that can be retrieved in every repaired instance .cqa under primary keys is conp - complete in data complexity , when both the relational schema and the query are considered fixed . due to its complex nature ,traditional rdbms are inadequate to solve the problem alone via sql without focusing on restricted classes of cqs . actually ,in the unrestricted case , cqa has been traditionally dealt with logic programming .however , it has been argued that the practical applicability of logic - based approaches is restricted to data sets of moderate size .only recently , an approach based on binary integer programming has revealed good performances on large databases ( featuring up to one million tuples per relation ) with primary key violations . in this paper, we demonstrate that logic programming can still be effectively used for computing consistent answers over large relational databases .we design a novel decomposition strategy that reduces ( in polynomial time ) the computation of the consistent answer to a cq over a database subject to primary key constraints into a collection of smaller problems of the same sort . 
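To make the notions of repair and consistent answer concrete, the following minimal Python sketch (an illustration, not taken from the paper) enumerates the repairs of a toy relation whose primary key is violated and intersects the query answers over all of them. The relation name R and the unary query are invented for the example, and the brute-force enumeration is exponential in the number of conflicting keys, which is precisely what motivates the decomposition strategy developed next.

```python
from itertools import product

# Toy relation R(key, value) with the first attribute as primary key.
# The two tuples sharing key 1 violate the constraint.
R = [(1, 'a'), (1, 'b'), (2, 'a')]

def repairs(rel):
    """All repairs: keep exactly one tuple per key (maximal consistent subsets)."""
    keys = sorted({t[0] for t in rel})
    groups = [[t for t in rel if t[0] == k] for k in keys]
    return [set(choice) for choice in product(*groups)]

def answer(db):
    """Answer to the query q(v) :- R(k, v) over a single (consistent) instance."""
    return {v for (_, v) in db}

# A value is a consistent answer iff it is returned over *every* repair.
consistent = set.intersection(*(answer(r) for r in repairs(R)))
print(consistent)   # {'a'}: 'a' is derivable in both repairs, 'b' only in one
```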
at the core of the strategyis a cascade pruning mechanism that dramatically reduces the number of key violations that have to be handled to answer the query .moreover , we implement the new strategy using answer set programming ( asp ) , and we prove empirically the effectiveness of our asp - based approach on existing benchmarks from the database world .in particular , we compare our approach with some classical and optimized encodings of cqa in asp that were presented in the literature .the experiment empirically demonstrate that our logic - based approach implements cqa efficiently on large data sets , and can even perform better than state - of - the - art methods .we are given two disjoint countably infinite sets of _ terms _ denoted by and and called _ constants _ and _ variables _ , respectively .we denote by sequences ( or sets , with a slight abuse of notation ) of variables , and by sequences of terms .we also denote by ], is the subsequence .for example , if and , then .a ( _ relational _ ) _ schema _ is a triple where is a finite set of _ relation symbols _ ( or _ predicates _ ) , is a function associating an _arity _ to each predicate , and is a function that associates , to each , a nonempty set of positions from ] , ] .a bcq is _ true _ in , denoted , if .consistent answer _ to a cq over a database ( w.r.t . ) , denoted , is the set of tuples .clearly , holds .a bcq is _ consistently true _ in a database ( w.r.t . ) , denoted , if .to deal with large inconsistent data , we design a strategy that reduces in polynomial time the problem of computing the consistent answer to a cq over a database subject to primary key constraints to a collection of smaller problems of the same sort . to this end , we exploit the fact that the former problem is logspace turing reducible to the one of deciding whether a bcq is consistently true ( recall that the consistent answer to a cq is a subset of its answer ) . hence , given a database over a schema , and a bcq , we would like to identify a set of pairwise disjoint subsets of , called fragments , such that : _ iff there is ] from a triple consists of reducing the arity of by one , cutting down the -th term of each -atom of and , and adapting the positions of the primary key of accordingly .moreover , let ~|~r \in { \mathcal{r}}~\textrm { and } ~i \in [ \alpha(r)]\} ] is _ relevant _ ( w.r.t . ) if contains an atom of the form such that at least one of the following conditions is satisfied : ; or is a constant ; or is a variable that occurs more than once in ; or is a free variable of .an attribute which is not relevant is _ idle _ ) .an example is reported in [ sec : exampleidle ] .the following theorem states that the consistent answer to a cq does not change after removing the idle attributes .[ thm : idle ] consider a cq , the set \in \mathit{attrs}({\sigma})~|~r[i]~\textrm { is relevant w.r.t . }q\} ] , the terms of the -atoms of that are associated to idle attributes can be removed via the rule .hereafter , let us assume that contains no idle attribute , and .program is depicted in figure [ fig : encoding ] . ' '' '' = = = + % + .+ . ` + .+ . +[ 2 mm ] % + .+ . ` + . ` + . ` + [ 2 mm ] % + + .+ .+ , + . + . +[ 2 mm ] % + + .+ . + . + . + . + . + + . + . +[ 2 mm ] % + .+ . + .+ [ 2 mm ] % + . + .+ + .+ ' '' '' _ computation of the safe answer ._ via rule , we identify the set is a substitution and .it is now possible ( rule ) to identify the atoms of that are involved in some substitution . 
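The "safe answer" just introduced can be pictured with a small Python sketch (again an illustration, not the paper's encoding): an answer is safe when it is witnessed by a substitution all of whose atoms carry conflict-free keys, so it survives in every repair and can be computed in polynomial time. The relations R5 and R6 and the query below are invented for the example; the ASP rules walked through next compute the same kind of information declaratively.

```python
from collections import defaultdict

# Toy relations, each with its first attribute as primary key.
R5 = [(1, 10), (1, 11), (2, 10)]     # key 1 is violated
R6 = [(10, 'u'), (20, 'v')]          # consistent

def conflicting_keys(rel):
    """Keys that appear in more than one tuple of the relation."""
    count = defaultdict(int)
    for t in rel:
        count[t[0]] += 1
    return {k for k, c in count.items() if c > 1}

conf5, conf6 = conflicting_keys(R5), conflicting_keys(R6)

# Query q(z) :- R5(x, y), R6(y, z).  A witnessing substitution is "safe" when
# none of its atoms has a conflicting key, so the derived answer is certain.
safe_answers = {
    z for (x, y) in R5 for (y2, z) in R6
    if y == y2 and x not in conf5 and y2 not in conf6
}
print(safe_answers)   # {'u'}: witnessed by R5(2, 10), R6(10, 'u'), both conflict-free
```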
here, for each atom , we recall that is the subsequence of containing the terms in the positions of the primary key of , and we assume that are the terms of in the remaining positions . in particular , we use two function symbols , and , to group the terms in the key of and the remaining ones , respectively .it is now easy ( rule ) to identify the conflicting components involved in some substitution .let .we now compute ( rule ) the safe answers. _ hypergraph construction ._ for each candidate answer that has not been already recognized as safe , we construct the hypergraph associated to the bcq , where , as usual .hypergraph is identified by the functional term , the substitutions of ( collected via rule ) are identified by the set and of functional terms , while the key components of ( collected via rule ) are identified by the set and and of functional terms . to complete the construction of the various hypergraphs, we need to specify ( rules and ) which are the atoms in each hyperedge ._ pruning ._ we are now ready to identify ( rules ) the strongly redundant components and the strongly unfounded substitutions ( as described in section [ sec : pruning ] ) to implement our cascade pruning mechanism .hence , it is not difficult to collect ( rule ) the substitutions that are not unfounded , that we call _residual_. _ fragments identification ._ key components involving at least a residual substitution ( i.e. , not redundant ones ) , can be aggregated in fragments ( rules ) by using the notion of bunch introduced in section [ sec : fragments ] . in particular ,any given fragment associated to a candidate answer , and collecting the key components is identified by the functional term where , for each , the functional term associated to lexicographically precedes the functional term associated to . _ repair construction ._ rules can be evaluated in polynomial time and have only one answer set , while the remaining part of the program can not in general .in particular , rules generate the search space .actually , each answer set of is associated ( rule ) with only one fragment , say , that we call _ active _ in .moreover , for each key component of , answer set is also associated ( rule ) with only one atom of , that we also call _ active _ in .consequently , each substitution which involves atoms of but also at least one atom which is not active , must be ignored in ( rule ) . _new query ._ finally , we compute the atoms of the form via rules .the experiment for assessing the effectiveness of our approach is described in the following .we first describe the benchmark setup and , then , we analyze the results ._ benchmark setup ._ the assessment of our approach was done using a benchmark employed in the literature for testing cqa systems on large inconsistent databases .it comprises 40 instances of a database schema with 10 tables , organized in four families of 10 instances each of which contains tables of size varying from 100k to 1 m tuples ; also it includes 21 queries of different structural features split into three groups depending on whether cqa complexity is conp - complete ( queries ) , ptime but not fo - rewritable ( queries ) , and fo - rewritable ( queries ) .( see [ app : bench ] ) .we compare our approach , named _ pruning _ , with two alternative asp - based approaches . 
in particular ,we considered one of the first encoding of cqa in asp that was introduced in , and an optimized technique that was introduced more recently in ; these are named _ bb_and _ mrt _, respectively ._ bb_and _ mrt_can handle a larger class of integrity constrains than _ pruning _ , and only _mrt_features specific optimization that apply also to primary key violations handling .we constructed the three alternative encodings for all 21 queries of the benchmark , and we run them on the asp solver wasp 2.0 , configured with the iterative coherence testing algorithm , coupled with the grounder gringo ver .4.4.0 .for completeness we have also run claspver .3.1.1 obtaining similar results. waspperformed better in terms of number of solved instances on _ mrt_and _ bb_.the experiment was run on a debian server equipped with xeon e5 - 4610 cpus and 128 gb of ram . in each execution ,resource usage was limited to 600 seconds and 16 gb of ram .execution times include the entire computation , i.e. , both grounding and solving .all the material for reproducing the experiment ( asp programs , and solver binaries ) can be downloaded from www.mat.unical.it/ricca/downloads/mrticlp2015.zip ._ analysis of the results ._ concerning the capability of providing an answer to a query within the time limit , we report that _pruning_was able to answer the queries in all the 840 runs in the benchmark with an average time of 14.6s . _ mrt _ , and _ bb_solved only 778 , and 768 instances within 600 seconds , with an average of 80.5s and 52.3s , respectively .the cactus plot in figure [ fig : comp : cac ] provides an aggregate view of the performance of the compared methods .recall that a cactus plot reports for each method the number of answered queries ( solved instances ) in a given time .we observe that the line corresponding to _ pruning_in figure [ fig : comp : cac ] is always below the ones of _ mrt_and _ bb_. in more detail , _pruning_execution times grow almost linearly with the number of answered queries , whereas _ mrt_and _bb_show an exponential behavior .we also note that _mrt_behaves better than _ bb _ , and this is due to the optimizations done in _mrt_that reduce the search space . the performance of the approaches w.r.t .the size of the database is studied in figure [ fig : comp : avgsol ] . the x - axis reports the number of tuples per relation in tenth of thousands , in the upper plotis reported the number of queries answered in 600s , and in the lower plot is reported the corresponding the average running time .we observe that all the approaches can answer all 84 queries ( 21 queries per 4 databases ) up to the size of 300k tuples , then the number of answered queries by both _ bb_and _mrt_starts decreasing .indeed , they can answer respectively 74 and 75 queries of size 600k tuples , and only 67 and 71 queries on the largest databases ( 1 m tuples ) . instead , _pruning_is able to solve all the queries in the data set . the average time elapsed by running _pruning_grows linearly from 2.4s up to 27.4s ._ mrt_and _ bb_average times show a non - linear growth and peak at 128.9s and 85.2s , respectively .( average is computed on queries answered in 600s , this explains why it apparently decreases when a method can not answer some instance within 600s . 
) the scalability of _pruning_is studied in detail for each query in figures [ fig : ratioscalability](d - f ) , each plotting the average execution times per group of queries of the same theoretical complexity .it is worth noting that _ pruning_scales almost linearly in all queries , and independently from the complexity class of the query .this is because _pruning_is able to identify and deal efficiently with the conflicting fragments .we now analyze the performance of _ pruning_from the perspective of a measure called _ overhead _ , which was employed in for measuring the performance of cqa systems . given a query q the overhead is given by , where is time needed for computing the consistent answer of q , and is the time needed for a plain execution of q where the violation of integrity constraints are ignored .note that the overhead measure is independent of the hardware and the software employed , since it relates the computation of cqa to the execution of a plain query on the same system .thus it allows for a direct comparison of _pruning_with other methods having known overheads . following what was done in , we computed the average overhead measured varying the database size for each query , and we report the results by grouping queries per complexity class in figures [ fig : ratioscalability](a - c ) .the overheads of _pruning_is always below 2.1 , and the majority of queries has overheads of around 1.5 .the behavior is basically ideal for query q5 and q4 ( overhead is about 1 ) .the state of the art approach described in has overheads that range between 5 and 2.8 on the very same dataset ( more details on [ app : bench ] ) .thus , our approach allows to obtain a very effective implementation of cqa in asp with an overhead that is often more than two times smaller than the one of state - of - the - art approaches .we complemented this analysis by measuring also the overhead of _ pruning_w.r.t .the computation of safe answers , which provide an underestimate of consistent answers that can be computed efficiently ( in polynomial time ) by means of stratified asp programs .we report that the computation of the consistent answer with _pruning_requires only at most 1.5 times more in average than computing the safe answer ( detailed plots in [ app : bench ] ) .this further outlines that _pruning_is able to maintain reasonable the impact of the hard - to - evaluate component of cqa .finally , we have analyzed the impact of our technique in the various solving steps of the evaluation .the first three histograms in figure [ fig : grounding ] report the average running time spent for answering queries in databases of growing size for _ pruning_(fig .[ fig : hpr ] ) , _ bb_(fig .[ fig : hbb ] ) , and _ mrt_(fig .[ fig : hmrt ] ) . 
in each bardifferent colors distinguish the average time spent for grounding and solving .in particular , the average solving time over queries _ answered within the timeout _ is labeled solving - sol , and each bar extends up to the average cumulative execution time computed over all instances , where each timed out execution counts 600s .recall that , roughly speaking , the grounder solves stratified normal programs , and the hard part of the computation is performed by the solver on the residual non - stratified program ; thus , we additionally report in figure [ fig : grperc ] the average number of facts ( knowledge inferred by grounding ) and of non - factual rules ( to be evaluated by the solver ) in percentage of the total for the three compared approaches .the data in figure [ fig : grounding ] confirm that most of the computation is done with _pruning_during the grounding , whereas this is not the case for _ mrt_and _ bb_. figure [ fig : grperc ] shows that for _ pruning_the grounder produces a few non - factual rules ( below 1% in average ) , whereas _ mrt_and _bb_produce 5% and 63% of non - factual rules , respectively .roughly , this corresponds to about 23k non - factual rules ( resp ., 375k non - factual rules ) every 100k tuples per relation for _ mrt_(resp ., _ bb _ ) , whereas our approach produces no more than 650 non - factual rules every 100k tuples per relation .logic programming approaches to cqa were recently considered not competitive on large databases affected by primary key violations . in this paper , we proposed a new strategy based on a cascade pruning mechanism that dramatically reduces the number of primary key violations to be handled to answer the query .the strategy is encoded naturally in asp , and an experiment on benchmarks already employed in the literature demonstrates that our asp - based approach is efficient on large datasets , and performs better than state - of - the - art methods in terms of overhead . as far as future workis concerned , we plan to extend the _pruning_method for handling inclusion dependencies , and other tractable classes of tuple - generating dependencies . ,hull , r. , and vianu , v. 1995 . .addison - wesley . , dodaro , c. , and ricca , f. 2014a .anytime computation of cautious consequences in answer set programming . _14 , _ 4 - 5 , 755770 . ,dodaro , c. , and ricca , f. 2014b .preliminary report on wasp 2.0 . _abs/1404.6999_. , bertossi , l. e. , and chomicki , j. 1999 .consistent query answers in inconsistent databases . in _ proceedings of pods99_. 6879 . ,bertossi , l. e. , and chomicki , j. 2003 . answer sets for consistent query answering in inconsistent databases ._ 3 , _ 4 - 5 , 393424. \2003 . .cambridge university press .logic programs for querying inconsistent databases . in _ proceedings of padl03_. lncs , volspringer , 208222 .synthesis lectures on data management .morgan & claypool publishers . , hunter , a. , and schaub , t. , eds .lncs , vol .springer , berlin / heidelberg . ,eiter , t. , and truszczynski , m. 2011 .answer set programming at a glance ._ 54 , _ 12 , 92103 . ,faber , w. , gebser , m. , ianni , g. , kaminski , r. , krennwallner , t. , leone , n. , ricca , f. , and schaub , t. 2013 .asp - core-2 input language format .available at https://www.mat.unical.it/aspcomp2013/files/asp-core-2.03b.pdf . ,ianni , g. , and ricca , f. 2014 . the third open answer set programming competition . _ 14 , _ 1 , 117135 .minimal - change integrity maintenance using tuple deletions ._ 197 , _ 1 - 2 , 90121 . 
,fink , m. , greco , g. , and lembo , d. 2003 .efficient evaluation of logic programs for querying data integration systems . in _ proceedings of iclp03_. lncs , vol .springer , 163177 . , ipeirotis , p. g. , and verykios , v. s. 2007 .duplicate record detection : a survey ._ 19 , _ 1 , 116 . ,fazli , e. , and miller , r. j. 2005 .conquer : efficient management of inconsistent databases . in _ proceedings of sigmod05_. acm , 155166 .first - order query rewriting for inconsistent databases ._ 73 , _ 4 , 610635 . , kaminski , r. , knig , a. , and schaub , t. 2011 .advances in _ gringo _ series 3 . in _ logic programming and nonmonotonic reasoning - 11th international conference , lpnmr 2011 , vancouver , canada , may 16 - 19 , 2011 . proceedings _ , j. p. delgrande and w. faber , eds .lecture notes in computer science , vol . 6645 .springer , 345351 . ,kaufmann , b. , and schaub , t. 2013 .advanced conflict - driven disjunctive answer set solving . in _ijcai 2013 , proceedings of the 23rd international joint conference on artificial intelligence , beijing , china , august 3 - 9 , 2013 _ , f. rossi , ed .ijcai / aaai .classical negation in logic programs and disjunctive databases ._ 9 , _ 3/4 , 365386 . ,greco , s. , and zumpano , e. 2001 . a logic programming approach to the integration , repairing and querying of inconsistent databases . in _ proceedings of iclp01_. lncs , vol .springer , 348364 . , greco , s. , and zumpano , e. 2003 . a logical framework for querying and repairing inconsistent databases ._ 15 , _ 6 , 13891408. \2012 .a dichotomy in the complexity of consistent query answering for queries with two atoms ._ 112 , _ 3 , 7785 . ,pema , e. , and tan , w .- c .efficient querying of inconsistent databases with binary integer programming ._ 6 , _ 6 , 397408 . ,ricca , f. , and terracina , g. 2013 .consistent query answering via asp from different perspectives : theory and practice ._ 13 , _ 2 , 227252 .\2009 . on the consistent rewriting of conjunctive queries under primary key constraints ._ 34 , _ 7 , 578601 .certain conjunctive query answering in first - order logic ._ 37 , _ 2 , 9 .[ sec : appendix ]here we report the proofs of theorems and propositions reported in section [ sec : pruning ] .let us assume that .this means that is true in every repair of .since , by definition , for each repair of , there exists a repair of such that , we conclude that must be true also in every repair of .we we will prove the contrapositive . to this end , let be the bunches of . assume that , for each ] , there exists a repair such that .consider now the instance } r_i ] .hence , in each relation , there are approximately distinct values in the third attribute , each value appearing approximately 10 times . we refrain from reporting here all the asp encodings employed in the experiment since they are very lengthy .instead we report as an example the asp program used for answering query q7 , and provide all the material in an archive that can be downloaded from www.mat.unical.it/ricca/downloads/mrticlp2015.zip .the zip package also contains the binaries of the asp system employed in the experiment .let us classify the variables of : * all the variables are : ; * the free variables are : ; * the variables involved in some join are : ; * the variables in primary - key positions are : ; * the variable in idle positions are : * the variable occurring in relevant positions are : [ [ computation - of - the - safe - answer . 
] ] computation of the safe answer .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ` sub(x , y , z , x1,w , d ) ` + ` : - ` + ` r5(x , y , z ) , r6(x1,y , w ) , r7(y , u , d ) . ` + ` involvedatom(k - r5(x ) , nk - r5(v2,v3 ) ) : - sub(x , y , z , x1,w , d ) , r5(x , v2,v3 ) . `+ ` involvedatom(k - r6(x1 ) , nk - r6(v2,v3 ) ) : - sub(x , y , z , x1,w , d ) , r6(x1,v2,v3 ) . `+ ` involvedatom(k - r7(y ) , nk - r7(v3 ) ) : - sub(x , y , z , x1,w , d ) , r7(y , v2,v3 ) . `+ ` confcomp(k ) : - involvedatom(k , nk1 ) , involvedatom(k , nk2 ) ,nk1 > nk2 .` + ` safeans(z , w , d ) : - sub(x , y , z , x1,w , d ) , not confcomp(k - r5(x ) ) , ` + ` not confcomp(k - r6(x1 ) ) , not confcomp(k - r7(y ) ) . `[ [ hypergraph - construction . ] ] hypergraph construction .+ + + + + + + + + + + + + + + + + + + + + + + + + ` subeq(sid(x , y , z , x1,w ,d ) , ans(z , w , d ) ) : - sub(x , y , z , x1,w , d ) , not safeans(z , w , d ) . ` + + ` compek(k - r5(x ) , ans ) : - subeq(sid(x , y , z , x1,w , d ) , ans ) . `+ ` compek(k - r6(x1 ) , ans ) : - subeq(sid(x , y , z , x1,w , d ) , ans ) . `+ ` compek(k - r7(y ) , ans ) : - subeq(sid(x , y , z , x1,w , d ) , ans ) . `+ ` insubeq(atom - r5(x , y , z ) , sid(x , y , z , x1,w , d ) ) : - subeq(sid(x , y , z , x1,w , d ) , _ ) . `+ ` insubeq(atom - r6(x1,y , w ) , sid(x , y , z , x1,w , d ) ) : - subeq(sid(x , y , z , x1,w , d ) , _ ) . ` + ` insubeq(atom - r7(y , d ) , sid(x , y , z , x1,w , d ) ) : - subeq(sid(x , y , z , x1,w , d ) , _ ) . ` + + ` incompek(atom - r5(x , v2,v3 ) , k - r5(x ) ) : - compek(k - r5(x ) , ans ) , ` + ` involvedatom(k - r5(x ) , nk - r5(v2,v3 ) ) . `+ ` incompek(atom - r6(x1,v2,v3 ) , k - r6(x1 ) ) : - compek(k - r6(x1 ) , ans ) , ` + ` involvedatom(k - r6(x1 ) , nk - r6(v2,v3 ) ) . ` + ` incompek(atom - r7(y , v3 ) , k - r7(y ) ) : - compek(k - r7(y ) , ans ) , ` + ` involvedatom(k - r7(y ) , nk - r7(v3 ) ) . `[ [ pruning . ] ] pruning .+ + + + + + + + + ` redcomp(k , ans ) : - compek(k , ans ) , incompek(a , k ) , ` + ` # count{s : insubeq(a , s ) , subeq(s , ans ) } = 0 . `+ + ` unfsub(s , ans ) : - subeq(s , ans ) , insubeq(a , s ) , incompek(a , k ) , redcomp(k , ans ) . ` + + ` redcomp(k , ans ) : - compek(k , ans ) , incompek(a ,k ) , ` + ` x = # count{s : insubeq(a , s ) , subeq(s , ans ) } ` + ` # count{s : insubeq(a , s ) , unfsub(s , ans ) } > = x. ` + + ` residualsub(s , ans ) : - subeq(s , ans ) , not unfsub(s , ans ) . ` [ [ fragments - identification . ] ] fragments identification .+ + + + + + + + + + + + + + + + + + + + + + + + + + ` sharesub(k1,k2,ans ) : - residualsub(s , ans ) , insubeq(a1,s ) , insubeq(a2,s ) , ` + ` a1 < > a2 , incompek(a1,k1 ) , incompek(a2,k2 ) , k1 < > k2 . ` + + ` ancestorof(k1,k2,ans ) : - sharesub(k1,k2,ans ) , k1 < k2 . `+ ` ancestorof(k1,k3,ans ) : - ancestorof(k1,k2,ans ) , sharesub(k2,k3,ans ) , k1 < k3 . `+ + ` child(k , ans ) : - ancestorof(_,k , ans ) . `+ + ` keycompinfrag(k1 , fid(k1,ans ) ) : - ancestorof(k1,_,ans ) , not child(k1,ans ) . `+ ` keycompinfrag(k2 , fid(k1,ans ) ) : - ancestorof(k1,k2,ans ) , not child(k1,ans ) . `+ + ` subinfrag(s , fid(kf , ans ) ) : - residualsub(s , ans ) , insubeq(a , s ) , ` + ` incompek(a , k ) , keycompinfrag(k , fid(kf , ans ) ) . `+ + ` frag(fid(k , ans),ans ) : - keycompinfrag(_,fid(k , ans ) ) . ` [ [ repairs - construction . ] ] repairs construction .+ + + + + + + + + + + + + + + + + + + + + + ` 1 < = { activefrag(f):frag(f , ans ) } < = 1 : - frag ( _ , _ ) . 
`+ + ` 1< = { activeatom(a):incompek(a , k ) } < = 1 : - activefrag(f ) , keycompinfrag(k , f ) . ` + + ` ignoredsub(s ) : - activefrag(f ) , subinfrag(s , f ) , insubeq(a , s ) , not activeatom(a ) . ` [ [ new - query . ] ] new query .+ + + + + + + + + + + ` q``(s , z , w , d ) : - safeans(z , w , d ) . ` + ` q``(f , z , w , d ) : - frag(f , ans(z , w , d ) ) , not activefrag(f ) . `+ ` q``(f , z , w , d ) : - activefrag(f ) , subinfrag(s , f ) , not ignoredsub(s ) , frag(f , ans(z , w , d ) ) . `we report in this appendix some additional plots . in particular , we provide detailed plots for the overhead of _ pruning_w.r.t . safe answer computation ; scatter plots comparing , execution by execution , _pruning_with _ bb_and _ mrt _ ; and , an extract of concerning the overhead measured for the mip - based approach for easing direct comparison with our results .we report in the following the detailed plots concerning the overhead of _ pruning_w.r.t . the computation of safe answers .the results are reported in three plots grouping queries per complexity class in figures [ fig : ratiosafe ] .one might wonder what is the picture if the asp - based approaches are compared instance - wise .an instance by instance comparison of _ pruning_with _ bb_and _ mrt _ , is reported in the scatter plots in figure [ fig : scatter ] . in these plots a point is reported for each query , where is the running time of _ pruning _ , and is the running time of _ bb_and _ mrt _ , respectively in figure [ fig : scatter : abc ] and figure [ fig : scatter : mrt ] .the plots also report a dotted line representing the secant ( ) , points along this line indicates identical performance , points above the line represent the queries where the method on the -axis performs better that the one in the -axis and vice versa .figure [ fig : comp ] clearly indicates that _ pruning_is also instance - wise superior to alternative methods .
consistent query answering over a database that violates primary key constraints is a classical hard problem in database research that has traditionally been dealt with using logic programming . however , the applicability of existing logic-based solutions is restricted to data sets of moderate size . this paper presents a novel decomposition and pruning strategy that reduces , in polynomial time , the problem of computing the consistent answer to a conjunctive query over a database subject to primary key constraints to a collection of smaller problems of the same sort that can be solved independently . the new strategy is naturally modeled and implemented using answer set programming ( asp ) . an experiment run on benchmarks from the database world proves the effectiveness and efficiency of our asp-based approach , even on large data sets . to appear in theory and practice of logic programming ( tplp ) , proceedings of iclp 2015 . keywords : inconsistent databases , primary key constraints , consistent query answering , asp
the solutions of linear elliptic and parabolic equations , both with cauchy and dirichlet boundary conditions , have a probabilistic interpretation , which not only provides intuition on the nature of the problems described by these equations , but is also quite useful in the proof of general theorems .this is a very classical field which may be traced back to the work of courant , friedrichs and lewy in the 20 s . in spite of the pioneering work of mckean , the question of whether useful probabilistic representations could also be found for a large class of nonlinear equations remained an essentially open problem for many years .it was only in the 90 s that , with the work of dynkin , such a theory started to take shape .for nonlinear diffusion processes , the branching exit markov systems , that is , processes that involve diffusion and branching , seem to play the same role as brownian motion in the linear equations .however the theory is still limited to some classes of nonlinearities and there is much room for further mathematical improvement .another field , where considerable recent advances were achieved , was the probabilistic representation of the fourier transformed navier - stokes equation , first with the work of lejan and sznitman , later followed by extensive developments of the oregon school . in all casesthe stochastic representation defines a process for which the mean values of some functionals coincide with the solution of the deterministic equation .stochastic representations , in addition to its intrinsic mathematical relevance , have several practical implications : \(i ) they provide an intuitive characterization of the equation solutions ; \(ii ) they provide a calculation tool which may replace , for example , the need for very fine integration grids at high reynolds numbers ; \(iii ) by associating a stochastic process to the solutions of the equation , they provide an intrinsic characterization of the nature of the fluctuations associated to the physical system . in some cases the stochastic process is essentially unique , in others there is a class of processes with means leading to the same solution . the physical significance of this feature is worth exploring . a field where stochastic representations have not yet been developed ( and where for the practical applications cited above they might be useful ) is the field of kinetic equations for charged fluids . as a first step towards this goal ,a stochastic representation is here constructed for the solutions of the poisson - vlasov equation . the comments in the final section point towards future work , in particular on how a stochastic representation may be used for a characterization of fluctuations , alternative to existing methods .this is what we call _ the stochastic principle_.consider a poisson - vlasov equation in 3 + 1 space - time dimensions with being a background charge density . passing to the fouriertransform with and , one obtains being the fourier transform of .changing variables to where is a positive continuous function satisfying leads to with .eq.([2.6 ] ) written in integral form , is for convenience , a stochastic representation is going to be written for the following function with a constant and a positive function to be specified later on .the integral equation for is with and eq.([2.9 ] ) has a stochastic interpretation as an exponential process ( with a time shift in the second variable ) plus a branching process . 
is the probability that , given a mode , one obtains a branching with in the volume . is computed from the expectation value of a multiplicative functional associated to the processes .convergence of the multiplicative functional hinges on the fulfilling of the following conditions : \(a ) \(b ) \(c ) condition ( c ) is satisfied , for example , for indeed computing one obtains this integral is bounded by a constant for all , therefore , choosing sufficiently small , condition ( c ) is satisfied .once consistent with ( c ) is found , conditions ( a ) and ( b ) only put restrictions on the initial conditions and the background charge .now one constructs the stochastic process . because is the survival probability during time of an exponential process with parameter and the decay probability in the interval , in eq.([2.9 ] ) is obtained as the expectation value of a multiplicative functional for the following backward - in - time process : starting at , a particle lives for an exponentially distributed time up to time . at its death a coin ( probabilities )is tossed .if two new particles are born at time with fourier modes and with probability density . if only the particle is born and the process also samples the background charge at .each one of the newborn particles continues its backward - in - time evolution , following the same death and birth laws .when one of the particles of this tree reaches time zero it samples the initial condition .the multiplicative functional of the process is the product of the following contributions : - at each branching point where two particles are born , the coupling constant is - when only one particle is born and the process samples the background charge , the coupling is - when one particle reaches time zero and samples the initial condition the coupling is the multiplicative functional is the product of all these couplings for each realization of the process , this process being obtained as the limit of the following iterative process } + g_{2}\left ( \xi _ { 1},\xi _ { 1}^{^{\prime } } , s\right ) \\ & & \times x^{\left ( k\right ) } \left ( \xi _ { 1}-\xi _ { 1}^{^{\prime } } , \xi _ { 2}+s% \frac{\xi _ { 1}}{\gamma \left ( \left| \xi _ { 2}\right| \right ) } , \tau -s\right ) x^{\left ( k\right ) } \left ( \xi _ { 1}^{^{\prime } } , 0,\tau -s\right ) \mathbf{1}_{\left [ s<\tau \right ] } \mathbf{1}_{\left [ l_{s}=0\right ] } \\ & & + g_{1}\left ( \xi _ { 1},\xi _ { 1}^{^{\prime } } \right ) x^{\left ( k\right ) } \left ( \xi _ { 1}^{^{\prime } } , 0,\tau -s\right ) \mathbf{1}_{\left [ s<\tau \right ] } \mathbf{1}_{\left [ l_{s}=1\right ] } \end{aligned}\ ] ] then , is the expectation value of the functional . for example , for the realization in fig.1 the contribution to the multiplicative functional is and with the conditions ( a ) and ( b ) , choosing and the absolute value of all coupling constants is bounded by one .the branching process , being identical to a galton - watson process , terminates with probability one and the number of inputs to the functional is finite ( with probability one ) . 
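The construction can be mimicked on a scalar toy problem. The sketch below is only an illustration of the mechanism (the actual Fourier-transformed Poisson-Vlasov couplings and the mode-splitting into xi' and xi - xi' are not reproduced here): an exponential clock, a coin toss selecting one or two offspring, couplings bounded by one in absolute value, and sampling of the initial condition when a particle reaches time zero. The expectation of the resulting multiplicative functional solves the corresponding scalar integral equation, which is verified against a direct deterministic integration.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 1.0    # rate of the exponential clock
p2  = 0.4    # probability of a binary branching at a death event
f0  = 0.8    # coupling sampled at time zero (initial condition), |f0| <= 1
g1  = 0.9    # one-offspring ("background") coupling,             |g1| <= 1
g2  = 0.7    # two-offspring coupling,                            |g2| <= 1

def sample_functional(tau):
    """One realization of the multiplicative functional of the backward tree."""
    s = rng.exponential(1.0 / lam)
    if s >= tau:                    # the particle survives back to time zero
        return f0
    if rng.random() < p2:           # binary branching: two independent subtrees
        return g2 * sample_functional(tau - s) * sample_functional(tau - s)
    return g1 * sample_functional(tau - s)   # single offspring plus background

def monte_carlo(tau, n=200_000):
    return np.mean([sample_functional(tau) for _ in range(n)])

def deterministic(tau, steps=20_000):
    """X' = -lam*X + lam*(p2*g2*X**2 + (1 - p2)*g1*X), X(0) = f0, solved by Euler;
    this ODE is equivalent to the integral equation the estimator represents."""
    x, dt = f0, tau / steps
    for _ in range(steps):
        x += dt * (-lam * x + lam * (p2 * g2 * x**2 + (1 - p2) * g1 * x))
    return x

tau = 1.0
print("branching Monte Carlo:", monte_carlo(tau))
print("deterministic check  :", deterministic(tau))
```

Because every coupling is bounded by one in absolute value, each realization of the functional is bounded by one as well, so the Monte Carlo average is well behaved; this mirrors the role of conditions (a)-(c) above.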
with the bounds on the coupling constants , the multiplicative functionalis bounded by one in absolute value almost surely .once a stochastic representation is obtained for , one also has , by ( [ 2.8 ] ) , a stochastic representation for the solution of the fourier - transformed poisson - vlasov equation .the results are summarized in the following : * * theorem 2.1 * * _ _ - there is a stochastic representation for the fourier - transformed solution of the poisson - vlasov equation __ _ _ for any arbitrary finite value of the arguments , provided the initial conditions at time zero and the background charge satisfy the boundedness conditions ( a ) and ( b ) . __ as a corollary one also infers an existence result for ( arbitrarily large ) finite time .notice that existence by the stochastic representation method requires only boundedness conditions on the initial conditions and background charge and not any strict smoothness properties .in the past , the fluctuation spectrum of charged fluids was studied either by the bbgky hierarchy derived from the liouville or klimontovich equations , with some sort of closure approximation , or by direct approximations to the n - body partition function or by models of dressed test particles , etc .( see reviews in ) .alternatively , by linearizing the vlasov equation about a stable solution and diagonalizing the hamiltonian , a canonical partition function may be used to compute correlation functions . however , one should remember that , as a model for charged fluids , the vlasov equation is just a mean - field collisionless theory .therefore , it is unlikely that , by itself , it will contain full information on the fluctuation spectrum .kinetic and fluid equations are obtained from the full particle dynamics in the 6n - dimensional phase - space by a chain of reductions . along the way , information on the actual nature of fluctuations and turbulence may have been lost .an accurate model of turbulence may exist at some intermediate ( mesoscopic ) level , but not necessarily in the final mean - field equation .when a stochastic representation is constructed , one obtains a process for which the mean value is the solution of the mean - field equation .the process itself contains more information .this does not mean , of course , that the process is an accurate mesoscopic model of nature , because we might be climbing up a path different from the one that led us down from the particle dynamics .nevertheless , insofar as the stochastic representation is qualitatively unique and related to some reasonable iterative process , it provides a surrogate mesoscopic model from which fluctuations are easily computed .this is what we refer to as _ the stochastic principle_. at the minimum , one might say that the stochastic principle provides another closure procedure .
a stochastic representation for the solutions of the poisson-vlasov equation is obtained . the representation involves both an exponential process and a branching process . besides providing an alternative existence proof and an intuitive characterization of the solutions , the stochastic representation may also be used to obtain an intrinsic definition of the fluctuations .
at the turn of the 20th century studies of heat capacity of solids at low temperatures played an important role in revealing the quantum character of nature . this example vividly illustrates the power of heat capacity measurements in physics . as a matter of fact measurements of heat capacity reveal so great a deal of information about matter that calorimetry has become an indispensable tool for modern day research in chemistry , physics , materials science , and biology .unfortunately , however , calorimetry is a relatively insensitive method , and it is particularly difficult to obtain an accurate absolute value of heat capacity of samples with minute masses . since new materials with interesting physical properties are usually not synthesized in quantity , it is of extreme necessity for the advance of materials science or condensed matter physics to secure a convenient means to measure absolute heat capacity of sub - milligram samples . on another front , the generalization of calorimetry into the dynamic regime has attracted wide attention in recent years . although measurement of a thermodynamic quantity is the usual notion that is tied to calorimetry , it is possible to go beyond this traditional understanding and generalize heat capacity as a dynamic quantity .the concept of dynamic heat capacity appears natural if one recalls that static thermodynamic quantities are time - averaged ( or ensemble - averaged ) . in other words , they are static not because they do not change in time , but because they change too rapidly on the experimental time scale . then , suppose that a system contains a dynamic process relaxing with a characteristic time which lies within our experimental time window , this will result in a time - dependent ( or frequency - dependent ) heat capacity depending on the time scale of measurements .conversely , measurements of dynamic heat capacity of condensed matter would provide insights which may not be available to other dynamic probes .thus , the development of a convenient dynamic calorimeter for solid samples appears to be of great necessity . in this paper, we describe a new type of the ac calorimeter , termed _ peltier ac calorimeter _ ( pac ) , which fills both needs described above ; pac is not only a microcalorimeter capable of measuring heat capacity of sub - milligram samples but also a dynamic calorimeter with wide dynamic range .for the heat capacity measurements of a minute sample adiabatic calorimetry does not appear to be suitable due to the so - called addenda problem ; in other words , calorimetry requires indispensable addenda ( heater and sensor ) to be put on a sample and the mass of the addenda may even be greater than that of the sample in the case of minute samples . 
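As a purely illustrative aside, not a model taken from this paper, the frequency dependence alluded to above is often pictured with a single-relaxation-time form: a slow internal degree of freedom with relaxation time tau contributes to the measured heat capacity only when the probing period is long compared with tau.

```python
import numpy as np

C_inf, dC, tau = 1.0, 0.5, 1.0     # fast part, relaxing part, relaxation time (s)

def C_dynamic(omega):
    """Illustrative single-relaxation-time dynamic heat capacity."""
    return C_inf + dC / (1.0 + 1j * omega * tau)

for omega in (0.01, 1.0, 100.0):   # slow, comparable, and fast probing
    c = C_dynamic(omega)
    print(f"omega = {omega:6.2f}  |C| = {abs(c):.3f}  phase = {np.angle(c):+.3f} rad")
```

With this picture of a frequency-dependent heat capacity in mind, we return to the addenda problem raised above.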
although one may expect the same kind of problem in ac calorimetry , we have devised a way of avoiding the addenda problem in a new ac calorimeter by utilizing the peltier effect of extremely thin thermocouple wires .it is also obvious that an ac calorimeter has a potential of being a dynamic one .suppose that a voltage difference and/or a temperature difference exist across a metallic wire , the electric current and the heat current through the wire can be expressed as where is the resistance , the thermoelectric power , the peltier coefficient , and the thermal conductance .the thermoelectric power and the peltier coefficient are related by .now if an electric current is run through a thermocouple , consisting of two distinct metal wires , under an isothermal condition , then the junction acts as either a heat sink or a heat source depending on the current direction .the heat current caused by this peltier effect is directly proportional to .if an ac electric current at an angular frequency , , is applied to a thermocouple under an isothermal condition , an ac power oscillation at the same frequency will be induced at the junction by the peltier effect .the amplitude of the ac power is equal to ; can be accurately determined , since is well tabulated for thermocouples and the ac current can be measured with high precision .thus it is evident that the peltier effect of a thermocouple junction can be utilized as a power source for ac calorimetry .various ac calorimetric techniques with non - contact energy sources such as chopped light or light emitting diode were developed previously . however , it should be pointed out that these power sources can only supply heat and can not act as a heat sink . as a consequence of this ,the average temperature of a sample is always above that of the heat bath in these traditional ac calorimetric methods .( this so - called dc shift is equal to the average dc power divided by the thermal conductance between the sample and the bath . ) in addition , it is not easy in traditional ac calorimetry to determine an absolute value of heat capacity due to inaccuracy in the determination of input power and heat leak . on the other hand , the pac which utilizes the peltier effect as an ac power source is free from these difficulties : first , it is capable of both heating and cooling .this means that there is no dc shift in temperature , and lack of the dc shift in turn has an important implication on the working frequency range of the ac calorimeter .( see below . )second , the power generated at a thermocouple junction can be measured with high accuracy .third , the mass of the thermocouple junction attached to the sample is entirely negligible in most situations .all these factors make the pac superior to previous methods in that the experimental setup is simple , it directly yields absolute values of heat capacity of sub - milligram samples , and it may be used as a dynamic calorimeter .the implementation of the principle described in the previous section is rather straightforward ; the schematic diagram and the photograph of the peltier ac calorimeter we constructed is shown in fig . 1 .we made thermocouple junctions by spot - welding with chromel and constantan wires of 25 ( or 12 ) in diameter .a couple of thermocouple junctions ( tc1 ) were connected via copper wires to a function generator supplying an ac electric current , which was measured by a digital ammeter with 1 na sensitivity . 
using a very small amount of ge 7031 varnish for electrical insulation and good thermal contacts ,one of the junctions was attached to one side of a sample ( typically of linear size less than 1 mm ) and the other to a copper block ( heat bath ) .the heat bath was then attached to a closed - cycle he refrigerator and its temperature was controlled within 2 mk stability in the range of 15420 k. a digital voltmeter was used to read the voltage difference across another thermocouple junction ( tc2 ) attached to the other side of the sample . from the voltage readings , we could measure the sample temperature with the sensitivity of 1.67 mk at 15 k and 0.14 mk at 420 k. since the mass of the thermocouple junctions and the varnish is completely negligible compared to the sample mass even for sub - milligram samples , the amplitude of the temperature oscillation at the imposed frequency ( measured by tc2 ) can be written as where is the sample heat capacity , the external or sample - to - bath relaxation time , the internal diffusion time in the sample , the thermal conductance of the link between the sample and the bath , and the thermal conductance of the sample . since only the thin thermocouple wires of diameter 12 or 25 m provide paths for heat conduction from the sample to the heat bath , is extremely small and negligible compared to . and the third term in the parenthesis of eq .( [ eq_accal ] ) can be neglected .thus , if one can select the frequency range of , heat capacity may be directly obtained from .in addition to providing a means for measuring the absolute value of heat capacity of minute solid samples , the pac possesses a distinct ability to function as a dynamic calorimeter .this capability of the pac stems from the fact that is extremely small as noted above ( heat conduction paths are provided by microns thick thermocouple wires only . ) and thus is exceedingly large .large then allows a wide frequency range where the relationship holds .this situation is contrasted to that of traditional ac calorimeters where one is required to have a reasonable size of , because otherwise the dc shift of the sample due to a dc power would become prohibitively large .( remember that the dc shift is given by the dc power divided by . )this size requirement of then places a limit to the working frequency range of the ac calorimeter by decreasing .note that the origin of this limitation of the traditional ac calorimeter goes to the fact that the energy source is only capable of heating , but not cooling .on the other hand , the energy source of the pac is able to both heat and cool , the average temperature of the sample is the same as the bath temperature , and the pac can afford to have exceedingly small and long .the performance of the pac was tested with small pieces of synthetic sapphire ( - ) , the standard material designated by nist . we first examined the induced temperature oscillation in tc2 , in response to an oscillating electric current in tc1 , at 30 k , 150 k , and 320 k with a test sample of mass 0.54 mg and dimension 1.5.3 mm .the frequency and amplitude of the electric current were 0.25 hz and 0.4 ma , respectively . from fig .2(a ) and ( b ) , it is seen that nice temperature oscillations at the applied frequency are obtained at 150 k and 320 k. however , fig .2(c ) reveals that a significant amount of the second harmonic appears at 30 k. it is readily clear that this second harmonic originates from joule heating in the thermocouple wires . 
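Since the working equation (eq. [eq_accal]) is not reproduced in the text above, the following sketch assumes the standard plateau relation T_ac ≈ P_0/(omega C), valid for 1/tau_e << omega << 1/tau_i, together with the Peltier power amplitude P_0 = Pi I_0 = S T I_0. The thermopower of a chromel-constantan junction and the specific heat of sapphire used below are approximate literature values, inserted only to give orders of magnitude for the quoted operating conditions.

```python
import numpy as np

# Operating conditions quoted in the text; material constants are approximate.
S_typeE = 60e-6      # V/K, chromel-constantan thermopower near room temperature
T_bath  = 320.0      # K
I0      = 0.4e-3     # A, amplitude of the applied ac current
f       = 0.25       # Hz, drive frequency
omega   = 2 * np.pi * f

# Peltier power oscillation at the fundamental: P0 = Pi * I0 with Pi = S * T.
P0 = S_typeE * T_bath * I0
print(f"ac power amplitude P0 ~ {P0 * 1e6:.1f} microwatt")

# Assumed plateau relation: T_ac ~ P0 / (omega * C).
m_sample = 0.54e-3            # g, test sample mass
c_sapphire = 0.80             # J/(g K), approximate specific heat of sapphire at 320 K
C = m_sample * c_sapphire     # J/K
T_ac = P0 / (omega * C)
print(f"expected T_ac ~ {T_ac * 1e3:.1f} mK for C ~ {C * 1e3:.2f} mJ/K")

# Inverting the same relation, C = P0 / (omega * T_ac) gives the absolute heat
# capacity directly from the measured oscillation amplitude, with no background
# subtraction.
```

In practice the measured trace also contains a second harmonic arising from Joule heating, which is separated out by the fitting procedure described next.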
to precisely determine which will yield heat capacity ,the raw data were fitted to a sinusoidal function containing the fundamental and second harmonic terms .2(d ) shows the power amplitude at the fundamental and second harmonics as a function of temperature .the power amplitude at the fundamental were converted from the measured using the table for . the joule heating part was obtained from the fitting procedure . above 50 k, the joule heating effect is completely negligible ; however , there appears an increasing amount of the second harmonic as temperature decreases below 50 k. this is probably caused by the reduction in thermal resistance of the thermocouple wires at low temperatures , and thus part of the heat generated along the wires flows back toward the sample .however , the existence of the second harmonic does not cause too much problem in the heat capacity measurements , since the governing equation for the present problem is linear and therefore one needs only to measure the signal at the fundamental frequency .nevertheless , it is desirable to reduce the joule heating as much as possible to attain the sensitivity . the dynamic characteristics of the pac was then checked by measuring the frequency dependence of for the test sample ( 0.54 mg ) at a fixed amplitude of the applied current .3 is the plot of the results obtained at three temperatures .it is seen from the figure that is proportional to in the whole measured frequency range at high temperatures ( 150 k and 320 k ) , and the figure clearly illustrates that the pac is indeed a dynamic calorimeter . the data obtained at 30 k , however , shows a large deviation from the behavior at frequencies below 0.13 hz .this is caused by the fact that the smallness of is compensated by a rapid decrease in heat capacity at low temperatures , and the external relaxation time becomes short enough to be comparable to the oscillation period at 0.13 hz .this compensation effect is unavoidable and the dynamic capability of the pac is of limited use below roughly liquid nitrogen temperature .it may be also noted that the deviation from the behavior is also expected at high frequencies , since our power source is an extremely local one and the internal diffusion time will interfere above a certain frequency .this high cutoff should be size - dependent , and present no problem for small samples . in order to ascertain the capability of the pac as a microcalorimeter, we carried out the heat capacity measurements for two test samples of - with mass 0.54 mg and 2.25 mg .the measuring frequency was set at 0.25 hz and the measured temperature range was from 15 k to 420 k. it is stressed that background subtraction , required in most calorimetric methods , is not necessary for the pac .4 ( a ) is the plot of the heat capacity data for two sapphire samples .the two sets of data coincide very well and display excellent reproducibility of the pac .also plotted in the figure is the reference data ( ) of the same material . the agreement is again excellent in the whole temperature range from 15 k to 420 k. in order to estimate the accuracy and precision of the pac , we plotted the residual heat capacity values , _i.e. _ , ( ) in fig .4(b ) . from the figure, we estimate the absolute accuracy for heat capacity of a sub - milligram sample to be % for the temperature range of 30150 k and % for 150420k . 
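The fit to a fundamental plus a second harmonic described at the beginning of this passage amounts to ordinary linear least squares on sine and cosine basis functions. The sketch below applies it to a synthetic trace, since the actual raw data and fitting routine of the paper are not available here; the amplitudes and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
f0 = 0.25                              # Hz, drive frequency
t = np.arange(0.0, 40.0, 0.05)         # s, synthetic sampling grid

# Synthetic trace: fundamental (Peltier) + second harmonic (Joule) + dc + noise.
y = (11.3e-3 * np.sin(2 * np.pi * f0 * t + 0.3)
     + 2.0e-3 * np.sin(4 * np.pi * f0 * t + 1.1)
     + 1.0e-3
     + 0.5e-3 * rng.standard_normal(t.size))

# Linear least squares on the basis {1, sin/cos at f0, sin/cos at 2*f0}.
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t),
                     np.sin(4 * np.pi * f0 * t), np.cos(4 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

amp_fund = np.hypot(coef[1], coef[2])   # amplitude at the fundamental
amp_2nd  = np.hypot(coef[3], coef[4])   # amplitude at the second harmonic
print(f"T_ac (fundamental) ~ {amp_fund * 1e3:.2f} mK, 2nd harmonic ~ {amp_2nd * 1e3:.2f} mK")
```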
at temperatures below 30 k ,the absolute accuracy worsens due to very small values of heat capacity of - and becomes even more than 10% .however , the precision is better than 0.5% in the whole temperature range of 15420 k.having demonstrated the capability of the pac as a microcaloriemter and dynamic calorimeter , we briefly outline the possible extensions of the pac , which are currently under development .first of all , it should be noted that the pac would function as a microcalorimeter even at helium temperatures if cryogenic thermocouple wires of au - fe or cu - fe are used .this replacement of thermocouples would have an additional effect of suppressing unwanted joule heating .a more novel extension of the pac would be the peltier thermal microscope , which would enable to measure local thermophysical properties of matter at submicron length scales .here it is proposed that the tip for a atomic force microscope is replaced by a thermocouple tip .an important feature of our proposal is that the thermocouple tip here is not just a temperature sensor , but it plays dual roles of a heater and sensor .for this purpose , we have shown that a single junction can indeed be used as both a heat source and sensor simultaneously . the successful development of the peltier thermal microscope would be an exciting and important event for this so - called nano - age when the submicron local thermophysical properties are in great demand .a. einstein : ann .physik * 22 * , 180 ( 1907 ) .y. h. jeong : thermochimica acta * 304/305 * , 67 ( 1997 ) .w. nernst , ann .physik * 36 * , 395 ( 1911 ) .p. f. sullivan and g. seidel , phys. rev . * 173 * , 679 ( 1968 ) .h. b. callen : _ thermodynamics and an introduction to thermostatistics _ , john wiley & sons ( 1985 ). i. hatta , pure & appl . chem . * 64 * , 79 ( 1992 ) .n. overend , m. a. howson , i. d. lawrie , s. abell , p. j. hirst , c. changkang , s. chowdhury , j. w. hodby , s. e. inderhees , and m. b. salamon , phys .b * 54 * , 9499 ( 1996 ) . k. f. sterrett , d. h. blackburn , a. b. bestul , s. s. chang , and j. horman , j. res .* 69c * , 19 ( 1965 ) ; s. s. chang , proc .seventh symp ., a. cezairliyan ( ed . ) 75 , asme ( 1977 ) . 
figure 1 : ( a ) schematic of the peltier ac calorimeter . chromel-constantan thermocouple wires of diameter 25 or 12 micrometers are used ; a copper block plays the role of heat bath . a function generator applies an oscillating current to the ch-cn thermocouples ( tc1 ) in contact with the sample , and a digital voltmeter measures the ensuing voltage oscillation from another thermocouple ( tc2 ) attached to the sample . ( b ) photograph of the peltier ac calorimeter ; the linear dimension of the sample is approximately 1 mm and the diameter of the thermocouple wires is 25 micrometers .
figure 2 : temperature oscillations of the test sample ; the broken line in ( b ) shows the electric-current oscillation , and the solid lines indicate the fitting results with the fundamental and second harmonics together with the dc component . ( d ) amplitude of the power oscillations at the fundamental ( peltier ) and second ( joule ) harmonics .
figure 3 : temperature-oscillation amplitude plotted as a function of frequency . the thick solid line represents the expected frequency dependence , which is well obeyed over the whole frequency range at high temperatures , showing that the pac can function as a dynamic calorimeter ; the arrow indicates the low cutoff frequency below which deviations appear at low temperatures .
a new ac calorimeter , utilizing the peltier effect of a thermocouple junction as an ac power source , is described . this peltier ac calorimeter makes it possible to measure the absolute value of the heat capacity of small solid samples of sub-milligram mass . the calorimeter can also be used as a dynamic calorimeter , with a dynamic range of several decades at low frequencies .
principal component analysis ( pca ) is one of the oldest and most fundamental techniques in data analysis , enjoying ubiquitous applications in modern science and engineering .given a data matrix of data points of dimension , pca gives a closed form solution to the problem of fitting , in the euclidean sense , a -dimensional linear subspace to the columns of . even though the optimization problem associated with pca is non - convex ,it does admit a simple solution by means of the singular value decomposition ( svd ) of .in fact , the -dimensional subspace of that is closest to the column span of is precisely the subspace spanned by the first left singular vectors of .using as a model for the data is meaningful when the data are known to have an approximately linear structure of underlying dimension , i.e. they lie close to a -dimensional subspace . in practice ,the principal components of are known to be well - behaved under mild levels of noise , i.e. , the angle between and is relatively small and more importantly is optimal when the noise is gaussian .however , in the presence of even a few outliers in , i.e. , points whose angle from the underlying ground truth subspace is large , the angle between and its estimate will in general be large .this is to be expected since , by definition , the principal components are orthogonal directions of maximal correlation with _ all _ the points of .this phenomenon , together with the fact that outliers are almost always present in real datasets , has given rise to the important problem of outlier detection in pca .traditional outlier detection approaches come from robust statistics and include _ influence - based detection _ , _ multivariate trimming _ , -estimators _ _ , _ iteratively _ _ weighted recursive least squares _ and _ random sampling consensus _ ( ransac ) .these methods are usually based on non - convex optimization problems , admit limited theoretical guarantees and have high computational complexity ; for example , in the case of ransac many trials are required .recently , two attractive methods have appeared with tight connections to _ compressed sensing _ and _ low - rank representation _ .both of these methods are based on convex optimization problems and admit theoretical guarantees and efficient implementations .remarkably , the self - expressiveness method of does not require an upper bound on the number of outliers as the method of does .however , they are both guaranteed to succeed only in the low - rank regime : the dimension of the underlying subspace associated to the inliers should be small compared to the ambient dimension . in this paperwe adopt a _ dual _ approach to the problem of robust pca in the presence of outliers , which allows us to transcend the low - rank regime of modern methods such as .the key idea of our approach comes from the fact that , in the absence of noise , the inliers lie inside any hyperplane that contains the underlying linear subspace .this suggests that , instead of attempting to fit directly a low - dimensional linear subspace to the entire data set , as done e.g. in , we can search for a hyperplane that contains as many points of the dataset as possible .when the inliers are in general position inside the subspace , and the outliers are in general position outside the subspace , this hyperplane will ideally contain the entire set of inliers together with possibly a few outliers . 
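A small numerical illustration of the idea just described, on toy data invented for the purpose: counting how many points lie on a candidate hyperplane singles out the normal of the inlier subspace, whereas the smallest principal direction of the contaminated data generally passes through none of the points.

```python
import numpy as np

rng = np.random.default_rng(0)
# Inliers spanning the plane z = 0 in R^3, plus outliers in general position.
inliers  = np.vstack([rng.standard_normal((2, 100)), np.zeros((1, 100))])
outliers = rng.standard_normal((3, 20))
X = np.hstack([inliers, outliers])

def points_on_hyperplane(b, X, tol=1e-9):
    """Number of columns of X lying (numerically) on the hyperplane with normal b."""
    return int(np.sum(np.abs(b @ X) < tol))

b_true = np.array([0.0, 0.0, 1.0])           # normal to the inlier plane
b_pca  = np.linalg.svd(X)[0][:, -1]          # smallest principal direction of X
print(points_on_hyperplane(b_true, X))       # 100: all inliers, no outliers
print(points_on_hyperplane(b_pca,  X))       # typically 0: the PCA normal misses them
```

Exact counting of this kind is combinatorial and brittle, which is what motivates the relaxations developed below.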
after removing the points that do not lie in that hyperplane, the robust pca problem is reduced to one with a potentially much smaller outlier percentage than in the original dataset .in fact , the number of outliers in the new dataset will be at most , an upper bound that can be used to dramatically facilitate the outlier detection process using existing methods .we think of the direction of the normal to the hyperplane as a _ dual principal component _ of , as ideally it is an element of .naturally , one can continue by finding a second dual principal component by searching for a hyperplane , with , that contains as many points as possible from , and so on , leading to a _ dual principal component analysis _ of .we pose the problem of searching for such hyperplanes as an cosparsity - type problem , which we relax to a non - convex problem on the sphere .we provide theoretical guarantees under which every global solution of that problem is a dual principal component .more importantly , we relax this non - convex optimization problem to a sequence of linear programming problems , which , after a finite number of steps , yields a dual principal component . experiments on synthetic data demonstrate that the proposed method is able to handle more outliers and higher dimensional subspaces than the state - of - the - art methods .we begin by establishing our data model in section [ subsection : datamodel ] , then we formulate our dpcp problem conceptually and computationally in sections [ subsection : conceptualformulation ] and [ subsection : computationalformulation ] , respectively .we employ a deterministic noise - free data model , under which the inliers consist of points \in \re^{d \times n} ] that lie on .the dataset , that we assume given , is \boldsymbol{\gamma } \in \re^{d \times l} ] and \boldsymbol{\gamma} ] and integer , define to be the maximum circumradius among all polytopes , where are distinct integers in $ ] , and indicates the convex hull operator .[ thm : discretenonconvex ] suppose that the quantity satisfies for all positive integers such that .then any global solution to will be orthogonal to . towards interpreting this result ,consider first the asymptotic case where we allow and to go to infinity , while keeping the ratio constant. under point set uniformity , i.e. 
under the hypothesis that and , we will have that and , in which case is satisfied .this suggests the interesting fact that when the number of inliers is a linear function of the number of outliers , then will always give a normal to the inliers even for arbitrarily large number of outliers and irrespectively of the subspace dimension .along the same lines , for a given and under the point set uniformity hypothesis , we can always increase the number of inliers and outliers ( thus decreasing and ) , while keeping constant , until is satisfed , once again indicating that is possible to yield a normal to the space of inliers irrespectively of their intrinsic dimension .+ + + + + + in this section we consider the sequence of convex relaxations ; in particular , there are two important issues to be addressed .first , note that relaxing the constraint in with a linear constraint as in , has already been found to be of limited theoretical guarantees .so it is natural to ask whether the idea of considering a sequence of such relaxations has an intrinsic merit or not , irrespectively of the data distribution .for example , if the data is _ perfectly well distributed _ , yet the sequence does not yield vectors orthogonal to the inlier space , then we will know that a - priori the method is limited .fortunately , this is not the case : when the data is perfectly well distributed , i.e. when we restrict our attention to the continuous analog of , given by , \label{eq : convexrelaxationscontinuousraw}\end{aligned}\ ] ] then the sequence achieves the property of interest : [ thm : convexrelaxationscontinuous ] consider the sequence of vectors generated by recursion , where is arbitrary .let be the corresponding sequence of angles from .then , provided that .this result suggests that relaxing with the sequence is intrinsically the right idea .the second issue is how the distribution of the data affects the ability of this sequence of relaxations to give vectors orthogonal to .the answer is given by theorem [ thm : discreteconvexrelaxations ] , which says that when the angle between and is large enough and the data points are well distributed , the sequence will consist of vectors orthogonal to the inlier space , for sufficiently large indices .[ thm : discreteconvexrelaxations ] let be the angle between and .suppose that condition on the outlier ratio holds true and consider the vector sequence generated by recursion .then after a finite number of terms , for some , every term of will be orthogonal to , providing that first note that if is true , then the expression of always defines an angle between and .second , theorem [ thm : discreteconvexrelaxations ] can be interpreted using the same asymptotic arguments as theorem [ thm : discretenonconvex ] . in particular , notice that the lower bound on the angle tends to zero as go to infinity with constant .note also that this result does not show convergence of the sequence : it only shows that this sequence will eventually satisfy the desired property of being orthogonal to the space of inliers ; a convergence result remains yet to be established .so far we have established a mechanism of obtaining an element of , where : run the sequence of linear programs until the function converges within some small ; then assuming no pathological point set distributions , any vector can be taken as .there are two possibilities : either is a hyperplane of dimension or . 
in the first case , is the unique up to scale element of , which proves that in this case the sequence of in fact converges . in such a case , we can identify our subspace model with the hyperplane defined by the normal .next , if , we can proceed to find a second element of that is orthogonal to and so on .this naturally leads to the _ dual principal component pursuit _ shown in algorithm [ alg : dpcp ] . ; ; ; ; [ dpcp : update ] ; ; ; ; ; a few comments are in order . in algorithm[ alg : dpcp ] , is an estimate for the codimension of the inlier subspace .if is rather large , then in the computation of each , it is more efficient to reduce the coordinate representation of the data by replacing with , where is the orthogonal projection onto , and solve the linear program in step [ dpcp : update ] in the projected space . notice further how the algorithm initializes : this is effectively the right singular vector of , that corresponds to the smallest singular value . as it will be demonstrated in section [ section : experiments ] , this choice has the effect that the angle of from the inlier subspace is typically large , in particular , larger than the smallest initial angle required for the success of the principal component pursuit of .in this section we investigate experimentally the proposed dpcp alg .[ alg : dpcp ] . using both synthetic ( subsection [ subsection : synthetic ] ) and real data ( subsection [ subsection : real ] ) , we compare dpcp to the three methods se , l21 and ransac discussed in section [ subsection : outlierpca ] as well as to the method of eq . discussed in section [ subsection : dl ] , which we will refer to as svs ( _ sparsest vector in a subspace _ ) .the parameters of the methods are set to fixed values , chosen such that the methods work well across all tested dimension and outlier configurations .in particular , we use and ; see and for details . regarding dpcp , we fix , and unless otherwise noted , we set equal to the true codimension of the subspace .to begin with , we evaluate the performance of dpcp in the absence of noise , for various subspace dimensions and outlier percentages .we fix the ambient dimension , sample inliers uniformly at random from and outliers uniformly at random from .we are interested in examining the ability of dpcp to recover a single normal vector ( ) to the subspace , by means of recursion .the results are shown in fig .[ figure : dpcp ] for independent trials .[ figure : dpcp_theory ] shows whether the theoretical conditions of are satisfied or not . in checking these conditions , we estimate the abstract quantities by monte - carlo simulation .whenever these conditions are satisfied , we choose in a controlled fashion , so that its angle from the subspace is larger than the minimal angle of , and then we run dpcp ; if the conditions are not true , we do not run dpcp and report a .fig [ figure : dpcp_controlled ] shows the angle of from the subspace .we see that whenever is true , dpcp returns a normal after only iterations .fig [ figure : dpcp_random ] shows that if we initialize randomly , then its angle from the subspace becomes less than the minimal angle , as increases .even so , fig .[ figure : dpcp_randomsuccess ] shows that dpcp still yields a numerical normal , except for the regime where both and are very high . notice that this is roughly the regime where we have no theoretical guarantees in fig .[ figure : dpcp_theory ] . 
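To make the preceding description concrete, here is a minimal sketch of the linear-programming recursion and of a wrapper that extracts several dual principal components in the spirit of the algorithm above. It is written in Python with SciPy's generic LP solver and assumes the recursion has the form n_{k+1} = argmin_b ||X^T b||_1 subject to b^T n_k = 1 (followed by normalization), uses the least-significant singular direction of the projected data as initialization, and projects away previously found normals; the paper's exact formulation, stopping rule, and implementation details (e.g. the reduced coordinate representation) may differ.

```python
import numpy as np
from scipy.optimize import linprog

def dpcp_lp_step(X, n_prev):
    """one relaxed problem: minimize ||X^T b||_1 subject to n_prev^T b = 1,
       written as an LP with slack variables t >= |X^T b|."""
    D, L = X.shape
    c = np.concatenate([np.zeros(D), np.ones(L)])         # objective: sum of slacks
    A_ub = np.block([[ X.T, -np.eye(L)],                   #  X^T b - t <= 0
                     [-X.T, -np.eye(L)]])                  # -X^T b - t <= 0
    b_ub = np.zeros(2 * L)
    A_eq = np.concatenate([n_prev, np.zeros(L)])[None, :]
    sol = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * D + [(0, None)] * L)
    b = sol.x[:D]
    return b / np.linalg.norm(b)

def dpcp_normal(X, n0, max_iter=20, tol=1e-6):
    """iterate the linear programs until the normal direction stabilizes."""
    n = n0 / np.linalg.norm(n0)
    for _ in range(max_iter):
        n_new = dpcp_lp_step(X, n)
        if np.linalg.norm(n_new - n) < tol:
            break
        n = n_new
    return n_new

def dpcp(X, c_dim, max_iter=20):
    """estimate c_dim dual principal components (a basis of the orthogonal
       complement of the inlier subspace), one normal at a time."""
    D = X.shape[0]
    B = np.zeros((D, 0))
    for _ in range(c_dim):
        P = np.eye(D) - B @ B.T                             # project away previously found normals
        Xp = P @ X
        n0 = np.linalg.svd(Xp, full_matrices=True)[0][:, -1]  # least-significant singular direction
        n = P @ dpcp_normal(Xp, n0, max_iter=max_iter)
        B = np.hstack([B, (n / np.linalg.norm(n))[:, None]])
    return B

def dual_distances(X, B):
    """distance of every data point to the estimated inlier subspace."""
    return np.linalg.norm(B.T @ X, axis=0)
```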
fig .[ figure : dpcp_smart ] shows that if we initialize as the right singular vector of corresponding to the smallest singular value , then is true for most cases , and the corresponding performance of dpcp in fig .[ figure : dpcp_smartsuccess ] improves further . finally , fig .[ figure : dpcp_angle ] plots .we see that for very low this angle is almost zero , i.e. dpcp does not depend on the initialization , even for large . as increases though , so does , and in the extreme case of the upper rightmost regime , where and are very high , is close to , indicating that dpcp will succeed only if is very close to .next , for the same range of and , and still in the absence of noise , we examine the potential of each of se , l21 , svs , ransac and dpcp to perfectly distinguish outliers from inliers .note that each of these methods returns a _ signal _ , which can be thresholded for the purpose of declaring outliers and inliers . for se, is the -norm of the columns of the coefficient matrix , while for l21 it is the -norm of the columns of . since ransac ,svs and dpcp directly return subspace models , for these methods is simply the distances of all points to the estimated subspace model . in fig .[ figure : separation ] we depict success versus failure , where success is interpreted as the existence of a threshold on that perfectly separates outliers and inliers .as expected , the low - rank methods se and l21 can not cope with large dimensions even in the presence of outliers . as expected , ransac is very successful irrespectively of dimension , when is small , since the probability of sampling outlier - free subsets is high .but as soon as increases , its performance drops dramatically . moving on , svs is the worst performing method , which we attribute to its approximate nature .remarkably , dpcp performs perfectly irrespectively of dimension for up to outliers .note that we use the true codimension of the subspace as input to dpcp ; this is to ascertain the true limits of the method . certainly , in practice only an estimate for can be used . as we have observed from experiments ,the performance of dpcp typically does not change much if the codimension is underestimated ; however performance can deteriorate significantly if the true is overestimated .moreover , we note that while se , l21 and svs are extremely fast , as they rely on admm implementations , dpcp is much slower , even if we use an optimizer such as gurobi .speeding up dpcp is the subject of current research . finally , in fig .[ figure : rocsynthetic ] we show roc curves associated with the thresholding of for varying levels of noise and outliers .when is small , fig .[ figure : roc_synthetic_ld ] shows that se , l21 and dpcp are equally robust giving perfect separation between outliers and inliers , while svs and ransac perform poorly .interestingly , for large ( fig .[ figure : roc_synthetic_ld ] ) , dpcp gives considerably less false positives ( fp ) than all other methods across all cases , indicating once again its unique property of being able to handle large subspace dimensions in the presence of many outliers . in this subsectionwe consider an outlier detection scenario in pca using real images .the inliers are taken to be all face images of a single individual from the extended yale b dataset , while the outliers are randomly chosen from caltech101 .all images are cropped to size as was done in . 
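The separation criterion and the ROC curves used in these experiments are straightforward to compute from the per-point signal: assuming the convention that larger values of the signal indicate outliers (which holds for the distance-based signals; for other methods the signal may need to be negated), perfect separation means the largest inlier value lies below the smallest outlier value, and an ROC curve is traced by sweeping a threshold. A minimal sketch with hypothetical variable names:

```python
import numpy as np

def perfectly_separable(alpha, is_outlier):
    """True if some threshold on the signal alpha splits outliers from inliers exactly."""
    return alpha[~is_outlier].max() < alpha[is_outlier].min()

def roc_curve(alpha, is_outlier):
    """(FPR, TPR) pairs obtained by declaring 'outlier' whenever alpha exceeds a threshold."""
    thresholds = np.sort(np.unique(alpha))[::-1]
    P, N = is_outlier.sum(), (~is_outlier).sum()
    tpr = np.array([((alpha >= t) &  is_outlier).sum() / P for t in thresholds])
    fpr = np.array([((alpha >= t) & ~is_outlier).sum() / N for t in thresholds])
    return fpr, tpr
```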
for a fair comparison , we run se on the raw -dimensional data , while all other methods use projected data onto dimension . since it is known that face images of a single individual under different lighting conditions lie close to an approximately -dimensional subspace , we choose the codimension parameter of dpcp to be . we perform independent trials for each individual across all individuals for a different number of outliers and report the ensemble roc curves in fig . [ figure : rocfaces ] . as is evident , dpcp is the most robust among all methods . we presented _ dual principal component pursuit ( dpcp ) _ , a novel outlier detection method , which is based on solving an problem on the sphere via linear programs over a sequence of tangent spaces of the sphere . dpcp is able to handle subspaces of as low codimension as in the presence of as many outliers as . future research will be concerned with speeding up the method as well as extending it to multiple subspaces and other types of data corruptions , such as missing entries and entry - wise errors . this work was supported by grant nsf 1447822 .
we consider the problem of outlier rejection in single subspace learning . classical approaches work directly with a low - dimensional representation of the subspace . our approach works with a dual representation of the subspace and hence aims to find its orthogonal complement . we pose this problem as an -minimization problem on the sphere and show that , under certain conditions on the distribution of the data , any global minimizer of this non - convex problem gives a vector orthogonal to the subspace . moreover , we show that such a vector can still be found by relaxing the non - convex problem with a sequence of linear programs . experiments on synthetic and real data show that the proposed approach , which we call dual principal component pursuit ( dpcp ) , outperforms state - of - the - art methods , especially in the case of high - dimensional subspaces .
expert advice has become a well - established paradigm of machine learning in the last decade , in particular for prediction .it is very appealing from a theoretical point of view , as performance guarantees usually hold in the worst case , without any ( statistical ) assumption on the data .such assumptions are generally required for other statistical learning methods , often however not resulting in stronger guarantees . using expert advice in the standard wayseems a rather bad idea in some cases where the decisions of the learner or master algorithm influence the behavior of the environment or adversary .one example is the repeated prisoner s dilemma when the opponent plays tit for tat " ( see section [ sec : active ] ) .this was noted and resolved by , who introduced a strategic expert algorithm " for so - called reactive environments .their algorithm works with a finite class of experts and attains asymptotically optimal behavior. no convergence speed is asserted , and the analysis is quite different from that of standard experts algorithms . in this paper , we show how the more general task with a countably infinite expert class can be accomplished , building on standard experts algorithms , and simultaneously also bounding the convergence rate ( , which can be actually improved to ) . to this aim , we will combine techniques from and obtain a master algorithm which performs well on loss functions that may _ increase in time_. then this is applied to ( possibly ) reactive problems by yielding the control to the selected expert for an increasing period of time steps . using a universal expert class defined by the countable set of all programs on some fixed universal turing machine , we obtain an algorithm which is in a sense asymptotically optimal with respect to _ any _ computable strategy .an easy additional construction guarantees that our algorithm is computable , in contrast to other universal approaches which are non - computable . to our knowledge, we also propose the first algorithm for non - stochastic bandit problems with countably many arms .the paper is structured as follows .section [ sec : algorithm ] introduces the problem setup , the notation , and the algorithm . in sections [ sec :master ] , we give the ( worst - case ) analysis of the master algorithm . the implications to active experts problems and a universal master algorithms are given in section [ sec : active ] .we discuss our results in section [ sec : discussion ] .we are acting in an online decision problem . we " is here an abbreviation for the master algorithm which is to be designed .an online decision problem " is to be understood in a very general sense , it is just a sequence of decisions each of which results in some loss. this could be e.g. a prediction task , a repeated game , etc . in each round , that is at each time step , we have access to the recommendations of countably infinitely many experts " or strategies .( for simplicity , we restrict our notation to a countably infinite expert class , all results also hold for finite classes . ) we do not specify what exactly a recommendation " is we just follow the advice of one expert . _ before _ we reveal our move , the adversary has to assign losses to _ all _ experts .there is an upper bound on the maximum loss the adversary may use .this quantity may depend on and is not controlled by the adversary . 
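As an aside, the tit-for-tat example invoked above is easy to simulate and illustrates why reactive environments are problematic for standard experts algorithms: against tit for tat, an expert that always cooperates eventually secures the mutual-cooperation loss, whereas constantly probing the 'defect' expert keeps triggering retaliation. The loss matrix and horizon below are standard illustrative choices, not taken from the paper.

```python
import numpy as np

# prisoner's dilemma losses for us (0 = defect, 1 = cooperate); smaller is better
LOSS = np.array([[3.0, 0.0],     # we defect:    (both defect), (opponent cooperates)
                 [4.0, 1.0]])    # we cooperate: (opponent defects), (both cooperate)

def play(strategy, T=1000):
    """average per-round loss of a fixed strategy against tit for tat."""
    opp, total = 1, 0.0                      # tit for tat starts by cooperating
    for _ in range(T):
        a = strategy(opp)
        total += LOSS[a, opp]
        opp = a                              # tit for tat repeats our last move
    return total / T

print("always cooperate:", play(lambda opp: 1))   # ~1.0 (mutual cooperation)
print("always defect   :", play(lambda opp: 0))   # ~3.0 (mutual defection after round one)
```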
after the move , only the loss of the selected expert is revealed .our goal is to perform nearly as well as the best available strategy ( expert ) in terms of cumulative loss , after any number of time steps which is not known in advance .the difference between our loss and the loss of some expert is also termed _ regret_. we consider the general case of an _ adaptive _ adversary , which may assign losses depending on our past decisions . if there are only finitely many experts or strategies , then it is common to give no prior preferences to any of them .formally , this is realized by defining uniform _ prior weights _ for each expert .this is not possible for countably infinite expert classes , as there is no uniform distribution on the natural numbers . in this case , we need some non - uniform prior and require for all experts and .we also define the complexity of expert as .this quantity is important since in the full observation game ( i.e. after our decision we get to know the losses of _ all _ experts ) , the regret can usually be bounded by some function of the best expert s complexity . our algorithm follow or explore " ( , specified in fig .[ fig : foe ] ) builds on mcmahan and blum s online geometric optimization algorithm .it is a bandit version of a follow the perturbed leader " experts algorithm .this approach to online prediction and playing repeated games has been pioneered by hannan . for the full observation game and uniform prior, gave a very elegant analysis which is clearly different from the standard analysis of exponential weighting schemes .it has one advantage over other aggregating algorithms such as exponential weighting schemes : the analysis is not complicated if the learning rate is dynamic rather than fixed in advance .a dynamic learning rate is necessary if there is no target time known in advance . for non - uniformprior , an analysis was given in .the following issues are important for s design . .since we are playing the bandit game ( as opposed to the full information game ) , we need to explore sufficiently . at each time step , we decide randomly according to some exploration rate whether to explore or not .if so , we would like to choose an expert according to the prior distribution .there is a caveat : in order to make the analysis go through , we have to assure that we are working with _ unbiased _ estimates of the losses .this is achieved by dividing the observed loss by the probability of choosing the expert . but this quantity could become arbitrarily large if we admit arbitrarily small weights .we address this problem by _ finitizing _ the expert pool at each time . for each expert , we define an _ entering time _ , that is , expert is active only for .we denote the set of active experts at time by . for exploration , the prioris then replaced by the finitized prior distribution , [ eq : unonu ] ( u_t = i)=w^i_t^i_j w^j_t^j .consequently , the maximum unbiasedly estimated instantaneous loss is ( note that the exploration probability also scales with the exploration rate ) [ eq : maxest ] b_t=. 
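A minimal sketch of the exploration branch just described, assuming the expert is drawn from the finitized prior over the currently active experts and that the observed loss is divided by the probability of having been observed (exploration rate times prior probability) in order to obtain an unbiased estimate; the bookkeeping of the actual algorithm (entering times, losses assigned to inactive experts) is omitted and the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def explore_step(active_weights, losses, gamma_t):
    """one exploration step over the currently active experts.
       active_weights : prior weights w^i of the active experts (unnormalized)
       losses         : instantaneous losses (only the chosen expert's loss is observed)
       gamma_t        : exploration rate at time t
       returns the chosen index and the vector of unbiased loss estimates."""
    p = active_weights / active_weights.sum()         # finitized prior distribution
    i = rng.choice(len(p), p=p)
    est = np.zeros(len(p))
    # the loss of expert i is observed with probability gamma_t * p[i];
    # dividing by that probability makes the estimate unbiased
    est[i] = losses[i] / (gamma_t * p[i])
    return i, est
```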
it is convenient for the analysis to assign estimated loss of to all currently inactive experts .observe finally that in this way , our master algorithm always deals with a finite expert class and is thus computable .( , specified in fig .[ fig : fpl ] ) is invoked if does not explore .just following the leader " ( the best expert so far ) may not be a good strategy .instead we subtract an exponentially distributed perturbation from the current score ( the complexity penalized past loss ) of the experts .an important detail of the subroutine is the _ learning rate _ , which should be adaptive if the total number of steps is not known in advance .please see e.g. for more details .also the variant of we use ( specified in fig . [ fig : fpl ] ) works on the finitized expert pool .note that each time randomness is used , it is assumed to be _ independent _ of the past randomness .performance is evaluated in terms of true or estimated cumulative loss , this is specified in the notation .e.g. for the true loss of up to and including time we write , while the estimated loss of and not including time is .the following analysis uses mcmahan and blum s trick in order to prove bounds against adaptive adversary . with a different argument, it is possible to circumvent lemma [ lemma : behbeh ] , thus achieving better bounds .this will be briefly discussed in the last section .let be some sequence of upper bounds on the instantaneous losses , be a sequence of exploration rates , and be a _ decreasing _ sequence of learning rates .the analysis proceeds according to the following diagram ( where is an informal abbreviation for the loss and always refers to cumulative loss , but sometimes additionally to instantaneous loss ) .[ eq : diagram ] ^ ^ each " means that we bound the quantity on the left by the quantity on the right plus some additive term .the first and the last expressions are the losses of the algorithm and the best expert , respectively . the intermediate quantities belong to different algorithms , namely , , and a third one called for infeasible " fpl . , as specified in fig .[ fig : ifpl ] , is the same as except that it has access to an oracle providing the current estimated loss vector ( hence infeasible ) . then it assigns scores of instead of .the randomization of and gives rise to two filtrations of -algebras .by we denote the -algebra generated by the s randomness up to time , meaning _ only _ the random variables .then is a filtration ( is the trivial -algebra ) .we may also write .similarly , is the -algebra generated by the s _ and _ s randomness up to time ( i.e. ). then clearly for each .the reader should think of the expectations in ( [ eq : diagram ] ) as of both ordinary and _ conditional _ expectations .conditional expectations are mostly with respect to s past randomness .these conditional expectations of some random variable are abbreviated by _t[x]:=. then ] for each and , with probability at least we have _ t=1^t_t_t + .the sequence of random variables ] and =\loss_t\foe ] holds .this follows immediately from the specification of .clearly , a corresponding assertion for the ordinary expectations holds by just taking expectations on both sides .this is the case for all subsequent lemmas , except for lemma [ lemma : behbeh ] .the next lemma relating and is technical but intuitively clear .it states that in expectation , the real loss suffered by is the same as the estimated loss .this is simply because the loss estimate is unbiased . 
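The FPL subroutine itself can be sketched as follows: each active expert's estimated cumulative loss is penalized by its complexity, perturbed by an independent exponential random variable, and the minimizer is followed. The scaling of the penalty and perturbation by the current learning rate follows the standard FPL formulation with non-uniform prior (the complexity k^i = -ln w^i is an assumption here) and may differ in minor details from the paper's specification.

```python
import numpy as np

def fpl_choice(est_cum_loss, complexity, eta_t, rng):
    """follow the perturbed leader over the active experts.
       est_cum_loss : estimated cumulative losses of the active experts
       complexity   : k^i = -ln(w^i), the complexity of each active expert
       eta_t        : current (decreasing) learning rate"""
    q = rng.exponential(scale=1.0, size=len(est_cum_loss))   # independent Exp(1) perturbations
    score = est_cum_loss + (complexity - q) / eta_t          # complexity-penalized, perturbed score
    return int(np.argmin(score))
```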
a combination of this and the previous lemma was shown in .note that is the loss estimated by , but for the expert chosen by .[ lemma : fplfpl ] ] be the probability distribution over actions which uses at time , depending on the past randomness .let be the finitized prior distribution ( [ eq : unonu ] ) at time . then_ t[_t ] ( 1-_t)0 + _ t _ i=1^f_t^i [ ( 1-u_t^i)0+u_t^i _ t^i|_r_t=1i_t = i ] _ i=1^f_t^i_t^i = _ t[_t ] , where is the estimated loss under the condition that decided to explore ( ) and chose action .the following lemma relates the losses of and .it is proven in and .we give the full proof , since it is the only step in the analysis where we have to be careful with the upper loss bound .let be the upper bound on the estimated loss ( [ eq : maxest ] ) .( we remark that also for _ weighted averaging forecasters _ , losses which grow sufficiently slowly do not cause any problem in the analysis . in this way , it is straightforward to modify the algorithm by auer et al . for reactive tasks with a finite expert class . )[ lemma : fplifpl ] ] assume that and depends monotonically on , i.e. if and only if .assume decreasing learning rate .for all and all , _t=1^t_t _ t^i+ .this is a modification of the corresponding proofs in and .we may fix the randomization and suppress it in the notation .then we only need to show [ eq : showtau ] _i1 \{^i+ } , where the expectation is with respect to s randomness .assume first that the adversary is oblivious .we define an algorithm as a variant of which samples only one perturbation vector in the beginning and uses this in each time step , i.e. .since the adversary is oblivious , is equivalent to in terms of expected performance .this is all we need to show ( [ eq : showtau ] ) .let and , then .recall .we argue by induction that for all , [ eq : ifplbehtau ] _ t=1^t _ t^a_t^i+ _t\{}. this clearly holds for . for the induction step , we have to show [ eq : showtau2 ] _ t^i+ _ t\{}+ _t+1^a & ^i^a_t+1 + _ t+1\{}+ _ t+1^i^a_t+1 + & = _ t+1_1:t+1^i+_ t+1\{}. the inequality is obvious if .otherwise , let .then _ t^i+ _ t\ { } ^j+ & = & _ t=1^t_t^j _ t=1^tb_t + & = & _ t=1^t_t^i_t+1^a ^i^a_t+1 + _ t+1\ { } shows ( [ eq : showtau2 ] ) . rearranging terms in ( [ eq : ifplbehtau ] ) , we see _ t=1^t _ t^a _ t^i + _t^i\ { } + _ t=1^t ( q - k)^i_t^a(- ) the assertion ( [ eq : showtau ] ) still for oblivious adversary and then follows by taking expectations and using [ eq : showtau3 ] _ t^i _ t\{^i+ - } & & _ i1\{^i+ } + [ eq : showtau4 ] _t=1^t ( q - k)^i_t^a(- ) & & _t\ { } . the second inequality of ( [ eq : showtau3 ] ) holds because depends monotonically on , and , and maximality of for . the second inequality of ( [ eq : showtau4 ] )can be proven by a simple application of the union bound , see e.g. ( * ? ? ?* lem.1 ) . sampling the perturbations independently is equivalent under expectation to sampling only once .so assume that are sampled independently , i.e. that is played against an oblivious adversary : ( [ eq : showtau ] ) remains valid .in the last step , we argue that then ( [ eq : showtau ] ) also holds for an _ adaptive _ adversary .this is true because the future actions of do not depend on its past actions , and therefore the adversary can not gain from deciding after having seen s decisions .this argument can be made formal , as shown in ( * ? ? ?* lemma 12 ) .( note the subtlety that the future actions of would depend on its past actions . 
) finally , we give a relation between the estimated and true losses ( adapted from ) .[ lemma : behbeh ] ] .then the master algorithm s instantaneous losses are bounded by .we denote the algorithm , which is completely specified in fig .[ fig : foetilde ] , by .then the following assertion is an easy consequence of the previous results .[ cor : active ] assume plays a repeated game with bounded instantaneous losses $ ] .choose , , and .then for all experts and all , _1:t & & _ 1:t^i+ o(()^22+k^i t^ ) 1-t^- + _ 1:t & & _1:t^i+o(()^22+k^i t^ ) .consequently , a.s .the rate of convergence is at least , and it can be improved to at the cost of a larger power of .this follows from changing the time scale from to in corollary [ cor : arbitrary ] : is of order .consequently , the regret bound is .broadly spoken , this means that performs asymptotically as well as the best expert .asymptotic performance guarantees for the strategic experts algorithm have been derived in .our results improve upon this by providing a rate of convergence. one can give further corollaries , e.g. in terms of flexibility as defined in .since we can handle countably infinite expert classes , we may specify a _ universal _ experts algorithm .to this aim , let expert be derived from the ( valid ) program of some fixed universal turing machine .the program can be well - defined , e.g. by representing programs as binary strings and lexicographically ordering them .before the expert is consulted , the relevant input is written to the input tape of the corresponding program .if the program halts , an appropriate part of the output is interpreted as the expert s recommendation . e.g. if the decision is binary , then the first bit suffices .( if the program does not halt , we may for well - definedness just fill its output tape with zeros . )each expert is assigned a prior weight by , where is the length of the corresponding program and we assume the program tape to be binary . this construction parallels the definition of solomonoff s _ universal prior _ .[ cor : universal ] if is used together with a universal expert class as specified above and the parameters are chosen as in corollary [ cor : active ] , then it performs asymptotically at least as well as _ any computable expert _ .the upper bound on the rate of convergence is exponential in the complexity and proportional to ( improvable to ) .the universal prior has been used to define a universal agent aixi in a quite different way . note that like the universal prior and the aixi agent , our universal experts algorithm is not computable , since we can not check if a the computation of an expert halts .on the other hand , if used with computable experts , the algorithm is computationally feasible ( at each time we need to consider only finitely many experts ) .moreover , it is easy to impose an additional constraint on the computation time of each expert and abort the expert s computation after operations on the turing machine .we may choose some ( possibly rapidly ) growing function , e.g. .the resulting master algorithm is fully computable and has small regret with respect to all resource bounded strategies .it is important to keep in mind that corollaries [ cor : active ] and [ cor : universal ] give assertions relative to the experts performance merely on the _ actual _ action - observation sequence . 
in other words ,if we wish to assess how well does , we have to evaluate the actual _ value _ of the best expert .note that the whole point of our increasing time construction is to cause this actual value to coincide with the value under _ ideal _ conditions . for passive tasks, this coincidence always holds with any experts algorithm . with , the actual and the ideal value of an expert coincide in many further situations , such as finitely controllable tasks " . by thiswe mean cases where the best expert can drive the environment into some optimal state in a fixed finite number of time steps .an instance is the prisoner s dilemma with tit - for - tat being the opponent .the following is an example for a formalization of this statement .suppose acts in a ( fully or partially observable ) markov decision process .let there be a computable strategy which is able to reach an ideal ( that is optimal w.r.t .reward ) state sequence in a fixed number of time steps .then performs asymptotically optimal .this statement may be generalized to cases where only a close to optimal state sequence is reached with high probability .however , we need assumptions on the closeness to optimality for a given target probability , which are compatible with the sampling behavior of . not all environments have this or similar nice properties . as mentioned above, any version of would not perform well in the heaven - hell " example .the following is a slightly more interesting variant of the heaven - hell task , where we might wish to learn optimal behavior , however will not .consider the heaven - hell example from the beginning of this section , but assume that if at time i am in hell and i pray " for consecutive time steps , i will get back into heaven . then it is not hard to see that s exploration is so dominant that almost surely , will eventually stay in hell .simulations with some matrix games show similar effects , depending on the opponent .we briefly discuss the repeated game of chicken " , the learner is the column player , the opponent s loss matrix is the transpose , choosing the fist column means to defect , the second to cooperate .hence , in the repeated game , it is socially optimal to take turns cooperating and defecting . ] . in this game, it is desirable for the learner to become the dominant defector " , i.e. to defect in the majority of the cases while the opponent cooperates. let s call an opponent primitive " if he agrees to cooperate after a fixed number of consecutive defecting moves of , and let s call him stubborn " if this number is high. then learns to be the dominant defector against any primitive opponent , however stubborn . on the other hand, if the opponent is some learning strategy which also tries to be the dominant defector and learns faster ( we conducted the experiment with aixi ) , then settles for cooperating , and the opponent will be the dominant defector .interestingly however , aixi would not learn to defect against a stubborn primitive opponent . under this point of view, it seems questionable that there is something like a universally optimal balance of exploration vs. 
exploitation in active learning at all .as mentioned in the beginning of section [ sec : master ] , the analysis we gave uses a trick from .such a trick seems necessary , as the basic fpl analysis only works for oblivious adversary .the simple argument from which we used in the last paragraph of the proof of lemma [ lemma : ifplbeh ] works only for full observation games ( note that considering the estimated losses , we were actually dealing with full observations there ) . in order to obtain a similar result in the partial observation case , we may argue as follows .we let the game proceed for time steps with independent randomization against an adaptive adversary .then we analyze s performance _ in retrospective_. in particular , we note that for the losses assigned by the adversary , s expected regret coincides with the regret of another , virtual algorithm , which uses ( in its fpl subroutine ) identical perturbations .performing the analysis for this virtual algorithm , we arrive at the desired assertion , however without needing lemma [ lemma : behbeh ] .this results in tighter bounds as stated above .the argument is formally elaborated in . in practice ,the bounds we have proven seem irrelevant except for small expert classes , although asserting almost sure optimality and even a convergence rate .the exponential of the complexity may be huge .imagine for instance a moderately complex task and some good strategy , which can be coded with mere 500 bits .then its prior weight is , a constant which is not distinguishable from zero in all practical situations .thus , it seems that the bounds can be relevant at most for small expert classes with uniform prior .this is a general shortcoming of bandit style experts algorithms : for uniform prior a lower bound on the expected loss which scales with ( where is the size of the expert class ) has been proven . in order to get a lower bound on s regret in the time ,observe that is a _ label - efficient _ learner : according to the definition in , we may assume that in each exploration step , we incur maximal loss .it is immediate that the same analysis then still holds . for label - efficient prediction , cesa - bianchi et al . shown a lower regret bound of . since according to the remark at the end of section [ sec : master ] , we have an upper bound of , this is almost tight except for the additive term .it is an open problem to state a lower bound simultaneously tight in both and .even if the bounds , in particular , seem not practical , maybe would learn sufficiently quickly in practice anyway ?we believe that this is not so in most cases : the design of is too much tailored towards worst - case environments , is too _defensive_. assume that we have a good " and a bad " expert , and learns this fact after some time .then it still would spend a relatively huge fraction of to exploring the bad expert .such defensive behavior seems only acceptable if we are already starting with a class of good experts . de farias , d.p . ,megiddo , n. : how to combine expert ( and novice ) advice when actions impact the environment ? in thrun , s. , saul , l. , schlkopf , b. , eds . : advances in neural information processing systems 16 . mit press , cambridge , ma ( 2004 )kalai , a. , vempala , s. : efficient algorithms for online decision . in : proc .16th annual conference on learning theory ( colt-2003 ) .lecture notes in artificial intelligence , berlin , springer ( 2003 ) 506521 mcmahan , h.b . , blum , a. 
: online geometric optimization in the bandit setting against an adaptive adversary . in : 17th annual conference on learning theory ( colt ) .volume 3120 of lecture notes in computer science . , springer ( 2004 ) 109123 auer , p. , cesa - bianchi , n. , freund , y. , schapire , r.e . : gambling in a rigged casino : the adversarial multi - armed bandit problem . in : proc .36th annual symposium on foundations of computer science ( focs 1995 ) , los alamitos , ca , ieee computer society press ( 1995 ) 322331 hutter , m. : towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions .12 european conference on machine learning ( ecml-2001 ) ( 2001 ) 226238 cesa - bianchi , n. , lugosi , g. , stoltz , g. : minimizing regret with label efficient prediction . in : 17th annual conference on learning theory ( colt ) .volume 3120 of lecture notes in computer science . , springer ( 2004 ) 7792
this paper shows how universal learning can be achieved with expert advice . to this aim , we specify an experts algorithm with the following characteristics : ( a ) it uses only feedback from the actions actually chosen ( bandit setup ) , ( b ) it can be applied with countably infinite expert classes , and ( c ) it copes with losses that may grow in time appropriately slowly . we prove loss bounds against an adaptive adversary . from this , we obtain a master algorithm for " reactive " experts problems , which means that the master's actions may influence the behavior of the adversary . our algorithm can significantly outperform standard experts algorithms on such problems . finally , we combine it with a universal expert class . the resulting universal learner performs in a certain sense almost as well as any computable strategy , for any online decision problem . we also specify the ( worst - case ) convergence speed , which is very slow . * keywords . * prediction with expert advice , responsive environments , partial observation game , bandits , universal learning , asymptotic optimality .
in dynamic asset pricing models , stochastic discount factors ( sdfs ) are stochastic processes that assign prices to claims to future payoffs over different investment horizons . , and show that sdf processes may be decomposed into permanent and transitory components .the _ permanent component _ is a martingale that induces an alternative probability measure which is used to characterize pricing over long investment horizons .the _ transitory component _ is related to the return on a discount bond of ( asymptotically ) long maturity . andthe subsequent literature on bounds has found that sdfs must have nontrivial permanent and transitory components in order to explain a number of salient features of historical returns data . show that the permanent - transitory decomposition obtains even in very general semimartingale environments , suggesting that the decomposition is a fundamental feature of arbitrage - free asset pricing models .this paper introduces econometric methods to extract the permanent and transitory components of the sdf process .specifically , we show how to estimate the solution to the perron - frobenius eigenfunction problem introduced by from a time series of data on state variables and the sdf process . by estimating directly the eigenvalue and eigenfunction, one can reconstruct empirically the time series of the permanent and transitory components and investigate their properties ( e.g. the size of the components , their correlation , etc ) .the methodology also allows one to estimate both the yield and the change of measure which characterizes pricing over long investment horizons .this approach is fundamentally different from existing empirical methods for studying the permanent - transitory decomposition , which produce bounds on various moments of the permanent and transitory components as functions of asset returns .the pathbreaking work of shows that the sdf decomposition may be obtained analytically in markovian environments by solving a perron - frobenius eigenfunction problem .the permanent and transitory components are formed from the sdf , the eigenvalue , and its eigenfunction .the eigenvalue is a long - run discount factor that determines the average yield on long - horizon payoffs .the eigenfunction captures dependence of the price of long - horizon payoffs on the state .the probability measure that characterizes pricing over long investment horizons may be expressed in terms of the eigenfunction and another eigenfunction that is obtained from a time - reversed perron - frobenius problem .see , , , and for related theoretical developments .the methodology introduced in this paper complements the existing theoretical literature by providing an _ empirical _framework for estimating the perron - frobenius eigenvalue and eigenfunctions from a time series of data on the markov state and the sdf process .empirical versions of the permanent and transitory components can then be recovered from the estimated eigenvalue and eigenfunction .the methodology is _ nonparametric _ ,i.e. 
, it does not place any tight parametric restrictions on the law of motion of state variables or the joint distribution of the state variables and the sdf .this approach is coherent with the existing literature on bounds on the permanent and transitory components , which are derived without placing any parametric restrictions on the joint distribution of the sdf , its permanent and transitory components , and asset returns .this approach is also in line with conventional moment - based estimators for asset pricing models , such as gmm and its various extensions .examples include conditional moment based estimation methodology of which has been applied to estimate asset pricing models featuring habits and recursive preferences and the extended method of moments methodology of which is particularly relevant for derivative pricing . in structural macro - finance models , sdf processes ( and their permanent and transitory components )are determined by both the preferences of economic agents and the dynamics of state variables .several works have shown that standard preference and state specifications can struggle to explain salient features of historical returns data . find that certain specifications appear unable to generate a sdf whose permanent component is large enough to explain historical return premia without also generating unrealistically large spreads between long- and short - term yields . find that historical returns data support positive covariance between the permanent and transitory components , but that this positive association can not be replicated by workhorse models such as the long - run risks model .the role of dynamics can be subtle , especially with recursive preferences that feature forward - looking components .the nonparametric methodology introduced in this paper may be used together with parametric methods to better understand the roles of dynamics and preferences in building models whose permanent and transitory components have empirically realistic properties .of course , if state dynamics are treated nonparametrically then certain forward - looking components , such as the continuation value function under recursive preferences , are not available analytically .we therefore introduce nonparametric estimators of the continuation value function in models with recursive preferences with elasticity of intertemporal substitution ( eis ) equal to unity ..our approach is different from theirs for a number of reasons discussed in section [ s : emp ] .further , we study formally the large - sample properties of our estimators whereas do not . ]this class of preferences is used in prominent empirical work , such as , and may also be interpreted as risk - sensitive preferences as formulated by ( see ) .we reinterpret the fixed - point problem solved by the value function as a _ nonlinear _ perron - frobenius problem . in so doing ,we draw connections with the literature on nonlinear perron - frobenius theory following .the methodology is applied to study an environment similar to that in .we assume a representative agent with preferences with unit elasticity of intertemporal substitution . 
however , instead of modeling consumption and earnings using a homoskedastic gaussian var as in , we model consumption growth and earnings growth as a general ( nonlinear ) markov process .we recover the time series of the sdf process and its permanent and transitory components without assuming any parametric law of motion for the state .the permanent component is large enough to explain historical returns on equities relative to long - term bonds , strongly countercyclical , and highly correlated with the sdf .we also show that the permanent component induces a probability measure that tilts the historical distribution of consumption and earnings growth towards regions of low earnings and consumption growth and away from regions of high consumption growth . to understand better the role of dynamics, we estimate the permanent and transitory components corresponding to different calibrations of preference parameters .we find that the permanent and transitory components can be positively correlated for high ( but not unreasonable ) values of risk aversion .in contrast , we find that parametric linear - gaussian and linear models with stochastic volatility fitted to the same data have permanent and transitory components that are negatively associated .these findings suggest that nonlinear dynamics may have a useful role to play in explaining the long end of the term structure .although the paper is presented in the context of sdf decomposition , the methodology can be applied to study more general processes such as the valuation and stochastic growth processes in , , and .the sieve approach that we use for estimation builds on earlier work on nonparametric estimation of markov diffusions by and .this approach approximates the infinite - dimensional eigenfunction problem by a low - dimensional matrix eigenvector problem whose solution is trivial to compute .( is the sample size ) whereas with sieves the dimension is . 
]this approach also sidesteps nonparametric estimation of the transition density of the state .the main theoretical contributions of the paper may be summarized as follows .first , we study formally the large - sample properties of the estimators of the perron - frobenius eigenvalue and eigenfunctions .we show that the estimators are consistent , establish convergence rates for the function estimators , and establish asymptotic normality of the eigenvalue estimator and estimators of other functionals .these large - sample properties are established in a manner that is sufficiently general that it can allow for the sdf process to be either of a known functional form or containing components that are first estimated from data ( such as preference parameters and continuation value functions ) .although the analysis is confined to models in which the state vector is observable , the main theoretical result applies equally to models in which components of the state are latent .second , semiparametric efficiency bounds for the eigenvalue and related functionals are also derived for the case in which the sdf is of a known functional form and the estimators are shown to attain their bounds .third , we establish consistency and convergence rates for sieve estimators of the continuation value function for a class of models with recursive preferences .the derivation of the large sample properties of the eigenfunction / value estimators is nonstandard as the eigenfunction / value are defined implicitly by an unknown , nonselfadjoint operator .the literature on nonparametric eigenfunction estimation to date has focused almost exclusively on the selfadjoint case ( see and for sieve estimation and , , and for a kernel approach ) .the exception is and who establish asymptotic normality of kernel - based eigenfunction and eigenvalue estimators for nonparametric euler equation models .the derivation of convergence rates for the time - reversed eigenfunction , semiparametric efficiency bounds , and asymptotic normality of the eigenvalue estimator ( and estimators of related functionals ) are all new .the remainder of the paper is as follows .section [ s : setup ] briefly reviews the theoretical framework in and related literature and discusses both the scope of the analysis and identification issues .section [ s : est ] introduces the estimators of the eigenvalue , eigenfunctions , and related functionals and establishes their large - sample properties .nonparametric continuation value function estimation is studied in section [ s : recursive ] .section [ s : mc ] presents a simulation exercise which illustrates favorable finite - sample performance of the estimators .section [ s : emp ] presents the empirical application and section [ s : conc ] concludes .an appendix contains additional results on estimation , identification , and all proofs .this subsection summarizes the theoretical framework in , ( hs hereafter ) , , and ( bhs hereafter ) .we work in discrete time with denoting the set of non - negative integers . 
in arbitrage - free environments ,there is a positive _ stochastic discount factor _process that satisfies : = 1\ ] ] where is the ( gross ) return on a traded asset over the period from to , denotes the information available to all investors at date , and ] ( almost surely ) .hs show that the martingale induces an alternative probability measure which is used to characterize pricing over long investment horizons .the transitory component is the reciprocal of the return to holding a discount bond of ( asymptotically ) long maturity from date to date . provide conditions under which the permanent and transitory components exist . show that the decomposition obtains in very general semimartingale environments . to formally introduce the framework in hs and bhs , consider a probability space on which there is a time homogeneous , strictly stationary and ergodic markov process taking values in .let be the filtration generated by the histories of .when we consider payoffs that depend only on future values of the state and allow trading at intermediate dates , we may assume the sdf process is a _ positive multiplicative functional _ of .that is , is adapted to , for each ( almost surely ) and : where is the time - shift operator given by for each .is a function of and is a function of .the methodology applies to models in which the increments of the process are functionals of both and an additional noise process as in bhs .we maintain this simpler presentation for notational convenience .] define the operators which assign date- prices to date- payoffs by : \,.\ ] ] it follows from the time - homogeneous markov structure of and the multiplicative functional property of that for each , where : \label{e : m : def } \\\frac{m_{t+1}}{m_t } & = m(x_t , x_{t+1 } ) \label{e : sdf : m}\end{aligned}\ ] ] for some positive function . for convenience, we occasionally refer to as the sdf .hs introduce and study the perron - frobenius eigenfunction problem : where the eigenvalue is a positive scalar and the eigenfunction is positive .. ] we are also interested in the time - reversed perron - frobenius problem : where ] ( almost surely ) for each .hs show that there may exist multiple solutions to ( [ e : pev ] ) , but only one solution leads to processes and that may be interpreted correctly as permanent and transitory components .such a solution has a martingale term that induces a change of measure under which is _ stochastically stable _ ; see condition 4.1 in bhs for sufficient conditions. loosely speaking , stochastic stability requires that conditional expectations under the distorted probability measure converge ( as the horizon increases ) to an unconditional expectation ] will typically be different from the expectation ] is finite .when a result like ( [ e : lrr ] ) holds , we may interpret and from ( [ e : pctc ] ) as the permanent and transitory components .we may also interpret as the _ long - run yield_. further , the result shows that captures state dependence of long - horizon asset prices .the theoretical framework of hs may be used to characterize properties of the permanent and transitory components analytically by solving the perron - frobenius eigenfunction problem .below , we describe an empirical framework to estimate the eigenvalue and eigenfunctions from time series data on and the sdf process . 
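Given the eigenvalue and eigenfunction, the permanent and transitory components can be reconstructed along an observed sample path. The sketch below uses the standard Hansen–Scheinkman construction M_t^P = λ^{-t} M_t φ(X_t)/φ(X_0) and M_t^T = λ^t φ(X_0)/φ(X_t), which is presumably what the stripped equation ( [ e : pctc ] ) above refers to; the normalization M_0 = 1 and the function names are illustrative choices.

```python
import numpy as np

def pt_decomposition(sdf_increments, phi_path, lam):
    """permanent / transitory components along a sample path.
       sdf_increments : array of M_{t+1}/M_t = m(X_t, X_{t+1}), t = 0, ..., T-1
       phi_path       : array of phi(X_t), t = 0, ..., T
       lam            : Perron-Frobenius eigenvalue"""
    T = len(sdf_increments)
    M = np.concatenate([[1.0], np.cumprod(sdf_increments)])   # M_0 = 1, then M_t
    t = np.arange(T + 1)
    M_perm  = lam ** (-t) * M * phi_path / phi_path[0]        # martingale (permanent) component
    M_trans = lam ** t * phi_path[0] / phi_path               # transitory component
    return M_perm, M_trans                                    # note M = M_perm * M_trans
```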
the markov state vector is assumed throughout to be fully observable to the econometrician .however , we do not constrain the transition law of to be of any parametric form .appendix [ ax : filter ] discusses potential extensions of our methodology to models in which or a subvector of is unobserved by the econometrician .the main theoretical results on consistency and convergence rates ( theorem [ t : rate ] and [ t : fpest ] ) apply equally to such cases .we assume the sdf function is either observable or known up to some parameter which is first estimated from data on ( and possibly asset returns ) .[ [ case-1-sdf - is - observable ] ] case 1 : sdf is observable + + + + + + + + + + + + + + + + + + + + + + + + + here the functional form of is known ex ante .for example , consider the ccapm with time discount parameter and risk aversion parameter both pre - specified by the researcher . herewe would simply take provided consumption growth is of the form for some function .other structural examples include models with external habits and models with durables with pre - specified preference parameters .[ [ case-2-sdf - is - estimated ] ] case 2 : sdf is estimated + + + + + + + + + + + + + + + + + + + + + + + + here the is of the form where the functional form of is known up to a parameter . here could be of several forms : * a finite - dimensional vector of preference parameters in structural models ( e.g. and ) or risk - premium parameters in reduced - form models ( e.g. ) . * a vector of parameters together with a function , so .one example is models with recursive preferences , where the continuation value function is not known when the transition law of the markov state is modeled nonparametrically ( see and the application in section [ s : emp ] ) . for such models , would consist of discount , risk aversion , and intertemporal substitution parameters and would be the continuation value function .another example is in which consists of time discount and homogeneity parameters and is a nonparametric internal or external habit formation component .* we could also take to be itself , in which case would be a nonparametric estimate of the sdf .prominent examples include , , and . in this second case, we consider a two - step approach to sdf decomposition . in the first step we estimate from a time series of data on the state , and possibly also contemporaneous data on asset returns . in the second stepwe plug the first - stage estimator into the nonparametric procedure to recover , , , and related quantities . in this sectionwe present some sufficient conditions that ensure there is a unique solution to the perron - frobenius problems ( [ e : pev ] ) and ( [ e : pev : star ] ) .the conditions also ensure that a long - run approximation of the form ( [ e : lrr ] ) holds .therefore , the resulting and constructed from and as in ( [ e : pctc ] ) may be interpreted correctly as the permanent and transitory components . 
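For concreteness, in the observable-SDF case the CCAPM example above corresponds to an SDF function of the familiar form m(x_t, x_{t+1}) = δ G_{t+1}^{-γ}, where G_{t+1} is the gross consumption growth implied by the state and δ, γ are pre-specified by the researcher. The parameter values and data below are purely illustrative.

```python
import numpy as np

def ccapm_sdf(g_next, delta=0.994, gamma=10.0):
    """CCAPM stochastic discount factor m(x_t, x_{t+1}) = delta * G_{t+1}^(-gamma),
       where g_next is gross consumption growth between t and t+1.
       delta and gamma are illustrative values chosen by the researcher."""
    return delta * g_next ** (-gamma)

# example: a time series of SDF realizations from (hypothetical) gross consumption growth
growth = np.array([1.021, 0.998, 1.015, 1.004])
m_path = ccapm_sdf(growth)
```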
hs and bhs established very general identification , existence and long - run approximation results that draw upon markov process theory .the operator - theoretic conditions that we use are more restrictive than the conditions in hs and bhs but they are convenient for deriving the large - sample theory that follows .specifically , the conditions ensure certain continuity properties of , and with respect to perturbations of the operator .our results are also derived for the specific parameter ( function ) space that is relevant for estimation , whereas the results in hs and bhs apply to a larger class of functions .connections between our conditions and the conditions in hs and bhs are discussed in detail in appendix [ ax : id ] , which also treats separately the issues of identification , existence , and long - run approximation . for estimation ,all that we require is for the conclusions of proposition [ p : id ] below hold .therefore , the following conditions could be replaced by other sets of sufficient conditions .let denote the space of all measurable such that , where denotes the stationary distribution of .our parameter space for is the set of all positive functions in .let and denote the norm and inner product .we say that is bounded if and compact if maps bounded sets into relatively compact sets .finally , let denote the product measure on .[ a : id:0 ] let in ( [ e : mtau : def ] ) and in ( [ e : m : def ] ) satisfy the following : 1 . is a bounded linear operator of the form : for some ( measurable ) that is positive ( -almost everywhere ) 2 . is compact for some .* discussion of assumptions : * part ( a ) are mild boundedness and positivity conditions .if the unconditional density and the transition density of exist , then is of the form : in this case , the positivity condition will hold provided and the densities are positive ( almost everywhere ) .part ( b ) is weaker than requiring itself to be compact . to introduce the identification result ,let denote the spectrum of ( see , e.g. , chapter vii in ) .we say that is _ simple _ if it has a unique eigenfunction ( up to scale ) and _ isolated _ if there exists a neighborhood of such that .as and are defined up to scale , we say that and are unique if they are unique up to scale , i.e. if is a positive eigenfunction of then ( almost everywhere ) for some .[ p : id ] let assumption [ a : id:0 ] hold .then : 1 .there exists positive functions and a positive scalar such that solves ( [ e : pev ] ) and solves ( [ e : pev : star ] ) .the functions and are the unique positive solutions ( in ) to ( [ e : pev ] ) and ( [ e : pev : star ] ) .the eigenvalue is simple and isolated and it is the largest eigenvalue of .4 . the representation ( [ e : lrr ] ) holds for all with given by : = { \mathbb{e}}\left [ \psi ( x_t ) \phi^*(x_t ) \right]\ ] ] under the scale normalization = 1 ] . 
therefore , estimating and directly allows one to estimate this change of measure .the above identification result is based on an extension of the classical krein - rutman theorem in the mathematics literature .recently , similar operator - theoretic results have been applied to study identification in nonparametric euler equation models ( see , , and ) and semiparametric euler equation models featuring habits .time - reversed perron - frobenius problems and long - run approximation do not feature in these other works , whereas they are important from our perspective .identification under weaker , but related , operator - theoretic conditions is studied in .this section introduces the estimators of the perron - frobenius eigenvalue and eigenfunctions and and presents the large - sample properties of the estimators .we use a sieve approach in which the infinite - dimensional eigenfunction problem is approximated by a low - dimensional matrix eigenvector problem whose solution is easily estimated .this methodology is an empirical counterpart to projection methods in numerical analysis ( see , e.g. , chapter 11 in ) .let be a dictionary of linearly independent basis functions ( polynomials , splines , wavelets , fourier basis , etc ) and let denote the linear subspace spanned by .the sieve dimension is a smoothing parameter chosen by the econometrician and should increase with the sample size .let denote the orthogonal projection onto .consider the projected eigenfunction problem : where is the largest real eigenvalue of and is its eigenfunction . under regularity conditions, is unique ( up to scale ) for all large enough ( see lemma [ lem : exist ] ) .the solution to ( [ e : symprob ] ) must belong to the space .therefore , for a vector , where .we may rewrite the eigenvalue problem in display ( [ e : symprob ] ) as : is nonsingular . ] where the matrices and are given by : \label{e : gmat } \\ { \mathbf{m}}_k & = & { \mathbb{e}}[b^k(x_t ) m(x_t , x_{t+1 } ) b^k(x_{t+1 } ) ' ]\label{e : mmat}\end{aligned}\ ] ] and where is the largest real eigenvalue of and is its eigenvector .we refer to as the _ approximate solution _ for .the approximate solution for is where is the eigenvector of corresponding to .together , solve the generalized eigenvector problem: where is the largest real generalized eigenvalue of the pair .we suppress dependence of and on hereafter to simplify notation . to estimate , and , we solve the sample analogue of ( [ e : gev ] ) , namely : where and are described below and where is the maximum real generalized eigenvalue of the matrix pair ., , can be computed using the command ` [ c , d , cstar]=eig(mhat , ghat ) ` where ` mhat ` and ` ghat ` are the estimators and . then is the maximum diagonal entry of ` d ` , is the column of ` c ` corresponding to , and is the column of ` cstar ` corresponding to . simultaneous computation of and is also possible in r using , for example , the function ` qz.dggev ` in the ` qz ` package . ]the estimators of and are : under the regularity conditions below , the eigenvalue and its right- and left - eigenvectors and will be unique with probability approaching one ( see lemma [ lem : exist : hat ] ) . ) or when and are not unique , then we can simply take and set for all without altering the convergence rates or limiting distribution of the estimators .this was not an issue in simulations or the empirical application . 
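for concreteness, the following python sketch carries out the computation just described: it solves the sample generalized eigenvalue problem with `scipy.linalg.eig` and applies scale and sign normalizations of the kind used below. it assumes the estimated matrices ( `G_hat` and `M_hat` in the sketch ) and the matrix of basis functions evaluated at the sample have already been formed ( a sketch of that construction follows the next passage ), that `G_hat` is nonsingular, and the tolerance for discarding complex eigenvalues is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import eig

def pf_eigs(G_hat, M_hat, B):
    """Solve M_hat c = rho G_hat c and c*' M_hat = rho c*' G_hat, returning the
    largest real generalized eigenvalue and its right/left eigenvectors.
    B is the n x K matrix of basis functions evaluated at the sample.
    Assumes G_hat is nonsingular (as in the text)."""
    vals, vl, vr = eig(M_hat, G_hat, left=True, right=True)
    keep = np.abs(vals.imag) < 1e-8                    # numerically real eigenvalues
    idx = np.flatnonzero(keep)[np.argmax(vals.real[keep])]
    rho_hat = float(vals.real[idx])
    c_hat, cstar_hat = np.real(vr[:, idx]), np.real(vl[:, idx])
    # sign normalization: make the fitted eigenfunctions positive on average over the sample
    if (B @ c_hat).mean() < 0:
        c_hat = -c_hat
    if (B @ cstar_hat).mean() < 0:
        cstar_hat = -cstar_hat
    # scale normalizations c' G c = 1 and c*' G c = 1, i.e. the sample analogues
    # of a unit second moment for phi and a unit cross moment of phi* and phi
    c_hat = c_hat / np.sqrt(c_hat @ G_hat @ c_hat)
    cstar_hat = cstar_hat / (cstar_hat @ G_hat @ c_hat)
    return rho_hat, c_hat, cstar_hat

# the fitted eigenfunctions at a point x are then b^K(x)' c_hat and b^K(x)' cstar_hat
```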
] given a time series of data , a natural estimator for is : we consider two possibilities for estimating .[ [ case-1-sdf - is - observable-1 ] ] case 1 : sdf is observable + + + + + + + + + + + + + + + + + + + + + + + + + first , consider the case in which the function is specified by the researcher . in this caseour estimator of is [ [ case-2-sdf - is - estimated-1 ] ] case 2 : sdf is estimated + + + + + + + + + + + + + + + + + + + + + + + + now suppose that the sdf is of the form where the functional form of is known up to the parameter which is to be estimated first from the data on and possibly also asset returns .let denote this first - stage estimator . in this case ,we take : recall that the _ long - run yield _ is .we may estimate using : we may also estimate the size of the permanent component .the _ entropy _ of the permanent component , namely - { \mathbb{e}}[\log(m^p_{t+1}/m^p_t)] ] ( see and ) . given , a natural estimator of is : in case 1 ; in case 2 we replace by in ( [ e : lhat1 ] ) .the size of the permanent component may also be measured by other types of statistical discrepancies besides entropy ( e.g. cressie - read divergences ) which may be computed from the time series of the permanent component recovered empirically using and .we confine our attention to entropy because the theoretical literature has typically used entropy to measure the size of sdfs and their permanent components over different horizons ( see , e.g. , and ) and for sake of comparison with the empirical literature on bounds .here we establish consistency of the estimators and derive the convergence rates of the eigenfunction estimators under mild regularity conditions .[ a : id ] is bounded and the conclusions of proposition [ p : id ] hold .[ a : bias ] .let denote the inverse of the positive definite square root of and denote the identity matrix .define the `` orthogonalized '' matrices , , and .let also denote the euclidean norm when applied to vectors and the operator norm ( largest singular value ) when applied to matrices .[ a : var ] and .* discussion of assumptions : * assumption [ a : bias ] requires that the space be chosen such that it approximates well the range of ( as ) .similar assumptions are made in the literature on projection methods .assumption [ a : bias ] requires that is compact , as has been assumed previously in the literature on sieve estimation of eigenfunctions ( see , e.g. , ) .has range where .if is not compact but is compact for some , then one can apply the estimators to in place of and estimate the solution to and similarly for .consistency and convergence rates of , and would then follow directly from theorem [ t : rate ] . ]assumption [ a : var ] ensures that the sampling error in estimating vanishes asymptotically .this condition implicitly restricts the maximum rate at which can grow with , which will be determined by both the type of sieve and the weak dependence properties of the data .see appendix [ ax : est : mat ] for sufficient conditions .note that and are a proof device and do not need to be calculated in practice . before presenting the main result on convergence rates , wefirst we introduce the sequence of positive constants , , , and that bound the approximation bias ( and ) and sampling error ( and ) . as eigenfunctions are only normalized up to scale , in what follows we impose the normalizations and . define : the quantities and measure the bias incurred by approximating and by elements of . 
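before the rate analysis continues, here is a sketch of how the objects introduced above might be computed for a scalar state: the sample matrices built on a hermite polynomial sieve, case 1 sdf values from a power-utility specification with hypothetical preference parameters, and plug-in estimates of the long-run yield and of the entropy of the permanent component. the basis standardization, the parameter values, and the exact form of the yield and entropy estimators ( minus the log eigenvalue, and the log eigenvalue minus the average log sdf growth, respectively ) are assumptions of this sketch rather than choices taken from the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def hermite_basis(x, K):
    """K-dimensional Hermite polynomial sieve, standardized by the sample mean
    and standard deviation of the (scalar) state so the basis is roughly orthonormal."""
    z = (x - x.mean()) / x.std()
    return hermevander(z, K - 1)                          # n x K matrix of b^K(x_t)'

def estimate_matrices(x, m_vals, K):
    """Sample analogues of G and M for a scalar state series x (length n) and
    m_vals[t] = m(x_t, x_{t+1}) (length n-1); averages use the n-1 transitions."""
    B = hermite_basis(x, K)
    n1 = len(x) - 1
    G_hat = (B[:-1].T @ B[:-1]) / n1                      # sample E[ b(x_t) b(x_t)' ]
    M_hat = (B[:-1].T @ (m_vals[:, None] * B[1:])) / n1   # sample E[ b(x_t) m_{t+1} b(x_{t+1})' ]
    return G_hat, M_hat, B

# case 1 example: power-utility sdf, state = log consumption growth,
# with hypothetical (pre-specified) preference parameters beta and gamma
beta, gamma = 0.994, 10.0
def m_power_utility(x_t, x_tp1):
    return beta * np.exp(-gamma * x_tp1)

def yield_and_entropy(rho_hat, m_vals):
    """Plug-in long-run yield and entropy of the permanent component (case 1),
    assuming yield = -log(rho) and entropy = log(rho) - mean(log m_{t+1})."""
    return -np.log(rho_hat), np.log(rho_hat) - np.log(m_vals).mean()

# usage sketch, given an observed scalar state series x:
# m_vals = m_power_utility(x[:-1], x[1:])
# G_hat, M_hat, B = estimate_matrices(x, m_vals, K=5)
# rho_hat, c_hat, cstar_hat = pf_eigs(G_hat, M_hat, B)   # pf_eigs from the earlier sketch
# y_hat, L_hat = yield_and_entropy(rho_hat, m_vals)
```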
and are available under standard smoothness assumptions ( see ) .] let and and normalize and so that and ( these normalizations are equivalent to setting and ) . under assumption [ a : var ] , we may choose positive sequences and which are both , so that : ( see lemma [ lem : matcgce ] in the appendix for further details ) . the terms and will typically be increasing in and decreasing in : with more data we can estimate the entries of and more precisely , but with larger there are more parameters to estimate . [ t : rate ] let assumptions [ a : id][a : var ] hold . then : 1 . 2 . 3 . where and are defined in ( [ e : deltas ] ) and and are defined in ( [ e : etas ] ) .the convergence rates for and should be understood to hold under the scale normalizations , , and and sign normalizations and .[ rmk : gen ] theorem [ t : rate ] holds for , and calculated from any estimators and that satisfy assumption [ a : var ] with positive definite and symmetric ( almost surely ) . theorem [ t :rate ] is sufficiently general that it applies to models with latent state vectors without modification : all that is required is that one can construct estimators of and that satisfy assumption [ a : var ] . appendix [ ax : filter ] describes two approaches for extending the methodology to models with latent variables .the relation ( [ e : gev ] ) may also be used to numerically compute , , and in models for which analytical solutions are unavailable .for such models , the matrices and may be computed directly ( e.g. via simulation or numerical integration ) . the approximate solutions , and for , and be recovered by solving ( [ e : gev ] ) .lemma [ lem : bias ] gives the convergence rates , , and .theorem [ t : rate ] displays the usual bias - variance tradeoff encountered in nonparametric estimation .the bias terms and will be decreasing in ( since and are approximated over increasingly rich subspaces as increases ) . on the other hand , the variance terms and will typically be increasing in ( larger matrices ) and decreasing in ( more data ) . choosing to balance the bias and variance termswill yield the best convergence rate . to investigate the theoretical properties of the estimators , we derive the convergence rate of in case 1 , where and are as in ( [ e : ghat ] ) and ( [ e : mhat1 ] ) under standard conditions from the statistics literature on optimal convergence rates . 
although the following conditions are not particularly appropriate in an asset pricing context , the result is informative about the convergence properties of relative to conventional nonparametric estimators .[ c : rate ] let assumption [ a : id ] and the following conditions hold : ( i ) is compact , rectangular and has nonempty interior ; ( ii ) the stationary distribution of has a continuous and positive density ; ( iii ) is a bounded operator from into a hlder class of functions of smoothness ; ( iv ) with ; ( v ) <\infty ] .* discussion of assumptions : * assumption [ a : asydist](a ) is an undersmoothing condition which ensures that the approximation bias does not distort the asymptotic distribution of .assumption [ a : asydist](b)(c ) ensures the remaining terms in ( [ e : ale:1 ] ) are ; sufficient conditions for assumption [ a : asydist](b ) are presented in appendix [ ax : est : mat ] .assumption [ a : asydist](c ) requires that higher - order approximation error be asymptotically negligible .this condition is mild : the summands in have expectation zero , and , and are converging to , , and by lemma [ lem : bias ] .finally , assumption [ a : asydist](d ) ensures the asymptotic variance of is finite .the following result establishes asymptotic normality of in case 1 .appendix [ ax : inf ] contains further results on inference on functionals of and , such as the entropy of the permanent component .define ] , and for some , ( v ) , and ( vi ) increase with such that and . then : ( [ e : ale:1 ] ) holds and .we conclude by deriving the semiparametric efficiency bounds for case 1 . to derive the efficiencybound we require a further technical condition ( see appendix [ ax : inf ] ) . [ t : eff ] let assumptions [ a : id][a : asydist ] and [ a : eff ] hold . then : the semiparametric efficiency bound for is and is semiparametrically efficient .theorem [ t : eff ] provides further theoretical justification for using sieve methods to nonparametrically estimate , , and related quantities . in appendix[ ax : inf ] we derive efficiency bounds for and and show that and attain their bounds .for case 2 , we obtain the following expansion ( under regularity conditions ) : with from display ( [ e : inf : def ] ) with and : the expansion ( [ e : ale:2 ] ) indicates that the asymptotic distribution of and related functionals will depend on the properties of the first stage estimator .we first suppose that is a finite - dimensional parameter and the plug - in estimator is root- consistent and asymptotically normally distributed .the following regularity conditions are deliberately general so as to allow for to be any conventional parametric estimator of . to simplify notation , let .[ a : parametric ] let the following hold : 1 . for some -valued random process 2 . satisfies a clt , i.e. : })\ ] ] where the matrix } ] and for some .let } = ( 1\,,\ , { \mathbb{e}}[\phi^*(x_t ) \phi(x_{t+1})\frac{\partial m(x_t , x_{t+1};\alpha_0)}{\partial \alpha'}])' ] .[ t : asydist:2a ] let assumptions [ a : id][a : parametric ] hold .then : }) ] if ) - \ell(\alpha_0))/\tau ] .let and let .let denote the centered empirical process on .we say that is a _ donsker class _ if is absolutely convergent over to a non - negative quadratic form and there exists a sequence of gaussian processes indexed by with covariance function and a.s .uniformly continuous sample paths such that as ( see ) .[ a : nonpara ] let the following hold : 1 . 
is gateaux differentiable at and | = o(\|\alpha - \alpha_0\|^2_{{\mathcal{a}}}) ] for some -valued random process , , and for some 3 . satisfies a clt , i.e. : })\ ] ] where the matrix } ] and for some , and for some , and .parts ( a ) and ( b ) are standard conditions for inference in nonlinear semiparametric models ( see , e.g. , theorem 4.3 in ) .part ( c ) is a mild clt condition .the covariance function is well defined under parts ( d ) and ( e ) .sufficient conditions for the class to be donsker are well known ( see , e.g. , ) . for the following theorem ,let } = ( 1,1)' ] .sufficient conditions for assumption [ a : var ] are presented in appendix [ ax : est : mat ] for this case .[ t : asydist:2b ] let assumptions [ a : id][a : asydist ] and [ a : nonpara ] hold .then : }) ] . to simplify notation we drop dependence of and on hereafter . for estimation , we solve a sample analogue of ( [ e : fp : vec ] ) , namely : where is defined in display ( [ e : ghat ] ) and is given by : under the regularity conditions below , a solution on a neighborhood of necessarily exists wpa1 ( see lemma [ lem : fphat : exist ] ) .the estimators of , and are : the estimators and can then be plugged into display ( [ e : rec : sdf ] ) to obtain the sdf consistent with preference parameters , and the observed law of motion of the state .we say that is _ continuously frchet differentiable _ at if is differentiable on a neighborhood of and as .[ a : fp : exist ] let the following hold : 1 . has a unique positive fixed point 2 . is compact and positive 3 . is continuously frchet differentiable at with .[ a : fp : bias ] let the following hold : 1 . 2 . is dense in the range of as .let be as in assumption [ a : var ] .let and .[ a : fp : var ] and for each .* discussion of assumptions : * assumption [ a : fp : exist ] imposes some mild structure on which ensures that fixed points of are continuous under perturbations . assumption [ a : fp : bias](a ) is analogous to assumption [ a : bias ] .assumptions [ a : fp : exist ] and [ a : fp : bias ] are standard in the literature on solving nonlinear equations with projection methods .finally , assumption [ a : fp : var ] is similar to assumption [ a : var ] and restricts the rate at which the sieve dimension can grow with .sufficient conditions for assumption [ a : fp : var ] are presented in appendix [ ax : est : mat : fp ] .let denote the bias in approximating by an element of the sieve space .assumption [ a : fp : bias](b ) implies that . to control the sampling error , fix any small . by assumption [ a : fp : var ]we may choose a sequence of positive constants with such that : [ t : fpest ] let assumptions [ a : fp : exist][a : fp : var ] hold .then : 1 . 2 . 3 . .the convergence rates obtained in theorem [ t : fpest ] again exhibit a bias - variance tradeoff .the bias terms are decreasing in , whereas the variance term is typically increasing in but decreasing in .choosing to balance the terms will lead to the best convergence rate . for implementation, we recommend the following iterative scheme based on proposition [ p : nl ] . 
set , then calculate : for .if the sequence converges to ( say ) , we then set : this procedure converged in the simulations and empirical application and was more efficient than numerical solution of the sample fixed - point problem ( [ e : fp : sample ] ) .the following monte carlo experiment illustrates the performance of the estimators in consumption - based models with power utility and recursive preferences .the state variable is log consumption growth , denoted , which evolves as a gaussian ar(1 ) process : the parameters for the simulation are , , and .the data are constructed to be somewhat representative of quarterly growth in u.s .real per capita consumption of nondurables and services ( for which and ) .however , we make the consumption growth process twice as persistent to produce more nonlinear eigenfunctions and twice as volatile to produce a more challenging estimation problem .we consider a power utility design in which and a design with recursive preferences with unit eis , whose sdf is presented in display ( [ e : rec : sdf ] ) . for both designs we set and .the parameterization and is typically used in calibrations of long - run risks models ( ) ; here we take to produce greater nonlinearity in the eigenfunctions . for each designwe generate 50000 samples of length 400 , 800 , 1600 , and 3200 .results reported in this section use either a hermite polynomial or cubic b - spline basis of dimension .the hermite polynomials are normalized by the mean and standard deviation of the data to be approximately orthonormal and the knots of the b - splines are placed evenly at the quantiles of the data .the results were reasonably insensitive both to the choice of sieve and to the dimension of the sieve space .we estimate , , , , and for both designs and we also estimate and for the recursive preference design . to implement the estimators , , and , we use the estimator in ( [ e : ghat ] ) for both preference specifications . for power utilitywe use the estimator in ( [ e : mhat1 ] ) . for recursive preferenceswe first estimate using the method described in the previous section , then construct the estimator as in display ( [ e : mhat2 ] ) , using the plug - in sdf : based on first - stage estimators of .we normalize that and so that . we also normalize so that .the bias and rmse of the estimators are presented in tables [ tab : mc1 ] and [ tab : mc2 ] . , , and ,for each replication we calculate the distance between the estimators and their population counterparts , then take the average over the mc replications . to calculate the bias we take the average of the estimators across the mc replications to produce , , and ( say ) ,then compute the distance between , and and the true , and .the use of the `` bias '' here is not to be confused with the bias term in the convergence rate calculations .there `` bias '' measures how close , , and are to , , and . here`` bias '' of an estimator refers to the distance between the parameter and the average of its estimates across the mc replications .similar calculations are performed for , , , and . 
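the two computational pieces described above can be sketched as follows in python: a generic version of the recommended iteration, written for a sample problem assumed to take the form `G_hat c = A_hat(c)` with the nonlinear sample operator `A_hat` supplied by the user for the chosen preference specification ( the exact form of the sample fixed-point problem is this sketch's assumption ), and a simulator for a gaussian ar(1) state of the kind used in the monte carlo design. parameter names and default values are placeholders, not the values used in the experiment.

```python
import numpy as np

def solve_fixed_point(G_hat, A_hat, c0, tol=1e-10, max_iter=1000):
    """Iterate c_{j+1} = solve(G_hat, A_hat(c_j)) until the coefficients settle.
    Assumes the sample fixed-point problem has the form G_hat c = A_hat(c), where
    A_hat maps a coefficient vector to the sample moment of the nonlinear operator
    applied to the corresponding sieve function."""
    c = np.asarray(c0, dtype=float)
    for _ in range(max_iter):
        c_new = np.linalg.solve(G_hat, A_hat(c))
        if np.max(np.abs(c_new - c)) < tol:
            return c_new
        c = c_new
    raise RuntimeError("fixed-point iteration did not converge")

def simulate_ar1(n, mu=0.005, kappa=0.5, sigma=0.01, seed=0):
    """Gaussian AR(1) log consumption growth, x_{t+1} = mu + kappa (x_t - mu) + sigma eps,
    as a stand-in for the Monte Carlo state process; parameter values are placeholders."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + kappa * (x[t - 1] - mu) + sigma * rng.standard_normal()
    return x
```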
]table [ tab : mc1 ] shows that , and may be estimated with small bias and rmse using a reasonably low - dimensional sieve for both hermite polynomials and cubic b - splines .table [ tab : mc2 ] presents similar results for , , and .the rmses for and under recursive preferences are typically smaller than the rmses for and under power utility , even though with recursive preferences the continuation value must be first estimated nonparametrically .in contrast , the rmse for is larger under recursive preferences , which is likely due to the fact that is much more curved for that design ( as evident from comparing the vertical scales figures [ fig : mc : sfig2 ] and [ fig : mc : sfig4 ] ) .the results in table [ tab : mc1 ] also show that may be estimated with a reasonably small degree of bias and rmse in moderate samples .figures [ fig : mc : sfig1][fig : mc : sfig5:bs ] also present ( pointwise ) confidence intervals for , and computed across simulations of different sample sizes . for each figure ,the true function lies approximately in the center of the pointwise confidence intervals , and the widths of the intervals shrink noticeably as the sample size increases .the results for hermite polynomials and cubic b - splines are similar . however , the intervals are somewhat narrower at their extremities for cubic b - splines than for hermite polynomials .[ cols="^,^,^,^,^,^,^ " , ] table [ tab : emp ] presents the estimates and 90% confidence intervals . , , , , , , and for each bootstrap replication .we discard the fraction of replications in which the estimator of failed to converge . in the right panelwe fix and and re - estimate , , , , and for each bootstrap replication .] there are several notable aspects .first , both state specifications are able to generate a permanent component whose entropy is consistent with a return premium of around 2.1% per quarter relative to the long bond , which is in the ballpark of empirically reasonable estimates .second , the estimated long - run yield of around 2.2% per quarter is too large , which is explained by the low value of .third , the estimated entropy of the sdf itself is for the bivariate specification and for the univariate specification .. ] therefore , the estimated horizon dependence ( the difference between the entropy of the permanent component and the entropy of the sdf ; see ) is about 0.02% and 0.07% per quarter , respectively .these estimates are well inside the bound of .1% per month that argue is required to match the spread in average yields between short- and long - term nominal bonds .finally , the estimates of are quite imprecise , in agreement with previous studies ( see , e.g. , who use a similar data set and a different , but related , estimation technique ) .the confidence intervals for , and in the left panel of table [ tab : emp ] are quite wide because they reflect , in large part , the uncertainty in estimating and .experimentation with different sieve dimensions resulted in estimates of between and , which is in agreement with previous studies . using aggregate consumption data and using stockholder consumption data .further , with stockholder data their estimated eis is not significantly different from zero .this suggests that our estimates of and maintained assumption of a unit eis are empirically plausible . 
]the right panel of table [ tab : emp ] presents estimates of , and fixing and , , and ( and are still estimated nonparametrically ) .it is clear that the resulting confidence intervals are much narrower once the uncertainty in estimating and especially is shut down .for example , with the estimate of is very similar to that obtained for , but the confidence interval shrinks from to .confidence intervals for and are also considerably narrower .we now turn to analyzing the time - series properties of the permanent and transitory components .the upper panel of figure [ fig : ts ] presents time - series plots of the sdf under recursive preferences obtained at the parameter estimates for the bivariate state specification .the upper panel also plots the time series of the permanent and transitory components , which are constructed as : as can be seen , the great majority of the movement in the sdf is attributable to movements in the permanent component .both evolve closely over time and exhibit strong counter - cyclicality .the transitory component for recursive preferences is small , coherent with the literature on bounds which finds that the transitory component should be substantially smaller than the permanent component .the correlation of the permanent component series and gdp growth is approximately whereas the correlation of the transitory component series and is approximately .the lower panel of figure [ fig : ts ] displays the time series of the sdf and permanent and transitory components obtained under power utility using the same .this panel shows that the permanent component , which is similar to that obtained under recursive preferences , is much more volatile than the sdf series .the large difference between the sdf and permanent component series under power utility is due to a very volatile transitory component , which implies a counterfactually large spread in average yields between short- and long - term bonds ( ) . to understand further the long - run pricing implications of the estimates of , and , figures [ fig : emp : phi_1][fig : emp : phistar_2 ] plot the estimated and under recursive preferences for both the bivariate and univariate state specifications .it is clear that both estimates of are reasonably flat , which explains the small variation in the transitory component in figure [ fig : ts ] .however , the estimated has a pronounced downward slope in .the bivariate specification shows that this is only part of the story , as the estimated is also downward - sloping in , especially when consumption growth is relatively low .what is the economic significance of these estimates ?proposition [ p : id ] shows that is the radon - nikodym derivative of the measure corresponding to , say , with respect to . figures [ fig : emp : rn_1][fig : emp : rn_2 ] plot the estimated change of measure for the two state specifications . 
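the series plotted in figure [ fig : ts ] and the change of measure plotted in figures [ fig : emp : rn_1][fig : emp : rn_2 ] can be reconstructed directly from the estimated eigenvalue and eigenfunctions. the sketch below assumes the factorization in which the growth of the permanent component equals sdf growth times the eigenfunction ratio divided by the eigenvalue, and the growth of the transitory component is its reciprocal adjustment, with both components normalized to one at the initial date; since the corresponding displays are not reproduced above, these formulas should be read as the sketch's assumption.

```python
import numpy as np

def recover_components(m_vals, phi_vals, rho_hat):
    """Reconstruct the sdf and its permanent/transitory components, assuming
    M^P growth = m_{t+1} phi(x_{t+1}) / (rho phi(x_t)) and
    M^T growth = rho phi(x_t) / phi(x_{t+1}), with levels set to 1 at t = 0.
    m_vals[t] = m(x_t, x_{t+1}); phi_vals[t] = fitted phi at x_t."""
    gP = m_vals * phi_vals[1:] / (rho_hat * phi_vals[:-1])
    gT = rho_hat * phi_vals[:-1] / phi_vals[1:]
    MP = np.concatenate(([1.0], np.cumprod(gP)))
    MT = np.concatenate(([1.0], np.cumprod(gT)))
    return MP * MT, MP, MT      # sdf level, permanent component, transitory component

def change_of_measure(phi_vals, phistar_vals):
    """Estimated Radon-Nikodym derivative of the long-run pricing measure with
    respect to the stationary distribution, evaluated at the sample points."""
    return phistar_vals * phi_vals
```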
as the estimate of is relatively flat, the estimated change of measure is characterized largely by .therefore , in the bivariate case , the distribution assigns relatively more mass to regions of the state space in which there is low dividend and consumption growth than the stationary distribution , and relatively less mass to regions with high consumption growth ..5 for ,title="fig : " ] .5 for ,title="fig : " ] + .5 for ,title="fig : " ] .5 for ,title="fig : " ] + .5 for ,title="fig : " ] .5 for ,title="fig : " ] finally , we turn to the role of nonlinearities and non - gaussianity in explaining certain features of the long - end of the term structure .figure [ fig : emp : yield ] presents nonparametric estimates of ( a ) the ( quarterly ) long - run yield and ( b ) the covariance between the logarithm of the permanent and transitory components , namely and , recovered from the data on with and increased from to .the nonparametric estimates are presented alongside estimates for two parametric specifications of the state process .the first specification assumes evolves as a gaussian var(1 ) with constant variance .the second specification is a gaussian ar(1 ) for log consumption growth with stochastic volatility : where is a first - order autoregressive gamma process ( i.e. a discrete - time versions of the feller square - root process ; see ) .the state vector for the second specification is .we refer to this second specification as sv - ar(1 ) .the long - run yield and the covariance cov(, ) were obtained analytically as functions of , , and the estimates of the var(1 ) and sv - ar(1 ) parameters ., and the three autoregressive gamma process parameters for the sv - ar(1 ) model are estimated via indirect inference using a garch(1,1 ) auxiliary model .analytic solutions for the stochastic volatility specification are available by following arguments in appendix h of . ] .5 , ),title="fig : " ] .5 , ),title="fig : " ] figure [ fig : emp : yld ] shows that the nonparametric estimates of the long - run yield are non - monotontic , whereas the parametric estimates are monotonically decreasing .this non - monotonicity is not apparent in the nonparametric estimates using .it is also clear that the nonparametric estimates of the long - run yield are much larger for larger values of than the corresponding results from either of the two parametric models .figure [ fig : emp : cov ] displays the covariance between the log of the nonparametric estimates of the permanent and transitory components for different values of against those obtained for the two parametric state processes . the covariance of the nonparametric estimates of the permanent and transitory components is negative for low to moderate values of , but becomes positive for larger values of .in contrast , the parametric estimates are negative and decreasing in .a recent literature has emphasized the role of positive dependence between the permanent and transitory components in explaining excess returns of long - term bonds .positive dependence also features in models in which the term structure of risk prices is downward sloping ( see , e.g. 
, the example presented in section 7.2 in ) .however , positive dependence is known to be difficult to generate via conventional preference specifications in workhorse models with exponentially - affine dynamics , such as the long - run risks model of .although the covariance is poorly estimated for large values of , this finding at least suggests that nonlinearities in state dynamics may have a role to play in explaining salient features of the long end of the yield curve .this paper introduces econometric methods to extract the permanent and transitory components of the sdf process in the long - run factorization of , , and .we show how to estimate the solution to the perron - frobenius eigenfunction problem directly from a time series of data on state variables and the sdf process . by estimating directly the perron - frobenius eigenvalue and eigenfunction, we can ( 1 ) reconstruct the time series of the permanent and transitory components and investigate their properties ( e.g. the size of the components , their correlation , etc ) , and ( 2 ) estimate both the yield and the change of measure which characterizes pricing over long investment horizons .this represents a useful contribution relative to existing empirical work which has established bounds on various moments of the permanent and transitory components as functions of asset returns , but has not extracted these components directly from data .the methodology is nonparametric in that it does not impose tight parametric restrictions on the dynamics of the state .we view this approach as being complementary to parametric methods and useful to help better understand the roles of dynamics and preferences in structural macro - finance models .the main technical contributions of this paper are to introduce nonparametric sieve estimators of the eigenvalue and eigenfunctions and establish consistency , convergence rates and asymptotic normality of the estimators , and some efficiency properties .we also introduce nonparametric estimators of the continuation value function under recursive preferences with unit eis and study their large - sample properties .the econometric methodology may be extended and applied in several different ways .first , the methodology can be applied to study more general multiplicative functional processes such as the valuation and stochastic growth processes in , , and .second , the methodology can be applied to models with latent state variables .the main theoretical results ( theorems [ t : rate ] and [ t : fpest ] ) are sufficiently general that they apply equally to such cases .finally , our analysis was conducted within the context of structural models in which the sdf process was linked tightly to preferences .a further extension would be to apply the methodology to sdf processes which are extracted flexibly from panels of asset returns data . 
to this end, this methodology could be applied in conjunction with information projection-based sdf estimation procedures developed recently by and .

supplementary appendix for nonparametric stochastic discount factor decomposition

this appendix contains material to support the paper `` nonparametric stochastic discount factor decomposition '' . appendix [ ax : est ] presents further results on estimation of the eigenvalue and eigenfunctions used for sdf decomposition and on the estimation of the continuation value function under recursive preferences. appendix [ ax : inf ] contains additional results on inference. appendix [ ax : id ] provides further details on the relation between the identification and existence conditions in section [ s : id ] and the identification and existence conditions in and . appendix [ ax : filter ] discusses extension of the sdf decomposition methodology to models with latent state variables. proofs of all results are presented in appendix [ ax : proofs ].

in this section we present some supplementary lemmas from which theorems [ t : rate ] and [ t : fpest ] follow immediately. we also present some sufficient conditions for assumptions [ a : var ] and [ a : fp : var ]. when we refer to an eigenfunction / eigenvector as being unique, we mean unique up to sign and scale normalization. the results below for and are derived using arguments from ( note that study a selfadjoint operator whereas here the operator is nonselfadjoint ). our results for are new. the first result shows that the approximate solutions , and from the eigenvalue problem ( [ e : gev ] ) are well defined and unique for all sufficiently large . it should be understood throughout that , , and are all real-valued eigenvectors.

[ lem : exist ] let assumptions [ a : id ] and [ a : bias ] hold. then there exists such that for all , the maximum eigenvalue of the eigenvector problem ( [ e : gev ] ) is real and simple, and hence has unique right- and left-eigenvectors and corresponding to .

[ lem : bias ] let assumptions [ a : id ] and [ a : bias ] hold. then: 1 . 2 . 3 . where and are defined in display ( [ e : deltas ] ). the rates should be understood to hold under the scale normalizations , , , and and the sign normalizations and .

the following result shows that the solutions , and to the sample eigenvalue problem ( [ e : est ] ) are well defined and unique with probability approaching one ( wpa1 ).

[ lem : exist : hat ] let assumptions [ a : id ] , [ a : bias ] , and [ a : var ] hold. then wpa1, the maximum eigenvalue of the generalized eigenvector problem ( [ e : est ] ) is real and simple, and hence has unique right- and left-eigenvectors and corresponding to .

[ lem : var ] let assumptions [ a : id ] , [ a : bias ] , and [ a : var ] hold. then: 1 . 2 . 3 . where and are defined in display ( [ e : etas ] ). the rates should be understood to hold under the scale normalizations , , and and the sign normalizations and .

the following two lemmas are known in the literature on solution of nonlinear equations by projection methods ( see , e.g. , chapter 19 of ). the first result shows that the approximate solution is well defined for all sufficiently large and converges to the fixed point as .

[ lem : fp : exist ] let assumptions [ a : fp : exist ] and [ a : fp : bias ] hold.
then there exists and such that for all the projected problem has a unique solution in the ball .moreover , as .let and .[ lem : bias : fp ] let assumptions [ a : fp : exist ] and [ a : fp : bias ] hold .then , , and are each well defined for all sufficiently large , and : 1 . 2 . 3 . .we now show that the sample fixed - point problem has a solution wpa1 , and that the solution is a consistent estimator of .the following two results are new .[ lem : fphat : exist ] let assumptions [ a : fp : exist][a : fp : var ] hold .then wpa1 , there exists a fixed point of on a neighborhood of and the estimator from ( [ e : fpest ] ) satisfies .the following result bounds the sampling error .notice that the `` bias term '' shows up due to the nonlinearity of .in contrast , the bias term does not appear in the corresponding `` variance terms '' for the linear eigenvalue problem ( see lemma [ lem : var ] ) .[ lem : var : fp ] let assumptions [ a : fp : exist][a : fp : var ] hold .then wpa1 there exists a fixed point of such that the estimators , , and from ( [ e : fpest ] ) satisfy : 1 . 2 . 3 . .we derive the results assuming that the state process is either beta - mixing ( absolutely regular ) or rho - mixing .we use these dependence concepts because a variety of models for macroeconomic and financial time series exhibit these types weak dependence .examples include copula - based models and discretely sampled diffusion processes .the beta - mixing coefficient between two -algebras and is : with the supremum taken over all -measurable finite partitions and -measurable finite partitions .the beta - mixing coefficients of are defined as : we say that is _ exponentially beta - mixing _ if for some and .the rho - mixing coefficients of are defined as : = 0 , \|\psi\| = 1 } { \mathbb{e}}\big [ { \mathbb{e}}[\psi(x_{t+q})|x_t]^2\big]^{1/2}\,.\ ] ] we say that is _ exponentially rho - mixing _ if for some .the following two lemmas derive convergence rates for the estimators in ( [ e : ghat ] ) and in ( [ e : mhat1 ] ) .these results may be used to verify assumptions [ a : var ] and [ a : asydist](b ) and bound the terms and in display ( [ e : etas ] ) .lemma [ lem : beta:1 ] uses an exponential inequality for sums of weakly - dependent random matrices derived in .recall that .[ lem : beta:1 ] let the following hold : 1 . is strictly stationary and exponentially beta - mixing 2 . < \infty ] for some 3 . . then : assumption [ a : var ] holds , and we may take . the following lemma derives convergence rates for the estimators in ( [ e : ghat ] ) and in ( [ e : mhat2 ] ) when is a finite - dimensional parameter .if the first - stage estimator converges at a root- rate ( which corresponds to taking in the following lemma ) , then the same convergence rates for , and are obtained as for case 1 . for brevity , in the following two results we consider only the case in which the state process is beta - mixing .[ lem : beta:2 ] let the conditions of lemma [ lem : beta:1 ] hold for , and let : 1 . for some 2 . be continuously differentiable in on a neighborhood of for all with : < \infty\ ] ] 3 . . then :assumption [ a : var ] holds , and we may take .we now derive convergence rates for the estimators in ( [ e : ghat ] ) and in ( [ e : mhat2 ] ) for the semi / nonparametric case in which is an infinite - dimensional parameter .the parameter space is ( a banach space ) equipped with some norm .this includes the case in which is a function , i.e. 
with a function space , and in which consists of both finite - dimensional and function parts , i.e. with where . for each define as the operator ] denote the entropy with bracketing of with respect to the norm .finally , let and observe that .[ lem : beta:3 ] let the conditions of lemma [ lem : beta:1 ] hold for , and let : 1 . have envelope function with for some 2 .}(u,{\mathcal{m}}^*,\|\cdot\|_{\frac{4vs}{2s - v } } ) \leq { \mathrm{const } } \times u^{-2\zeta} ] as 4 . and = o_p(1) ] for some 3 . .then : assumption [ a : fp : var ] holds , and we may take .here we consider the asymptotic distribution the estimator of the entropy of the permanent component of the sdf , namely ] is positive definite . then : })\ ] ] where } = \hbar_{[{\mathrm{2a}}]}'w_{[{\mathrm{2a}}]}^{\phantom \prime } \hbar_{[{\mathrm{2a}}]}^{\phantom \prime} ] , then : the efficiency bound for is .the following result shows that behaves as a linear functional of .this is used to derive the asymptotic distribution of in theorem [ t : asydist:1 ] .it follows from assumption [ a : var ] that we can choose sequences of positive constants and such that : with and as .[ lem : expansion ] let assumptions [ a : id ] , [ a : bias ] and [ a : var ] hold . then : with and normalized so that and . in particular , if and then : this appendix we discuss separately existence and identification , and compare the conditions in the present paper with the stochastic stability conditions in ( hs hereafter ) and ( bhs hereafter ) .[ a : id:1 ] let the following hold : 1 . is bounded 2 .there exists positive functions and a positive scalar such that solves ( [ e : pev ] ) and solves ( [ e : pev : star ] ) 3 . is positive for each non - negative that is not identically zero .* discussion of assumptions : * parts ( a ) and ( c ) are implied by the boundedness of and positivity of in assumption [ a : id:0](a ) . proposition [ p : a : exist ] below shows that the conditions in assumption [ a : id:0 ] are sufficient for existence ( part ( b ) ). no compactness condition is assumed .[ p : a : id ] let assumption [ a : id:1 ] hold . then : the functions and are the unique solutions ( in ) to ( [ e : pev ] ) and ( [ e : pev : star ] ) , respectively .we now compare the identification results with those in hs and bhs .some of hs s conditions related to the generator of the semigroup of conditional expectation operators ] .the following are discrete - time versions of assumptions 6.1 , 7.1 , 7.2 , 7.3 , and 7.4 in hs .[ c : hs ] 1 . is a positive multiplicative functional 2 .there exists a probability measure such that \,{\mathrm{d } } \hat \varsigma(x ) = \int \psi(x)\,{\mathrm{d } } \hat \varsigma(x)\ ] ] for all bounded measurable 3 . for any with , > 0\ ] ] for all 4 . for any with , for all , where \,{\mathrm{d } }\hat \varsigma(x)\ ] ] for each .condition [ c : hs](a ) is satisfied by construction of in ( [ e : pctc ] ) . for condition[ c : hs](b ) , let and be as in assumption [ a : id:1](b ) and normalize such that = 1 ] for all .below shows that this probability measure is precisely the measure used to define the unconditional expectation in the long - run approximation ( [ e : lrr ] ) . 
]recall that is the stationary distribution of .we then have : \,{\mathrm{d } } \hat \varsigma(x ) \\ & = \int { \mathbb{e } } \left [ \left .\rho^{-1 } m(x_t , x_{t+1 } ) \frac{\phi(x_{t+1})}{\phi(x_t ) } \psi(x_{t+1 } ) \right| x_t = x \right ] \phi(x ) \phi^*(x)\,{\mathrm{d } } q(x ) \\ & = \rho^{-1 } { \mathbb{e } } \left [ \phi^*(x_t ) ( { \mathbb{m}}(\phi \psi ) ( x_t ) ) \right ] \\ & = \rho^{-1 } { \mathbb{e } } \left [ ( ( { \mathbb{m}}^ * \phi^*)(x_{t+1 } ) ) \phi(x_{t+1 } ) \psi(x_{t+1 } ) \right ] \\ & = { \mathbb{e}}[\phi^*(x_{t+1 } ) \phi(x_{t+1 } ) \psi(x_{t+1 } ) ] = \int \psi(x ) \,{\mathrm{d } } \hat \varsigma(x)\,.\end{aligned}\ ] ] therefore , condition [ c : hs](b ) is satisfied .a similar derivation is reported for continuous - time semigroups in an preliminary 2005 draft of hs with replaced by an arbitrary measure . for condition[ c : hs](c ) , note that implies under our construction of .therefore , implies is positive on a set of positive measure .moreover , by definition of we have : & = & \frac{1}{\phi(x ) } \sum_{t=1}^\infty \rho^{-t } \mathbb m_t ( \phi(\cdot ) { 1\!\mathrm{l}}\ { \cdot \in \lambda\})(x ) \\ & \geq & \frac{1}{\phi(x ) } \sum_{t=1}^\infty \lambda^{-t } \mathbb m_t(\phi(\cdot ) { 1\!\mathrm{l}}\ { \cdot \in \lambda\})(x ) \end{aligned}\ ] ] for any where denotes the spectral radius of .assumption [ a : id:1](c ) implies is irreducible and , by definition of irreducibility , ( almost everywhere ) holds for .therefore , assumption [ a : id:1](c ) implies condition [ c : hs](c ) , up to the `` almost everywhere '' qualification .part ( d ) is a harris recurrence condition which does not translate clearly in terms of the operator .when combined with existence of an invariant measure and irreducibility ( condition [ c : hs](b ) and ( c ) , respectively ) , it ensures both uniqueness of as the invariant measure for the distorted expectations as well as -ergodicity , i.e. , - \int \frac{\psi(x)}{\phi(x)}\,{\mathrm{d } } \hat \varsigma(x ) \right| = 0\ ] ] ( almost everywhere ) where the supremum is taken over all measurable such that ( * ? ? ?* proposition 14.0.1 ) . result ( [ e : hscgce ] ) is a discrete - time version of proposition 7.1 in hs , which they use to establish identification of .assumption [ a : id:1 ] alone is not enough to obtain a convergence result like ( [ e : hscgce ] ) . on the other hand , the conditions in the present paper assume existence of whereas no positive eigenfunction of the adjoint of is guaranteed under the conditions in hs . to belong to a banach space ) .] 
this suggests the harris recurrence condition is of a very different nature from assumption [ a : id:1 ] .bhs assume that is ergodic under the probability measure , for which conditions [ c : hs](b)(d ) are sufficient .also notice that condition [ c : hs](a ) is satisfied by construction in bhs .the identification results in hs and the proof of proposition 3.3 in bhs shows that uniqueness is established in the space of functions for which ] and = 1 ] is characterized by the stationary distribution of and the positive eigenfunctions of and .in this appendix we describe two approaches that may be used to extend the methodology presented in this paper to models with latent state variables .theorem [ t : rate ] will continue to apply provided that one can construct estimators of and that satisfy assumption [ a : var ] .formal verification of assumption [ a : var ] for the following approaches requires a nontrivial extension of the statistical theory in appendix [ ax : est : mat ] , which we defer to future research .a first approach to dealing with models with a latent volatility state variable is via _ high - frequency proxies ._ suppose that the state vector may be partitioned as where is observable and is spot volatility or integrated volatility . here we could use an estimator of , say , based on high - frequency data .the estimators and may be formed as described in section [ s : est ] but using in place of .a second approach is via _ filtering and smoothing ._ fully nonparametric models with latent variables are not , in general , well identified .nevertheless , with sufficient structure ( e.g. by specifying a semiparametric state - space representation for the unobservable components in terms of an auxiliary vector of observables ) it may be feasible to use a filter and smoother to construct an estimate of the distribution of the latent state variables given the time series of observables .suppose that the state vector may be partitioned as where is observable and is latent . to simplify presentation, we assume that the joint density of given factorizes as : where we assume that is unknown and is known up to a parameter .we further assume that the econometrician observes a time series of data on an auxiliary vector and that the joint density for given factorizes as : with known up to some parameter .note that ( [ e : latent : z ] ) describes a state - space representation in which is the state equation and is the observation equation .finally , the sequence is assumed to be ( jointly ) strictly stationary and ergodic .the econometrician observes the vector at dates . to estimate could proceed as follows : 1 . calculate the maximum likelihood estimate of using the time series .2 . 3 .partition into blocks of length .that is , for each , let } = ( z_{jq},\ldots , z_{(j+1)q-1}) ] of given } ] denotes expectation under the posterior } ; \hat \theta,\pi) ] . but notice that the term on the right - hand side is a random matrix which is a function of } ] is strictly stationary and ergodic , its average ought to approach as increases .when is unknown but the mle is consistent , the same intuition goes through except it would also have to be established that : } \big ] - { { \mathbb{e}}}_{\theta_0,\pi } \big [ b_2^{k_2}(\zeta_t)b_2^{k_2}(\zeta_t ) ' \big|\vec z_{[j ] } \big]\ ] ] becomes negligible ( in an appropriate sense ) as . to estimate we proceed similarly : 1 . with and as before ,for each and each we compute the posterior density };\hat \theta,\pi) ] , and .2 . 
3 .when the function is known , the estimator of is : } \\ { \widehat{{\mathbf{m}}}}_{[j ] } & = \frac{1}{q-1 } \sum_{t = jq}^{(j+1)q-2 } \bigg ( \big ( b_1^{k_1}(y_t)b_1^{k_1}(y_{t+1 } ) ' \big ) \\ & \quad \quad \quad \quad \quad \otimes \bigg ( \int b_2^{k_2}(\zeta_t)b_2^{k_2}(\zeta_{t+1})'m(y_t,\zeta_t , y_{t+1},\zeta_{t+1 } ) f(\zeta_t;\zeta_{t+1}|\vec z_{[j]},\hat \theta,\pi ) { \mathrm{d } } \zeta_{t+1 } { \mathrm{d } } \zeta_t \bigg ) \bigg ) \end{aligned}\ ] ] when contains an estimated component we replace in the above display by .for any vector , define : or equivalently . for any matrix define : we also define the inner product weighted by , namely .the inner product and its norm are germane for studying convergence of the matrix estimators , as is isometrically isomorphic to .first note that : where the second line is by assumption [ a : asydist](a ) and the third line is by lemma [ lem : expansion ] and assumption [ a : asydist](b ) ( under the normalizations and ) . by identity ,we may write the first term on the right - hand side of display ( [ a : asydist : pf:1 ] ) as : where the second line is by assumption [ a : asydist](c ) . we verify the conditions of theorem [ t : asydist:1 ] .since the data are exponentially beta - mixing , , and has finite moment , lemma [ lem : beta:1 ] implies that and .both terms are under condition ( vi ) , verifying assumptions [ a : var ] and [ a : asydist](b ) . part ( a ) of assumption [ a : asydist ] is satisfied by conditions ( iii ) and ( vi ) and part ( d ) is satisfied by condition ( iv ) .finally , for part ( c ) , by stationarity and the triangle inequality we have : & \leq \sqrt n { \mathbb{e } } [ | \psi_{\rho , k}(x_t , x_{t+1 } ) - \psi_\rho(x_t , x_{t+1 } ) | ] \notag \\ & \leq \sqrt n \big ( | \rho_k - \rho| \times { \mathbb{e } } [ \phi ^ * ( x_t ) \phi(x_t ) ] + \rho_k \times { \mathbb{e}}\big [ | \phi^*(x_t)\phi(x_t ) - \phi^*_k(x_t ) \phi_k^{\phantom * } ( x_t)|\big ] \big ) \notag \\ & \quad + \sqrt n \big ( { \mathbb{e}}\big [ m(x_t , x_{t+1 } ) | \phi^*(x_t)\phi(x_{t+1 } ) - \phi^*_k(x_t ) \phi_k^{\phantom * } ( x_{t+1})|\big ] \big)\ , .\label{e : corrnorm:0}\end{aligned}\ ] ] for the leading term , lemma [ lem : bias](a ) and the conditions on and yield : & = \sqrt n \times o(\delta_k ) \times 1 \notag \\ & = \sqrt n \times k^{-\omega } \times 1 = o(1 ) \label{e : corrnorm:1 } \,.\end{aligned}\ ] ] for the second term , the cauchy - schwarz inequality , lemma [ lem : bias](b)(c ) and the conditions on , and yield : & \leq \sqrt n \times \rho_k \times \big ( \| \phi^ * - \phi_k^*\| \|\phi\| + \|\phi_k^*\| \| \phi - \phi_k\| \big ) \notag \\ & = \sqrt n \times o(1 ) \times o ( \delta_k^ * + \delta_k^{\phantom * } ) \notag \\ & = \sqrt n \times o(1 ) \times o ( k^{-\omega } ) = o(1 ) \label{e : corrnorm:2}\,.\end{aligned}\ ] ] we split the final term in ( [ e : corrnorm:0 ] ) in two , to obtain : \big ) \\ & \quad \quad \leq \sqrtn \big ( { \mathbb{e } } [ m(x_t , x_{t+1 } ) | \phi(x_{t+1 } ) - \phi_k(x_{t+1})| \phi^*(x_{t+1 } ) ] \\ & \quad \quad \quad + { \mathbb{e } } [ m(x_t , x_{t+1 } ) | \phi^*(x_t ) - \phi_k^*(x_t)| |\phi_k(x_{t+1})| ] \big ) \,.\end{aligned}\ ] ] using the cauchy - schwarz inequality and lemma [ lem : bias](b)(c ) as above , we may deduce that these terms are and provided that < \infty ] .this latter condition holds under the moment conditions on and because : \\ & \leq 2{\mathbb{e}}[m(x_t , x_{t+1})^2 \phi(x_{t+1})^2 ] + 2{\mathbb{e}}[m(x_t , x_{t+1})^2 ( \phi(x_{t+1 } ) - \phi_k(x_{t+1}))^2 ] 
\\ & \leq 2{\mathbb{e}}[m(x_t , x_{t+1})^2 \phi(x_{t+1})^2 ] + 2{\mathbb{e}}[m(x_t , x_{t+1})^s]^{2/s } \times \|\phi - \phi_k\|_{\frac{2s}{s-2 } } ^2\end{aligned}\ ] ] and . therefore : \big ) = o ( \sqrt n \times k^{-\omega } ) = o(1 ) \label{e : corrnorm:3}\ ] ] under the conditions on . substituting ( [ e : corrnorm:1 ] ) , ( [ e : corrnorm:2 ] ) , and ( [ e : corrnorm:3 ] ) , into ( [ e : corrnorm:0 ] ) and using markov s inequality yields , as required . as in the proof of theorem [ t : asydist:1 ] , we have : by adding and subtracting terms and using assumption [ a : asydist](c ) , we obtain : let , , , , and .we decompose the second term on the right - hand side of ( [ e:2a : pf1 ] ) as : for term , we know that wpa1 .whenever we may take a mean value expansion to obtain : wpa1 , where is in the segment between and . since , it suffices to show that : let be a compact neighborhood of .assumption [ a : parametric](c)(d ) implies that : < \infty\ ] ] and so by dominated convergence we may deduce that the -valued function : \ ] ] is continuous at .it is also straightforward to show that : \right| = o_p(1)\ ] ] by ( [ e : moments ] ) , the ergodic theorem , and compactness of .therefore , . for , using the fact that and root- consistency of , it follows that for any we have : \end{aligned}\ ] ] where the second - last line is by the union bound and cauchy - schwarz , and the final line is by markov s inequality and assumption [ a : parametric](c ) and where the term is independent of .therefore and hence : where the second line is by markov s inequality the cauchy - schwarz inequality and the third is by lemma [ lem : bias](b)(c ) .it follows by assumptions [ a : asydist](a ) and [ a : parametric](d ) that .substituting into ( [ e:2a : pf1 ] ) and using assumption [ a : parametric](a ) : } ' \bigg ( \begin{array}{c } \psi_{\rho , t } \\\psi_{\alpha , t } \end{array }\bigg ) + o_p(1)\ ] ] and the result follows by assumption [ a : parametric](b ) . by arguments to the proof of theorem [ t : asydist:1 ] and [ t : asydist:2a ] and using assumption [ a : nonpara](a)(b ) , we may deduce : + t_{1,n } + t_{2,n } + o_p(1)\end{aligned}\ ] ] where : the result will follow by assumption [ a : nonpara](b)(c ) provided we can show that the terms and are both .similar arguments to the proof of corollary [ c : inf:1 ] imply that under assumption [ a : nonpara](e ) . for term ,notice that : where denotes the centered empirical process on .let denote the norm defined on p. 400 of .display ( a ) in lemma 2 of shows that , under exponential beta - mixing ( condition ( d ) ) , the norm is dominated by the norm for any .it follows from condition ( e ) that for some .therefore , is uniformly bounded under and is well defined on since .it follows by condition ( b ) that . appropriately modifying the arguments of lemma 19.24 in (i.e. replacing the norm by the norm induced by which is the appropriate norm for the weakly dependent case ) give , as required .take , , and as in lemma [ lem : fpiter ] .fix and consider . as , choose sufficiently small that .( is the neighborhood in the statement of the proposition . )take any .for any such we can write where . write . by homogeneity of : for each , where ( note positivity of ensures that for each and each ) .it follows from lemma [ lem : fpiter ] that : as required .[ lem : fpiter ] let the conditions of proposition [ p : nl ] hold .then : there exists finite positive constants with and a neighborhood of such that : for all .fix some constant such that . 
by the gelfand formula, there exists such that .frchet differentiability of at together with the chain rule for frchet derivatives implies that : hence : we may therefore choose such that for all where .then for any and any we have : it is straightforward to show via induction that boundedness of and homogeneity of degree of together imply : for any . the result for is stated in the text .for , let , , and be as in lemma [ lem : fpiter ] .suppose is a fixed point of belonging to .then by lemma [ lem : fpiter ] : hence .first note that is a simple isolated eigenvalue of under assumption [ a : id ] .therefore , there exists an such that for all .let denote a positively oriented circle in centered at with radius .let denote the resolvent of evaluated at , where is the identity operator .note that the number , given by : because is a holomorphic function on and is compact .by assumption [ a : bias ] , there exists such that for all .therefore , for all the inequality : holds .it follows by theorem iv.3.18 on p. 214 of that whenever : ( i ) the operator has precisely one eigenvalue inside and is simple ; ( ii ) ; and ( iii ) lies on the exterior of .note that must be real whenever because complex eigenvalues come in conjugate pairs .thus , if were complex - valued then its conjugate would also be in , which would contradict the fact that is the unique eigenvalue of on the interior of .step 2 : any nonzero eigenvalue of is also a nonzero eigenvalue of with the same multiplicity .so by step 1 we know that the largest eigenvalue of is positive and simple whenever .recall that where solves the left - eigenvector problem in ( [ e : gev ] ) . here , is the eigenfunction corresponding to of the adjoint of with respect to the space .we now introduce the term , which is the eigenfunction corresponding to of the adjoint of with respect to the space ( it follows from lemma [ lem : exist ] that and are uniquely defined ( up to scale ) for all sufficiently large under assumptions [ a : id ] and [ a : bias ] ) . that is : & = \rho_k { \mathbb{e}}[\phi_k^+(x ) \psi(x ) ] \\ { \mathbb{e}}[\phi_k^*(x ) \pi_k { \mathbb{m } } \psi_k(x ) ] & = \rho_k { \mathbb{e}}[\phi_k^*(x ) \psi_k(x)]\end{aligned}\ ] ] for all and .notice that does not necessarily belong to whereas does belong to .however , it follows from the preceding display that .the proof of lemma [ lem : bias ] shows that and converge to at the same rate .step 1 : proof of part ( b ) .we use similar arguments to the proof of proposition 4.2 of .take ( where is from lemma [ lem : exist ] ) and let denote the spectral projection of corresponding to .by lemma 6.4 on p. 279 of , we have : where is defined in the proof of lemma [ lem : exist ] .it follows that : moreover , for each we have : where the first inequality is by theorem iv.3.17 on p. 214 of and the second is by display ( [ e : crdef ] ) .this inequality holds uniformly for . substituting ( [ e : resbd ] ) into ( [ e : projbd ] ) : where the final equality is by definition of in display ( [ e : deltas ] ) .since is simple , the spectral projection is of the form : under the normalizations and . herewe are imposing the normalizations and and and so we scale the definition of the projection accordingly . ] under the normalizations and . by the proof of proposition 4.2 of , under the sign normalization we have: whence by ( [ e : resbd:00 ] ) , proving ( b ) .step 2 : proof of part ( a ) .here we use similar arguments to the proof of corollary 4.3 of . 
by the triangle inequality : because by part ( b ) , by definition of , and because is bounded and is a ( weak ) contraction on .first observe that holds for all ( where denotes the conjugate of ) because ( * ? ? ?* theorem 6.22 , p. 184 ) .similarly , holds for all whenever . by similar arguments to the proof of part ( b ) : where the final equality is by definition of ( see display ( [ e : deltas ] ) ) .now we use the fact that is of the form : under the normalizations and . by similar arguments to the proof of part ( b ) , we have : under the sign normalization .it follows by ( [ e : projbdstar ] ) that .step 4 : proof that .recall that . then by the triangle inequality and the fact that is a weak contraction, we have : where the final equality is by definition of ( see display ( [ e : deltas ] ) ) and step 3 .if is invertible we have : part ( b ) follows by the triangle inequality , noting that whenever . substituting into the preceding display yields : as required .step 1 : let denote the restriction of to ( i.e. is a linear operator on the space rather than the full space ) .we show that : holds for all .fix any such .then for any we have where is given by .similarly , for any we have where is given by .therefore , for any we must have , i.e. , holds for all . therefore : step 2 : we now show that has a unique eigenvalue inside wpa1 , where is from the proof of lemma [ lem : exist ] . as the nonzero eigenvalues of , , and are the same , it follows from the proof of lemma [ lem : exist ] that for all the curve separates from and from . taking from lemma [ lem :exist ] , by step 1 and display ( [ e : resbd ] ) , we have : recall that is isomorphic to on .let denote the resolvent of on the space . by isometry and display ( [ e : resbd: proj ] ) : by lemma [ lem : matcgce](b ) , assumption [ a : var ] , and boundedness of , we have : hence : holds wpa1 .the proof of lemma [ lem : exist ] shows that has a unique eigenvalue inside whenever and that is a simple eigenvalue of .therefore , whenever ( [ e : kato ] ) holds we have that : ( i ) has precisely one simple eigenvalue inside ; ( ii ) ; and ( iii ) lies on the exterior of ( see theorem iv.3.18 on p. 214 of ) .again , must be real and simple whenever ( [ e : kato ] ) holds ( because complex eigenvalues come in conjugate pairs ) hence the corresponding left- and right - eigenvectors and are unique .step 1 : proof of part ( b ) .let denote the spectral projection of corresponding to the eigenvalue . by similar arguments to the proof of lemma [ lem : bias](b ) , andso : note that because is centered at and by lemma [ lem : bias ] .further , whenever ( [ e : kato ] ) holds , for each we have : which is by assumption [ a : var ] and display ( [ e : resbd : k ] ) .substituting into ( [ e : hatbd ] ) and using the definition of ( cf . display ( [ e : etas ] ) ) yields : let .observe that is given by : under the normalizations and . by similar arguments to ( [ e : oblique ] ) , we obtain : under the sign normalization .it follows that by ( [ e : ckhatbd ] ) .step 2 : proof of part ( a ) .similar arguments to the proof of lemma [ lem : bias](a ) yield : the second term on the right - hand side of ( [ e : rhatineq ] ) is ( cf . display ( [ e : etas ] ) ) . for the first term, assumption [ a : var ] implies that . 
but .therefore , and the result follows by ( [ e : ckineq ] ) .step 3 : proof of part ( c ) .identical arguments to the proof of part ( b ) yield : where : and is from display ( [ e : etas ] ) .therefore , under the sign normalization we have : whence by ( [ e : projbdstarhat ] ) .we verify the conditions of theorem 19.1 in .we verify the conditions noting that , in our notation , is a neighborhood of , , , and ( the restriction of to ) . is continuously frchet differentiable on a neighborhood of by assumption [ a : fp : exist](c ) . as a consequence , is continuously frchet differentiable on .assumption [ a : fp : exist](b)(c ) implies that is continuously invertible in .also notice that by assumption [ a : fp : bias](b ) ( this verifies condition ( 19.8 ) in ) .moreover : by continuity of .moreover , by continuous frchet differentiability of and assumption [ a : fp : bias](a ) .this verifies conditions ( 19.9 ) in , and their condition ( 19.10 ) is trivially satisfied . fix any . by continuous frchet differentiability , there exists such that whenever . we may also choose such that and for all .whenever and we have : and hence : as required .theorem 19.1 in then ensures ( 1 ) existence of and uniqueness as a fixed point of on a neighborhood of for all sufficiently large and ( 2 ) the upper bound : ( see equations ( 19.12)(19.13 ) on p. 295 ) .it follows by continuity of and and assumption [ a : fp : bias](b ) that .we first prove part ( c ) by standard arguments ( see p. 310 in ) .take from lemma [ lem : fp : exist ] .we then have : frchet differentiability of at ( assumption [ a : fp : exist](c ) ) implies : as and ( assumptions [ a : fp : exist](c ) and [ a : fp : bias](a ) ) , a similar argument to lemma [ lem : exist ] ensures that there exists such that for all sufficiently large . therefore is invertible and holds for all sufficiently large . substituting ( [ e : bias : fp:1 ] ) and( [ e : bias : fp:2 ] ) into ( [ e : bias : fp:0 ] ) yields : hence .part ( b ) then follows from the inequality : finally , part ( a ) follows from the fact that and continuous differentiability of at each . by definition of , , and , we have as holds wpa1 ( by the first part of assumption [ a : fp : var ] ), we have : wpa1 , hence : wpa1 .the result follows by assumption [ a : fp : var ] .lemma [ lem : fp : exist ] implies that has a unique fixed point in and no fixed point on for all .it also follows from the proof of lemma [ lem : fp : exist ] that is invertible for all ( increasing if necessary ) .let denote the rotation of on .then for all we have and hence by propositions ( 3)(5 ) on pp .299 - 300 of .we now show that the inequality : holds wpa1 .the left - hand side is by lemma [ lem : fp : matcgce ] and assumption [ a : fp : var ] .we claim that .suppose the claim is false .then there exists a subsequence with such that .since is bounded and is compact , there exists a convergent subsequence .let . then : as , where the first term vanishes by definition of , the second vanishes by definition of , and the third vanishes because is dense in the range of .it follows that .finally , by continuity of and definition of : as , hence is a fixed point of . butthis contradicts the fact that is the unique fixed point of in .this proves the claim .it follows by proposition ( 2 ) on p. 299 of that whenever inequality ( [ e : fpindex : ineq ] ) holds , we have : by isomorphism : hence holds wpa1 . therefore , has at least one fixed point wpa1 .clearly , for we must have . 
repeatingthe argument for any implies that wpa1 .therefore .we first prove part ( c ) . by lemma [ lem : fphat : exist ] , is well defined ( wpa1 ) and . recall that and .then wpa1 , we have : so by lemma [ lem : fp : matcgce](b ) and the triangle inequality : notice that : moreover , is continuously invertible by assumption [ a : fp : exist](b ) .therefore : holds for all sufficiently large .also notice that : where the first inequality is because is a ( weak ) contraction on , the second is by the triangle inequality , and the final line is by frechet differentiability of at ( first and third terms ) and continuity of at ( second term ) . substituting ( [ e : var : fp:1 ] ) and ( [ e : var : fp:2 ] ) into ( [ e : var : fp:0 ] ) and rearranging ,we obtain : as required .parts ( a ) and ( b ) follow by similar arguments to the proof of lemma [ lem : bias : fp ] . for part ( a ) we use a truncation argument .let be a sequence of positive constants to be defined subsequently , and write : where , with and denoting the indicator function , we have : \\\xi_{2,t , n } & = & n^{-1 } \tilde b^k(x_t ) m(x_t , x_{t+1 } ) \tilde b^k(x_{t+1 } ) ' { 1\!\mathrm{l}}\ { \|\tilde b^k(x_t ) m(x_t , x_{t+1 } ) \tilde b^k(x_{t+1})'\| > t_n\ } \\ & & - { \mathbb{e}}[n^{-1 } \tilde b^k(x_t ) m(x_t , x_{t+1 } ) \tilde b^k(x_{t+1 } ) ' { 1\!\mathrm{l}}\ { \|\tilde b^k(x_t ) m(x_t , x_{t+1 } ) \tilde b^k(x_{t+1})'\| > t_n\ } ] \,.\end{aligned}\ ] ] note = 0 ] ( and similarly for ) , we obtain : ^{1/q } \leq ( \xi_k^{q-2 } { \mathbb{e}}[(u'\tilde b^k(x_t))^2])^{1/q } = \xi_k^{1 - 2/q}\ ] ] and hence : | \leq \frac{\xi_k^{2 + 4/r } } { n^2 } { \mathbb{e}}[|m(x_0,x_1)|^r]^{2/r}\,.\ ] ] as this bound holds uniformly for , it follows from the variational characterization of the operator norm that : \| = \sup_{u , v \in s^{k-1}}|u'{\mathbb{e } } [ \xi_{1,t , n}^{\phantom \prime } \xi_{1,s , n } ' ] v| = o(\xi_k^{2 + 4/r}/n^2)\,.\ ] ] this bound holds uniformly for , and also holds for \| ] .this proves .the result for follows similarly . for part ( a ) ,let be an orthonormal basis for .using the fact that the frobenius norm dominates the norm , we have : & \leq & { \mathbb{e } } [ \|{\widehat{{\mathbf{m}}}}^o - { \mathbf{m}}^o\|^2_f ] \\ & = & \sum_{l=1}^k { \mathbb{e } } [ \|({\widehat{{\mathbf{m}}}}^o - { \mathbf{m}}^o)u_l\|^2 ] \\ & \leq & \frac{ck \xi_k^{(2r+4)/r}}{n } { \mathbb{e}}[|m(x_0,x_1)|^r]^{2/r } \end{aligned}\ ] ] by similar arguments to ( [ e - mhatvec ] ) .the result follows by chebyshev s inequality .the convergence rate of is as in lemma [ lem : beta:1 ] .first write : lemma [ lem : m : beta](a ) yields . for the convergence rate for the remaining term , condition ( a )implies that wpa1 . a mean - value expansion ( using conditions ( a ) and ( b ) ) then yields : wpa1 , for in the segment between and .let .therefore , wpa1 we have : where the first line is because and the second and third lines are by several applications of the cauchy - schwarz inequality .finally , notice that by lemma [ lem : g : beta](a ) and condition ( c ) , and : by the ergodic theorem and condition ( b ) .the bounds for and follow by lemma [ lem : matcgce](a ) .the convergence rate for is from lemma [ lem : beta:1 ] . 
to establish the rate for , it suffices to bound : let and let be a sequence of positive constants to be defined .define the functions : then by the triangle inequality : \right\| \\ & + \sup_{\alpha \in { \mathcal{a } } } \left\| \frac{1}{n } \sum_{t=0}^{n-1 } \tilde b^k(x_t ) h^{tail}_\alpha(x_t , x_{t+1 } ) \tilde b^{k}(x_{t+1})\right\| \\ & + \sup_{\alpha \in { \mathcal{a } } } \left\| { \mathbb{e } } [ \tilde b^k(x_t ) h^{tail}_\alpha(x_t , x_{t+1 } ) \tilde b^{k}(x_{t+1 } ) ] \right\| + \left\| { \mathbb{e } } [ \tilde b^k(x_t ) h_{\hat \alpha}(x_t , x_{t+1 } ) \tilde b^{k}(x_{t+1 } ) ] \right\| \\= : & \ , { \widehat{t}}_1 + { \widehat{t}}_2 + { \widehat{t}}_3 + { \widehat{t}}_4 \,.\end{aligned}\ ] ] to control , consider the class of functions : where is the unit sphere in .each function in is bounded by .moreover : where is the centered empirical process on . by theorem 2 of : = o \left ( \varphi(\sigma_{n , k } ) + \frac{t_n q \varphi^2(\sigma_{n , k})}{\sigma_{n , k}^2 \sqrt n } + \sqrt nt_n \beta_q \right)\ ] ] where is a positive integer , for the norm is defined on p. 400 of , and : }(u,{\mathcal{h}}_{n , k},\|\cdot\|_{2,\beta } ) } \ , { \mathrm{d } } u \,.\ ] ] exponential -mixing and display ( a ) in lemma 2 of ( with ) imply : for some constant that depends only on the -mixing coefficients . therefore : where is finite by conditions ( a ) and ( b ) .take .define and .each function in is of the form : for we have : where : for any .let . by theorem 2.7.11 of , we have : }(u,{\mathcal{b}}_k^*,\|\cdot\|_{q } ) \leq n\big ( u/(2(k/\xi_k^2)^{1/q}),s^{k-1},\|\cdot\| \big ) \leq \bigg ( \frac{4 ( k/\xi_k^2)^{1/q } } { u } + 1\bigg)^k\ ] ] since the -covering number for the unit ball in bounded above by .it follows by lemma 9.25(ii ) in that : }(3u,{\mathcal{h}}_{n , k}^*,\|\cdot\|_{q } ) & \leq \bigg ( \frac{4 ( k/\xi_k^2)^{1/q } } { u } + 1\bigg)^{2k } n_{[\,\,]}(u,{\mathcal{h}}_n^*,\|\cdot\|_{q})\ , .\label{e : bracket : euclid}\end{aligned}\ ] ] let ] is a -bracket for the norm for , because : by hlder s inequality . 
taking in ( [ e : bracket : euclid ] ) and using the fact that truncation of does nt increase its bracketing entropy , we obtain : } ( u , { \mathcal{h}}_{n , k } , \|\cdot\|_{2v } ) & \leq n_{[\,\ , ] } \big ( \frac{u}{\xi_k^2 \| e \|_{4s } } , { \mathcal{h}}_{n , k}^ * , \|\cdot\|_{\frac{4vs}{2s - v } } \big ) \notag \\ & \leq \big ( \frac{12 \|e\|_{4s } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs}}}{u } + 1\big)^{2k } n_{[\,\,]}\big ( \frac{u}{3 \xi_k^2 \|e\|_{4s}},{\mathcal{m}}^*,\|\cdot\|_{\frac{4vs}{2s - v}}\big ) \ , .\label{e : entropy : hnk}\end{aligned}\ ] ] finally , by displays ( [ e : beta : norm ] ) and ( [ e : entropy : hnk ] ) and condition ( b ) : }(u,{\mathcal{h}}_{n , k},\|\cdot\|_{2,\beta } ) } \ , { \mathrm{d } } u \\ & \leq \int_0^\sigma \sqrt{\logn_{[\,\,]}(u / c,{\mathcal{h}}_{n , k},\|\cdot\|_{2v } ) } \ , { \mathrm{d } } u \\ & \leq \int_0^\sigma \sqrt{2 k \log \big(1 + 12 c \|e\|_{4v } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs } } /u \big ) } \ , { \mathrm{d } } u + { \mathrm{const}}^{1/2 } ( 3c \xi_k^2 \|e\|_{4s})^\zeta \frac{\sigma^{1-\zeta}}{1-\zeta } \\ & \leq 12\sqrt{2k } c \|e\|_{4s } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs } } \int_0^{\sigma/(12c \|e\|_{4s } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs } } ) } \sqrt { \log ( 1 + 1/u ) } \ , { \mathrm{d } } u \\ & \quad + { \mathrm{const}}^{1/2 } ( 3 \xi_k^2 \|e\|_{4s})^\zeta \frac{\sigma^{1-\zeta}}{1-\zeta } \\ & \leq 24 \sqrt{2k } c \|e\|_{4s } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs } } \bigg ( \frac{\sigma}{12c \|e\|_{4s } \xi_k^{2 } ( k/\xi_k^2)^{\frac{2s - v}{4vs } } } \vee 1 \bigg ) \\ & \quad + { \mathrm{const}}^{1/2 } ( 3 \xi_k^2 \|e\|_{4s})^\zeta \frac{\sigma^{1-\zeta}}{1-\zeta } \,.\end{aligned}\ ] ] using the fact that and , we obtain . substituting into ( [ e : dmr : bd ] ) and using markov s inequality : similar arguments to the proof of lemma [ lem : m : beta ] yield and .we choose so that : which makes ( since ) . we will also choose for sufficiently large , so that .this also ensures that the term .therefore , , , and are all . for the remaining term , by conditions ( c ) and ( d ) of the lemma we may deduce : + o ( \|\hat \alpha - \alpha_0\|_{{\mathcal{a}}}^2 ) \\ & = o_p(n^{-1/2 } ) + o ( \|\hat \alpha - \alpha_0\|_{{\mathcal{a}}}^2 ) = o_p(n^{-1/2 } ) \,.\end{aligned}\ ] ] combining the above bounds for yields . the bounds for and follow by lemma [ lem : matcgce](a ) .the convergence rate for is from lemma 2.2 of . for ,let be a sequence of positive constants to be defined. also define : we then have : \right\| \\ & \quad + \sup_{v : \| v\| \leq c } \left\| \frac{1}{n } \sum_{t=0}^{n-1 } \tilde b^k(x_t ) g^{tail}_{t+1 } |\tilde b^{k}(x_{t+1})'v|^\beta \right\| \\ & \quad + \sup_{v : \| v\| \leq c } \left\| { \mathbb{e } } [ \tilde b^k(x_t ) g^{tail}_{t+1 } |\tilde b^{k}(x_{t+1})'v|^\beta ] \right\| \quad = : \quad { \widehat{t}}_1 + { \widehat{t}}_2 + { \widehat{t}}_3 \,.\end{aligned}\ ] ] let denote the element of . also define observe that where is the centered empirical process on . by construction , each is uniformly bounded by .theorem 2 of gives the bound : = o \left ( \varphi(\sigma_{n , k } ) + \frac{c^\beta t_n q \varphi^2(\sigma_{n , k})}{\sigma_{n , k}^2 \sqrt n } + \sqrt nc^\beta t_n \beta_q \right)\ ] ] where is a positive integer , where the norm is defined on p. 
400 of , and : }(u,{\mathcal{h}}_{n , k},\|\cdot\|_{2,\beta } ) } \ , { \mathrm{d } } u \,.\ ] ] exponential -mixing and display ( a ) in lemma 2 of ( with ) , imply : for some constant that depends only on the -mixing coefficients .therefore : where is finite by condition ( b ) . set . to calculate the bracketing entropy integral , first fix and let be a -cover for and be a -cover for .for any and there exist and such that : where : therefore : }\big(u , { \mathcal{h}}_{n , k } , \|\,\cdot\,\|_{2s } \big ) \leq \big(\frac{2k\xi_k^{1+\beta } } { u } + 1\big)^k \big(\frac{2(ck\xi_k^{1+\beta})^{1/\beta } } { u^{1/\beta } } + 1\big)^k\ ] ] since the -covering number for the unit ball in is bounded above by .it follows by ( [ e : beta : norm : fp ] ) and the above display that : }(u,{\mathcal{h}}_{n , k},\|\cdot\|_{2,\beta } ) } \ , { \mathrm{d } } u \\ & \leq \int_0^\sigma \sqrt{\logn_{[\,\,]}(u / c,{\mathcal{h}}_{n , k},\|\cdot\|_{2s } ) } \ , { \mathrm{d } } u \\ & \leq \int_0^\sigma \sqrt{k \log \big(1 + 2 c k \xi_k^{1+\beta}/u\big ) } \ , { \mathrm{d } } u + \int_0^\sigma \sqrt{k \log \big(1 + 2 ( c ck \xi_k^{1+\beta}/u)^{1/\beta } \big ) } \ , { \mathrm{d } } u \\ & \leq 4 c k \sqrt k \xi_k^{1+\beta } \bigg ( \frac{\sigma}{4ck\xi_k^{1+\beta } } \vee 1 \bigg ) \\ & \quad \quad + 2^\beta c c k \sqrt k \xi_k^{1+\beta } \int_0^{\sigma/(2^\beta c c k \xi_k^{1+\beta } ) } \sqrt{\log(1+u^{-1/\beta } ) } \ , { \mathrm{d } } u \\ & \leq 4 c k \sqrt k \xi_k^{1+\beta } \bigg ( \frac{\sigma}{4ck\xi_k^{1+\beta } } \vee 1 \bigg ) + 2^\beta c c k \sqrt k \xi_k^{1+\beta } \frac{\big ( \sigma/(2^\beta c c k \xi_k^{1+\beta } ) \big)^{1-\frac{1}{2\beta}}}{1-\frac{1}{2\beta } } \,.\end{aligned}\ ] ] where the final line is because . since , we obtain . substituting into ( [ e : dmr : bd ] ) and using markov s inequality : by similar arguments to the proof of lemma [ lem : m : beta ] , we have and .we choose so that : which makes ( since ) . we will also choose for sufficiently large , so that .this also ensures that the term .therefore , , , and are all .step 1 : normalize and so that .we first show that : notice that dividing by establishes the result under the normalization in the statement of the lemma .recall the definitions of in display ( [ e : hat : pk : def ] ) . also define : whenever ( from lemma [ lem : exist ] ) and is positive and simple ( which it is wpa1 by lemma [ lem : exist : hat ] ) then and are well defined and we have : by linearity of trace .observe that : but we have : by cauchy - schwarz , display ( [ e : ckhatbd ] ) , and the normalization .it follows from ( [ e : pgbd0 ] ) , ( [ e : pgbd1 ] ) , ( [ e : pgbd2 ] ) , and lemma [ lem : var](a ) that : but notice that : whenever ( * ? ? ?* expression ( 6.19 ) on p. 178 ) where is as in the proof of lemma [ lem : exist ] .finally , by display ( [ e : resbd : k ] ) we have : this completes the proof of step 1 .step 2 : here we show that by lemma [ lem : matcgce](a ) we have : which completes the proof of step 2 .the result follows by combining steps 1 and 2 and using ( which follows from lemma [ lem : matcgce](b ) and definition of ) . 
first note that : \right ) \\& = \frac{1}{\sqrtn } \sum_{t=0}^{n-1 } ( \rho^{-1 } \psi_{\rho , t } - \psi_{lm , t } ) + o_p(1)\end{aligned}\ ] ] where the second line is by display ( [ e : ale:1 ] ) and a delta - method type argument .the result now follows from the joint convergence in the statement of the proposition .similar arguments to the proof of proposition [ p : asydist : l:1 ] yield : by continuous differentiability of on a neighborhood of and the dominance condition in the statement of the proposition , we may deduce : where \,.\ ] ] substituting into the expansion for and using assumption [ a : parametric](a ) yields : the result follows by the joint clt assumed in the statement of the proposition .we prove part ( i ) first .let denote the stationary distribution of .under assumption [ a : eff ] and the maintained assumption of strict stationarity of , the tangent space is < \infty ] almost surely endowed with the norm ( see pp . 878880 of which is for a real - valued markov process , but trivially extends to vector - valued markov processes ) .take any bounded and consider the one - dimensional parametric model which we identify with the collection of transition probabilities where each transition probability is dominated by and is given by : where : for each we define the linear operator on by : observe that : which is a bounded linear operator on ( since is a bounded linear operator on and is bounded ) . by taylor s theorem and boundedness of can deduce that and therefore : where the term is uniform in .it then follows by this and boundedness of that .similar arguments to the proof of lemma [ lem : exist ] imply that there exists and such that the largest eigenvalue of is simple and lies in the interval for each .it follows from a perturbation expansion of about ( see , for example , equation ( 3.6 ) on p. 89 of ) that : + o(\tau^2 ) \notag \\ & = \tau \int m(x_t , x_{t+1})\phi(x_{t+1})\phi^*(x_t ) h(x_t , x_{t+1}){\mathrm{d}}q_2(x_t , x_{t+1 } ) + o(\tau^2 ) \label{e : eff : pf3}\end{aligned}\ ] ] under the normalization , where the second line is by ( [ e : eff : pf1 ] ) and ( [ e : eff : pf2 ] ) .expression ( [ e : eff : pf3 ] ) shows that the derivative of at is . since bounded functions are dense in , we have that is differentiable relative to the tangent set with derivative .the efficient influence function is the projection of onto , namely : = \psi_\rho(x_t , x_{t+1})\ ] ] because = \phi^*(x_{t } ) { \mathbb{m } } \phi(x_t ) = \rho \phi(x_t ) \phi^*(x_t) ] is the efficiency bound for .we now prove part ( ii ) . by linearity ,the efficient influence function of is : where is the efficient influence function for ] , as required .we first show that any positive eigenfunction of must have eigenvalue .suppose that there is some positive and scalar such that .then we obtain : with because and are positive , hence .a similar argument shows that any positive eigenfunction of must correspond to the eigenvalue .it remains to show that and are the unique eigenfunctions ( in ) of and with eigenvalue .we do this in the following three steps .let .we first show that if then the function given by also is in . in the second stepwe show that implies or .finally , in the third step we show that . for the first step , first observe that because by assumption [ a : id:1](b ) .then by assumption [ a : id:1](c ) , for any we have and so ( almost everywhere ) . on the other hand , which implies that and hence . 
for the second step , take any that is not identically zero .suppose that on a set of positive measure ( otherwise we can take in place of ) .we will prove by contradiction that this implies .assume not , i.e. on a set of positive measure .then ( almost everywhere ) and .but by step 1 we also have that . then for any we have ( almost everywhere ) by assumption [ a : id:1](c ) .therefore , ( almost everywhere ) .this contradicts the fact that on a set of positive measure .a similar proof shows that if holds on a set of positive measure then . for the third step we use an argument based on the archimedean axiom ( see , e.g. , p. 66 of ) .take any positive and define the sets and ( where the inequalities are understood to hold almost everywhere ) .it is easy to see that and are convex and closed .we also have \subseteq s_+$ ] so is nonempty .suppose is empty .then on a set of positive measure for all .by step 2 we therefore have ( almost everywhere ) .but then because is a lattice we must have for all which is impossible because . therefore is nonempty . finally , we show that .take any .clearly .by claim 2 we know that either : ( almost everywhere ) which implies or ( almost everywhere ) which implies . therefore .the archimedean axiom implies that the intersection must be nonempty .therefore ( the intersection must be a singleton else and with ) and so ( almost everywhere ) .this completes the proof of the third step .assumption [ a : id:0](a ) implies that ( see proposition iv.9.8 and theorem v.6.5 of ) .the result now follows by theorems 6 and 7 of with . that is isolated follows from the discussion on p. 1030 of .consider the operator with .proposition [ p : a : exist ] implies that .further , since is power compact it has discrete spectrum ( * ? ? ?* theorem 6 , p. 579 ) .we therefore have and hence where and , and commute ( see , e.g. , p. 331 of or pp .1034 - 1035 of ) . since these operators commute , a simple inductive argument yields : for each .by the gelfand formula , there exists such that : let be the maximal subset of for which . if this subsequence is finite then the proof is complete .if this subsequence is infinite , then by expression ( [ gelfand ] ) , therefore , there exists a finite positive constant such that for all large enough , we have : and hence : as required .
stochastic discount factor (sdf) processes in dynamic economies admit a permanent-transitory decomposition in which the permanent component characterizes pricing over long investment horizons. this paper introduces econometric methods to extract the permanent and transitory components of the sdf process. we show how to estimate the solution to the perron-frobenius eigenfunction problem using data on the markov state and the sdf process. estimating the eigenvalue and eigenfunction directly allows one to (1) construct empirically the time series of the permanent and transitory components of the sdf process and (2) estimate the yield and the change of measure which characterize pricing over long investment horizons. the methodology is nonparametric, i.e., it does not impose any tight parametric restrictions on the dynamics of the state variables and the sdf process. we derive the large-sample properties of the estimators and illustrate favorable performance in simulations. the methodology is applied to study an economy where the representative agent is endowed with recursive preferences, allowing for general (nonlinear) consumption and earnings growth dynamics. *keywords:* nonparametric estimation, sieve estimation, stochastic discount factor, permanent-transitory decomposition, nonparametric value function estimation. *jel codes:* c13, c14, c58.
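As a concrete illustration of the sieve approach summarized above, the sketch below approximates the eigenfunction with a finite set of basis functions, builds the sample matrices from observations of the Markov state and of the SDF growth factors, and solves the resulting generalized eigenvalue problem for the principal eigenvalue. All names (`pf_sieve_estimate`, `basis`), the simulated inputs, and the quadratic sieve are illustrative assumptions; this is not the paper's estimator code, and it ignores the normalizations and regularity conditions discussed in the proofs.

```python
import numpy as np

def pf_sieve_estimate(X, m, basis):
    """Sketch of a sieve estimator for the principal (Perron-Frobenius) eigenvalue
    and eigenfunction.  X : states X_0,...,X_n;  m : SDF growth factors m(X_t, X_{t+1});
    basis : maps a state to a length-K feature vector b(x)."""
    B = np.array([basis(x) for x in X])            # (n+1, K) basis evaluations
    B0, B1 = B[:-1], B[1:]                         # evaluated at X_t and X_{t+1}
    n = len(m)
    G = B0.T @ B0 / n                              # sample Gram matrix E_n[b(X_t) b(X_t)']
    M = (B0 * np.asarray(m)[:, None]).T @ B1 / n   # E_n[b(X_t) m(X_t, X_{t+1}) b(X_{t+1})']
    # Generalized eigenproblem M c = rho G c; keep the eigenvalue with the largest real part.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(G, M))
    k = int(np.argmax(eigvals.real))
    rho_hat = float(eigvals[k].real)
    c_hat = eigvecs[:, k].real
    return rho_hat, (lambda x: basis(x) @ c_hat)   # eigenvalue and eigenfunction (up to scale)

# Purely illustrative data and a quadratic polynomial sieve:
rng = np.random.default_rng(0)
X = rng.standard_normal(501)
m = np.exp(-0.01 + 0.1 * rng.standard_normal(500))
rho_hat, phi_hat = pf_sieve_estimate(X, m, lambda x: np.array([1.0, x, x ** 2]))
print(f"estimated principal eigenvalue: {rho_hat:.4f}")
```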
a common phenomenon of active galactic nuclei (agn), which presumably harbor supermassive black holes (rees 1984), is the strong variability which can be observed in x-ray lightcurves. these agn lightcurves seem to show featureless red noise, i.e. scale-free, divergent variability at low frequencies, often also described as flickering or fluctuation (lawrence et al. 1987). the term describes the power law distribution of the spectral power in the power spectrum, often denoted as power-law behavior. we present an alternative model to analyse the variability seen in the x-ray lightcurves of agn. the standard method of analyzing time series in the frequency domain is discussed briefly in section 2. the alternative is known as a linear state space model (lssm), based on the theory of autoregressive processes (scargle 1981, honerkamp 1993); the underlying process usually cannot be observed directly since the observational noise (i.e. detectors, particle background) overlays the process powering the agn. an lssm fit applied to the time series data yields the dynamical parameters of the underlying stochastic process. these parameters should be strongly correlated with the physical properties of the emission process. the corresponding lssm power spectrum exhibits both the decrease of power at medium frequencies and a limitation of spectral power at low frequencies. the detailed mathematical background of the lssm and the fit procedure are described in sections 3 and 4. finally, we present first results using this technique with exosat data from the seyfert galaxy ngc 5506 in section 5.

although measured astronomical data are time domain data, a commonly applied method works in the frequency domain by analyzing the power spectrum of the time series. as the observational window function is convolved with the true spectrum of the source, artefacts might be produced in the power spectrum, which make a proper interpretation more difficult (papadakis and lawrence 1995, priestley 1992). in most cases, the power spectra are fit by a power law function with an offset, with power-law slopes ranging from 0 to 2 and a mean of about 1.5 (lawrence and papadakis 1993). the offset is often denoted as the 'observational noise floor', which describes the random process comprising the observational errors, whereas the 'red noise' component is the signal of interest.
in the case of long agn observations , however ,a flattening at low frequencies occurs which can not be modelled by the -model ( mchardy 1988 ) .the -model is an ad hoc description of the measured periodogram , without any direct physical motivation .however , it is possible to generate time series with a -spectrum using self - organized criticality models simulating the mass flow within an accretion disc of the agn ( mineshige et al .such models produce a stationary time series that exhibits a -power spectrum by limiting the timescales occurring in the simulated accretion process .a -model without limited timescales would be stationary only if the power law slope is smaller than unity ( samorodnitsky and taqqu 1994 ) .the observed time series is composed by the superposition of single luminosity bursts .the slope of the -spectrum of data simulated in that way is about 1.8 , significantly higher than those measured from real data ( lawrence and papadakis 1993 ) .if the inclination of the accretion disk is brought in as an additional model parameter the slope can be diminished , but not in a way that leads to convincing results ( abramowicz et al .another point that contradicts this assumption is that there is no correlation between the spectral slope and the type of the seyfert galaxy ( green et al .this correlation should be present since the seyfert type is believed to be caused by the inclination of the line of sight ( netzer 1990 ) .the periodogram which is used to estimate the true source spectrum is difficult to interpret in the presence of non - equispaced sampling time series arising from real astronomical data ( deeter and boynton 1982 and references therein ) .the estimation of the -spectrum is hampered even in the absence of data gaps .this is due to the finite extent of the observed time series .therefore , the transfer function ( fourier transform of the sampling function ) is a sinc - function which will only recover the true spectrum if this is sufficiently flat ( deeter and boynton 1982 ; deeter 1984 ) . in the case of ` red noise 'spectra the sidebands of the transfer function will cause a spectal leakage to higher frequencies which will cause the spectra to appear less steep ( the spectral slope will be underestimated ) .even periodograms of white noise time series deviate from a perfectly flat distribution of frequencies as the periodogram is a -distibuted random variable with a standard deviation equal to the mean ( leahy et al . 1983 ) . thus the periodograms fluctuate and their variances are independent of the number of data points in the time series . due to the logarithmic frequency binning, agn periodograms will always show this strong fluctuation due to the low number of periodogram points averaged in the lowest frequency bins ( see fig.1 ) .fig.1 : a ) exosat me x - ray lightcurve of the quasar 3c273 ( jan .1986 ) , b ) corresponding periodogram .each dot represents the spectral power at its frequency , stepped with .the periodogram is binned logarithmically ( squares indicates a single point within the frequency bin ) .furthermore , additional modulations can be created in white noise periodograms if the time series consists of parts which slightly differ in their means and variances , respectively ( krolik 1992 ) . 
in the case of the exosat me x - ray lightcurvesthis effect is due to the swapping of detectors as each detector has its own statistical characteristics which can not be totally suppressed ( grandi et al .1992 ; tagliaferri et al .fig.1a shows a typical x - ray lightcurve which mainly consists of uninterrupted 11 ksec observation blocks before detectors are swapped .if the periodogram frequency corresponds to the observation block length , the calculated sum of fourier coefficients equals its expected white noise value of due to the constant mean and variance within the entire oscillation cycle . at other , mainly lower , frequencies the fourier sum yields non - white values due to temporal correlations caused by different means and variances of observation blocks located in the test frequency cycle .these deviations from a flat spectrum will be very strong at frequencies which correspond to twice the observation block length .the arrows in fig.1b clearly show this minimum feature at and another shortage of power at which corresponds to the long uninterrupted 72 ksec observation block starting at the second half of the exosat observation ( fig.1a ) .consequently a model is required which operates in the time domain and avoids any misleading systematical effects occuring in power spectra .in this section we briefly introduce the linear state space model ( lssm ) . for a detailed discussion , see honerkamp ( 1993 ) and hamilton ( 1995 ) .the lssm is a generalization of the autoregressive ( ar ) model invented by yule ( 1927 ) to model the variability of wolf s sunspot numbers .we follow wold s decomposition theorem ( wold 1938 ; priestley 1992 ; fuller 1996 ) which states that any discrete stationary process can be expressed as the sum of two processes uncorrelated with one another , one purely deterministic ( i.e. a process that can be forecasted exactly such as a strictly period oscillation ) and one purely indeterministic .further , the indeterministic component , which is essentially the stochastic part , can be written as a linear combination of an innovation process , which is a sequence of uncorrelated random variables . a given discrete time series is considered as a sequence of correlated random variables .the ar model expresses the temporal correlations of the time series in terms of a linear function of its past values plus a noise term and is closely related to the differential equation describing the dynamics of the system .the fact that has a regression on its own past terms gives rise to the terminology ` autoregressive process ' ( for detailed discussions see scargle 1981 ; priestley 1992 ) .a time series is thus a realization of the stochastic process or , more precisely , the observation of a realization of the process during a finite time interval .the ar model expresses the temporal correlations in the process in terms of memory , in the sense that a filter ( ) remembers , for a while at least , the previous values .thus the influence of a predecessor value decreases as time increases .this fading memory is expressed in the exponential decay of the ar autocorrelation function ( see eq .[ eq : rar ] ) .the ar processes variable remembers its own behavior at previous times , expressed in a linear relationship in terms of plus which stands for an uncorrelated ( gaussian ) white noise process . 
number of terms used for the regression of determine the order of the ar process , which is abbreviated to an ar[p ] process .the parameter values have to be restricted for the process to be stationary ( honerkamp 1993 ) .for a first order process this means .depending on the order , the parameters of the process represents damped oscillators , pure relaxators or their superpositions .for the first order process ar[1 ] the relaxation time of the system is determined from by : in the case of a damped oscillator for an ar[2 ] process the parameters , the period and the relaxation time respectively , are related by : for a given time series the parameters can be estimated e.g. by the durbin - levinson- or burg - algorithm ( honerkamp 1993 ) . by statistical testingit is possible to infer whether a model is compatible with the data .fig.2 : a ) exosat me x - ray lightcurve of ngc 5506 ( jan .1986 ) , b ) hidden ar[1]-process , estimated with the lssm fit .a first generalization of ar models are the autoregressive - moving - average ( arma ) models that include also past noise terms in the dynamics : both models , ar and arma processes , assume that the time series is observed without any oberservational noise . in presence of such noise the parameters will be underestimated and statistical tests will reject the model even if its order is specified correctly .lssms generalize the ar and arma processes by explicitly modelling observational noise .furthermore , lssms use the so called markov property , which means that the entire information relevant to the future or for the prediction is contained in the present state .the variable that has to be estimated can not be observed directly since it is covered by observational noise . following the markov property it is possible to regressivelypredict the values , though .the measured observation variables may not necessarily agree with the system variables that provide the best description of the system dynamics .thus a lssm is defined with two equations , the system or dynamical equation ( [ eq : sys ] ) and the observation equation ( [ eq : obs ] ) . this definition is a multivariate description , which means that the ar[p ] process is given as a -dimensional ar process of order one , with a matrix that determines the dynamics . by combining the different dimensional terms of the multivariate description the typical ar[p ] ( see eq .[ eq : ar ] ) form can be derived easily . the observation is formulated as a linear combination of the random vectors and . the matrix maps the unobservable dynamics to the observation . the terms and represent the dynamical noise with covariance matrix and the observational noise with variance , respectively .the estimation of the parameters in lssms is more complicated than for ar or arma processes .there are two conceptually different procedures available to obtain the maximum likelihood parameters estimates .both are iterative and start from some initial values that have to be specified .the first procedure uses explicit numerical optimization to maximize the likelihood .the other applies the so called expectation - maximization algorithm .the latter procedure is slower but numerically more stable than the former and is described in detail by honerkamp ( 1993 ). 
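As a toy illustration of the state space structure just described, the snippet below simulates a hidden AR[1] process driven by dynamical noise and observed through additive observational noise, and shows how the lag-one autocorrelation of the noisy data underestimates the dynamical parameter, while the relaxation time follows from tau = -dt / ln(a1). All parameter values are arbitrary assumptions; this is only a sketch of the system and observation equations, not the authors' fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy LSSM: hidden AR(1) dynamics plus observational noise (all values are arbitrary).
a1, q, r, dt, n = 0.99, 1.0, 5.0, 30.0, 8000   # dynamics, noise variances, bin time (s), length

x = np.zeros(n)
for t in range(1, n):
    x[t] = a1 * x[t - 1] + rng.normal(scale=np.sqrt(q))   # system equation
y = x + rng.normal(scale=np.sqrt(r), size=n)              # observation equation

tau = -dt / np.log(a1)                                    # AR(1) relaxation time
print(f"relaxation time of the hidden process: {tau:.0f} s")

def lag1_corr(z):
    z = z - z.mean()
    return float(np.sum(z[:-1] * z[1:]) / np.sum(z * z))

# The naive AR(1) coefficient estimated from the noisy data is biased low,
# which is why the observational noise has to be modelled explicitly.
print(f"lag-1 autocorrelation of the hidden process: {lag1_corr(x):.3f}")
print(f"lag-1 autocorrelation of the noisy data:     {lag1_corr(y):.3f}")
```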
statistical evaluation of a fitted model is generally based on the prediction errors .the prediction errors are obtained by a kalman filter which estimates the unobservable process ( hamilton 1995 ) .such a linear filter allows us to arrive at the variables ( and its prediction errors ) , used to describe the system dynamics , starting from a given lssm and the given observations ( brockwell and davis 1991 , koen and lombard 1993 ) . multiplying the estimated process with the estimated yields an estimate of the observed time series .a necessary condition that the model fits to the data is that the difference represents white noise , i.e. the time series of prediction errors should be uncorrelated .this can for example be judged by a kolmogorov - smirnov test that tests for a flat spectrum of the prediction errors or by the portmanteau test using their autocorrelation function .we have used the first method to quantify the goodness of fit of the tested lssms ( see table 1 ) . another criterion to judge fitted models is the decrease in the variance of prediction errors with increasing order of the fitted models . a knee in this functiongives evidence for the correct model order .any further increase of the model order will not reduce the variance significantly . the so called akaike information criterion ( aic )formulizes this procedure including the different number of parameters of the models ( hamilton 1995 ) .any oscillators and relaxators which might occur in unnecessarily more complex lssms should be highly damped and can be neglected therefore . the last method to judgea fitted model is to compare the spectrum that results from the fitted parameters with the periodogram of the sample time series .the spectrum of a lssm is given by : the superscript denotes transposition .spectra of ar or arma processes are special cases of equation ( [ eq : arsp ] ) . in the simplest case of an ar[1 ] processmodelled with a lssm , the corresponding spectrum is given by : } = \frac{q}{1 + a_1 ^ 2 - 2a_1\cos(\omega)}+r\end{aligned}\ ] ] this function provides both the flattening at low and the decrease of power at medium frequencies seen in periodograms ( e.g. see fig . 4 ) . in a first approach gaps in the observed lightcurvewere filled with white noise with the same mean and rms as the original time series in order to create a continuous time series . in a second runthese gaps were refilled with the predictions of the kalman filter plus a white noise realization with the original lightcurves variance .generally , gaps in an observed time series can be handled by the lssm in a natural way avoiding the filling of gaps with poisson noise .the key is again the kalman filter .the kalman filter considers the fact that there are still decaying processes taking place even if the object is not observed . in each cycle of the iterative parameter estimation procedure is estimated based on an internal prediction , corrected by information obtained from the actual data . in case of gaps no information from is available and the internal prediction decays in its intrinsic manner until new information is given . 
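The goodness-of-fit criteria above rest on the one-step prediction errors delivered by the Kalman filter and on the model spectrum, which for the AR[1] case reduces to f(omega) = q / (1 + a1^2 - 2 a1 cos(omega)) + r. The sketch below implements a scalar Kalman filter (returning standardized prediction errors and the Gaussian log-likelihood) together with this spectrum; the scalar parameterisation and the function names are assumptions made for illustration, not the EM-based implementation used in the paper.

```python
import numpy as np

def kalman_prediction_errors(y, a1, q, r):
    """Standardized one-step prediction errors and Gaussian log-likelihood of the
    scalar LSSM  x_t = a1 x_{t-1} + eps_t (variance q),  y_t = x_t + eta_t (variance r),
    assuming |a1| < 1 so that the stationary variance exists."""
    x_pred, p_pred = 0.0, q / (1.0 - a1 ** 2)        # stationary initialization
    errors, loglik = [], 0.0
    for obs in y:
        s = p_pred + r                               # variance of the prediction error
        e = obs - x_pred                             # innovation (one-step prediction error)
        errors.append(e / np.sqrt(s))                # standardized residual; should be white
        loglik += -0.5 * (np.log(2.0 * np.pi * s) + e ** 2 / s)
        k_gain = p_pred / s                          # Kalman gain
        x_filt = x_pred + k_gain * e                 # filtered estimate of the hidden state
        p_filt = (1.0 - k_gain) * p_pred
        x_pred, p_pred = a1 * x_filt, a1 ** 2 * p_filt + q   # propagate to the next step
    return np.array(errors), loglik

def lssm_ar1_spectrum(omega, a1, q, r):
    """LSSM AR(1) spectrum: red decrease at medium frequencies, flat floor from r."""
    return q / (1.0 + a1 ** 2 - 2.0 * a1 * np.cos(omega)) + r
```

Maximizing the returned log-likelihood over (a1, q, r) with a generic numerical optimizer corresponds to the first of the two estimation procedures mentioned above, and the standardized prediction errors can then be tested for whiteness as described in the text.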
in the case of the lightcurve of ngc 5506 the resulting parameters are consistent with those of the first approach due to the high duty cycle of the original time series. as the x-ray lightcurves from exosat are the longest agn observations available, we have used the longest individual observation of about 230 ks of the seyfert galaxy ngc 5506 for applying the lssm (fig. 2). the data, which have been extracted from the heasarc exosat me archive, are background subtracted and dead time corrected, with a 30 sec time resolution obtained over the 1 - 8 kev energy range. the seyfert galaxy ngc 5506 holds a special place in agn variability studies, as it is both bright and one of the most variable agn. the chosen lightcurve contains only a few gaps, providing a duty cycle of 92.4%. the mean and rms of the lightcurve are 6.87 and 1.55 counts in 30 s bins.

table 1: results of the lssm ar[p] fits to the ngc 5506 lightcurve.

| model lssm ar[p] | obs. noise variance | periods (s) | relaxation time (s) | ks test |
|---|---|---|---|---|
| 0 | 1     | -     | -    | 0.0%  |
| 1 | 0.722 | -     | 4799 | 93.5% |
| 2 | 0.701 | -     | 26.1 | 66.8% |
|   |       | -     | 5011 |       |
| 3 | 0.510 | -     | 10.6 | 88.2% |
|   |       | -     | 18.9 |       |
|   |       | -     | 4798 |       |
| 4 | 0.395 | 236.3 | 71.1 | 92.1% |
|   |       | -     | 6.7  |       |
|   |       | -     | 4780 |       |

we applied lssms with different order ar processes. an lssm using an ar[0] process corresponds to a pure white noise process without any temporal correlation and a flat spectrum. the kolmogorov-smirnov test rejects this model at any level of significance (see table 1). without loss of generality, the observation matrix is set to unity, and the mean and variance are set to 0 and 1, respectively. we see that the x-ray lightcurve of ngc 5506 can be well modelled with a lssm ar[1] model, as the residuals between the estimated ar[1] process and the measured data are consistent with gaussian white noise. fig. 3 shows the distribution and the corresponding normal quantile plot of the fit residuals, which both display the gaussian character of the observational noise. the standard deviation of the distribution is 0.738, which is in good agreement with the estimated observational variance of 0.722 for the lssm ar[1] fit (see table 1). furthermore, the lightcurve of the estimated ar[1] process looks very similar to the temporal behavior of the hidden process (fig. 2). the corresponding dynamical parameter of the lssm ar[1] fit is 0.9938, which corresponds to a relaxation time of about 4799 s. fig. 3: a) distribution and b) normal quantile plot of the residuals of the lssm ar[1] fit to the exosat me ngc 5506 lightcurve (the dotted lines in a) indicate the mean and rms of the observational noise). a normal quantile plot arranges the data in increasing order and plots each data value at a position that corresponds to its ideal position in a normal distribution. if the data are normally distributed, all points should lie on a straight line. the lssm ar[1] gives a good fit to the exosat ngc 5506 data, as the variance of the prediction errors remains nearly constant from model order 1 to 2 and the residuals conform to white noise. the decrease in the variance for higher model orders might be due to correlations in the modelled noise, generated by the switching of the exosat detectors.
since each detector has its own noise characteristics, a regular swapping between background and source detectors would lead to an alternating observational noise level (see section 2). the higher order lssm ar[p] fits try to model the resulting correlations with additional but negligible relaxators and damped oscillators. we have used the durbin-levinson algorithm (see section 3) to estimate the parameters of a competing simple ar[p] model (see table 2). as expected for time series containing observational noise, the characteristic timescales are underestimated by fitting a simple ar process, and the statistical test rejects the ar[p] model. a test for white noise residuals fails, which means that there are still correlations present which cannot be modelled with an ar[p] process. we have performed ar[p] fits for model orders up to 10 and we never found residuals consistent with white noise, indicating that there is no preferred model order. all occurring relaxators and damped oscillators are insignificant due to their short relaxation timescales compared with the bin time of 30 s. as the observational noise is not modelled explicitly in ar models, it is included accidentally in the inherent ar noise term. thus, any correlation in the observed time series which can be detected in the lssm fits is wiped out, and the higher order ar fits only reveal fast decaying relaxators and oscillators.

table 2: results of the simple ar[p] fits to the ngc 5506 lightcurve.

| model ar[p] | variance | periods (s) | relaxation time (s) | ks test |
|---|---|---|---|---|
| 0 | 1      | -     | -     | 0.0% |
| 1 | 0.9235 | -     | 23.3  | 0.5% |
| 2 | 0.8814 | -     | 55.6  | 0.3% |
|   |        | -     | 29.8  |      |
| 3 | 0.8566 | -     | 97.0  | 0.4% |
|   |        | 197.4 | 40.6  |      |
| 4 | 0.8362 | -     | 153.2 | 0.4% |
|   |        | -     | 51.2  |      |
|   |        | 127.7 | 55.1  |      |

one might expect that the resulting best fit lssm light curve (fig. 2b) might also be produced by just smoothing the original lightcurve. this assumption is wrong, as a smoothing filter would pass long timescales and suppress all short time variability patterns. thus all information about the variations on short timescales would be lost (brockwell and davis 1989). the kalman filter yields not only the estimated time series values but also their prediction errors. these errors are much smaller than the errors of the observed lightcurve. in the case of the ngc 5506 observation (fig. 2) the estimation errors are about 0.18 counts/sec and the errors of the observed lightcurve are about 1.3 counts/sec, respectively. both lightcurves in fig. 2 are shown without error bars for reasons of clarity. we have used monte carlo simulations, based on the distribution of the estimated parameters of 1000 simulated ar[1] time series with the best fit results, to determine the error of the dynamical parameter. as the dynamical parameter is close to unity, the corresponding relaxation time error is high.
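A Monte Carlo error estimate of the kind described above can be sketched as follows; the parameter values, sample sizes, and the simple moment-based estimator (the ratio of the lag-2 to the lag-1 autocovariance, which is insensitive to additive white observational noise) are illustrative stand-ins for the full LSSM maximum-likelihood refits performed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_lssm(a1, q, r, n):
    """Hidden AR(1) process plus additive observational white noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a1 * x[t - 1] + rng.normal(scale=np.sqrt(q))
    return x + rng.normal(scale=np.sqrt(r), size=n)

def autocov(y, lag):
    y = y - y.mean()
    return float(np.mean(y[:-lag] * y[lag:]))

def estimate_a1(y):
    # Ratio of the lag-2 to the lag-1 autocovariance; unaffected by additive white noise.
    return autocov(y, 2) / autocov(y, 1)

# Spread of the dynamical parameter over repeated simulations with assumed best-fit values.
a1_true, q, r, n_points, n_sims, dt = 0.99, 1.0, 5.0, 2000, 500, 30.0
est = np.array([estimate_a1(simulate_lssm(a1_true, q, r, n_points)) for _ in range(n_sims)])
print(f"a1 = {est.mean():.4f} +/- {est.std():.4f}")
print(f"implied relaxation time tau = -dt/ln(a1) ~ {-dt / np.log(est.mean()):.0f} s")
```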
to prove the quality of the lssm results we have fitted a lssm ar[1 ] spectrum to the periodogram datathis fit yields the dynamical parameter which is consistent with the lssm ar[1 ] fit in the time domain , but the corresponding error is much higher due to the lower statistical significance of frequency domain fits ( see section 2 ) .the autocovariance function of the ar[1 ] process is given by : }({\delta } ) = \frac { q } { 1 - a_1 ^ 2 } \, e^{\log(a_1 ) \ , \delta } \label{eq : rar}\end{aligned}\ ] ] which is an exponentially decaying function for stationary ( ) time series , very similar to the temporal behavior of the autocorrelation function of a shot noise model ( papoulis 1991 ) : the variable denotes the density and is the lifetime of the shots .this similarity means that an ar[1 ] process can also be modelled by a superposition of poisson distributed decaying shots ( papoulis 1991 ) .the shot noise model , which has been used as an alternative to the model , appears to give a good fit to the power spectrum of ngc 5506 ( papadakis and lawrence 1995 , belloni and hasinger 1990 and references therein ) .but instead of all the shots having the same lifetime , papadakis and lawrence ( 1995 ) used a distribution varying as between and .they fixed arbitrarily at 12000s and found that is around 300s for ngc 5506 , much lower than the relaxation time of about 4800s found with the lssm fit . a possible explanation for this difference could be the distribution of lifetimes .since the power law slope of the shot noise model is constantly at medium and high frequencies , this distribution is necessary to modify the slope and to maintain a good fit to the spectrum .the advantage of a lssm is a variable slope at medium frequencies which depends on the dynamical parameter ( see fig . 4 ) .fig.4 : periodogram of the exosat me x - ray lightcurve of ngc 5506 ( dots ) and the spectrum of the best fit lssm ar[1 ] model in the time domain ( line ) ( see fig.2a ) .the spectra of the higher order lssm ar fits differ less than from the lssm ar[1 ] spectrum .the dashed lines display the - spectra of the corresponding frequency domain fit .the time domain fit yields errors which are more than 3 times smaller ( see text for details ) .the shot noise model can be regarded as an approximation of an ar[1 ] model for values near unity .the mean density of the poisson events then corresponds to the variance of the dynamical noise in the lssm system equation ( [ eq : sys ] ) .thus could be used to quantify and compare the rate of the accretion shots occuring in agns .we obtain a convincing fit to the observed x - ray lightcurve of an agn using a lssm ar[1 ] process as well in the time and in the frequency domain .the explicit modelling of observational noise allows to estimate the covered ar[1 ] process , indicating that the stochastic process is dominated by a single relaxation timescale .we show that the general ar[p ] model ( see eq.1 ) can be restricted to a simple ar[1 ] process which succeeds in describing the entire dynamics of the observed agn x - ray lightcurve .it has been suggested by mchardy ( 1988 ) that the single shots , which are supposed to be superimposed to build the lightcurve , may arise from subregions of an overall larger chaotic region which are temporarily lit up , perhaps by shocks . 
since one would expect a non uniform electron density throughout this region (probably decreasing with distance from the central engine), the resulting difference in cooling timescales yields the different decay timescales (green et al. 1993). as the lssm predicts that the stochastic process is dominated by a single relaxator, we presume the existence of a single cooling timescale or a uniform electron density in the emission region following the shot noise model (see sutherland et al. 1978). the assumption of an exponentially decaying shot seems to be reasonable, as time-dependent comptonisation models lead to such a pulse profile. the scenario for a thermal comptonisation model (payne 1980, liang and nolan 1983) starts with uv photons which arise from inhomogeneities of the accretion inflow, each producing a single flare when gravitational energy is set free as radiation. the impulsive emission of the poisson distributed delta peaks in a cloud of hot electrons triggers x-ray flares with a specific pulse profile depending on the seed photon energy, the density, and the temperature of the electrons. this impulsive emission is delayed and broadened in time and spectrally hardened due to repeated compton scattering. some approximate analytic solutions of this process show that the temporal evolution of the generated x-ray pulse can be described by a nearly exponentially decaying function (miyamoto and kitamoto 1989). the only difference from the 'shots' used above is the (more realistic) non-zero rise time. using this model it should be possible to associate the estimated relaxator timescale with the physical properties of the comptonisation process. the presented lssm can also be used to analyse the x-ray variability of galactic x-ray sources. as both relaxators and (damped) oscillators can be estimated, it is possible to use the algorithm to search for periodicities and qpo phenomena in the lightcurves of x-ray binaries (see robinson and nather 1979, lewin et al. 1988, van der klis 1989).

_acknowledgements:_ we would like to thank j.d. scargle, r. staubert, m. maisack, j. wilms and k. pottschmidt for helpful discussions and c. gantert for writing the code of the lssm program. furthermore, we thank the anonymous referee for constructive comments.

abramowicz a.r., chen x., kato s. et al., 1995, apjl 438, l37
begelman m.c., de kool m., 1991, in variability in active galactic nuclei, ed. h.r. miller & p.j. wiita (cambridge: cambridge univ. press), 198
belloni t., hasinger g., 1990, a&a 227, l33
brockwell p.j., davis r.a., 1991, time series: theory and methods, springer verlag, 2nd ed.
deeter j.e., 1984, apj 281, 482-491
deeter j.e., boyton p.e., 1982, apj 261, 337-350
fuller w.a., 1996, introduction to statistical time series, new york, john wiley, 2nd ed.
grandi p., tagliaferri g., giommi p. et al., 1992, apj suppl. ser., 82, 93
green a.r., mchardy i.m., lehto h.j., 1993, mnras 265, 664-680
hamilton j.d., 1995, time series analysis, princeton university press
honerkamp j., 1993, stochastic dynamical systems, vch publ., new york, weinheim
koen c., lombard f., 1993, mnras 263, 287
krolik j.h., 1992, statistical challenges in modern astronomy, e. feigelson and g.j. babu, eds., springer verlag new york, 349
lawrence a., watson m.g., pounds k.a. et al., 1987, nature 325, 694
lawrence a., papadakis p., 1993, apj suppl. 414, 85
leahy d.a., darbro w., elsner r.f., weisskopf m.c., sutherland p.g., 1983, apj 266, 160-170
lehto h.j., 1989, in proc. 23rd eslab symp. on two topics in x-ray astronomy, vol. 1, ed. j. hunt & b. battrick (esa sp-296), noordwijk, 499
lewin w.h.g., van paradijs j., van der klis m., 1988, space sci. review 46, 273
liang e.p., nolan p.l., 1983, space sci. review 38, 353
mchardy i., czerny b., 1987, nature 325, 696
mineshige s., ouchi n.b., nishimori h. et al., 1994, pasj 46, 97
miyamoto s., kitamoto s., 1989, nature 342, 773
netzer h., 1990, saas-fee advanced course 20 on active galactic nuclei, eds. blandford, h. netzer, l. woltjer, 57-160
papadakis i.e., lawrence a., 1995, mnras 272, 161
papoulis a.p., 1991, probability, random variables and stochastic processes, new york, mcgraw-hill, 3rd ed.
payne d.g., 1980, apj 232, 951
priestley m.b., 1992, spectral analysis and time series, san diego, academic press
rees m.j., 1984, ann. rev. astron. astrophys. 22, 471
robinson e.l., nather r.e., 1979, apj suppl. 39, 461
samorodnitsky g., taqqu m.s., 1994, stable non-gaussian random processes, new york, chapman and hall
scargle j.d., 1981, apj suppl. 45, 1
sutherland p.g., weisskopf m.c., kahn s.m., 1978, apj 219, 1029
tagliaferri g., bao g., israel g.l., 1996, apj, accepted for publication
van der klis m., 1989, ann. rev. astron. astrophys. 27, 517
wold h.o.a., 1938, a study in the analysis of stationary time series, uppsala, almqvist and wiksell, 2nd ed.
yule g., 1927, phil. trans. a 226, 267
In recent years, autoregressive models have had a profound impact on the description of astronomical time series as realizations of a stochastic process. These methods have advantages over common Fourier techniques concerning their inherent stationarity and physical background. However, if autoregressive models are used, it has to be taken into account that real data always contain observational noise, which often obscures the intrinsic time series of the object. We apply the technique of a linear state space model, which explicitly models the noise of astronomical data and allows one to estimate the hidden autoregressive process. As an example, we have analysed the X-ray flux variability of the active galaxy NGC 5506 observed with EXOSAT.
recently sub - diffusion processes have attracted increasing interest since the introduction of continuous time random walks ( ctrws ) in and a large number of contributions have been given to them ( , ) . since ctrw is a random walk subordinated to a simple renewal process , by , it can be regarded as a generalized physical diffusion process ( including the sub - diffusion process and the super - diffusion process ) and there exists a closed connection between the time fractional diffusion system and the sub - diffusion process .moreover , it is confirmed in and that the time fractional diffusion systems can be used to well characterize those sub - diffusion processes , which offer better performance not achievable before using conventional diffusion systems and surely raise many potential research opportunities at the same time . in the case of diffusion system, it is well known that in general , not all the states can be reached in the whole domain of interest .so here , we first introduce some notations on the regional controllability of time fractional diffusion systems when the system under consideration is only exactly ( or approximately ) controllability on a subset of the whole space , which can be regarded as an extensions of the research work in ( , ) . besides, focusing on regional controllability would allow for a reduction in the number of physical actuators , offer the potential to reduce computational requirements in some cases , and also possible to discuss those systems which are not controllable on the whole domain , etc .furthermore , in and , the authors have shown that the measurements and actions in practical systems can be better described by using the notion of actuators and sensors ( including the location , number and spatial distribution of actuators and sensors ) .then the contribution of this present work is on the regional controllability of the sub - diffusion processes described by riemann - liouville time fractional diffusion systems of order by using the notion of actuators and sensors .as cited in , their applications are rich in many real life .for example , the flow through porous media ( ) , or the swarm of robots moving through dense forest ( ) .we hope that the results here could provide some insights into the qualitative analysis of the design and configuration of fractional controller .the rest of the paper is organized as follows .the mathematical concept of regional controllability problem is presented in the next section .section 3 is focused on the characterizations of strategic actuators in the case of regional controllability . in section 4 ,our main results on the regional controllability analysis of time time fractional diffusion systems are presented and the determination of the optimal control which achieves the regional controllability is obtained .two applications are worked out in the last section .let be an open bounded subset of with smooth boundary and we consider the following abstract riemann - liouville time fractional differential system : ,~0<\alpha<1 , \\\lim\limits_{t\to 0^+ } { } _ 0d^{\alpha}_t z(t)=z_0 , \end{array}\right.\ ] ] where generates a strongly continuous semigroup on the hilbert space , is a uniformly elliptic operator ( , ) , and the initial vector . here and denote the riemann - liouville fractional order derivative and integral , respectively , given by and in addition , is a control operator depends on the number and the structure of actuators . the control where is a hilbert space . 
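The Riemann-Liouville operators introduced above can be approximated numerically; a standard discretization is the Grünwald-Letnikov scheme. The sketch below is only an illustration of the fractional derivative of order between 0 and 1 on a uniform grid (it plays no role in the controllability analysis); the test function f(t) = t, the order 0.5 and the step size are assumptions made here for the example.

```python
import numpy as np
from math import gamma

# Sketch: Grünwald-Letnikov approximation of the Riemann-Liouville fractional
# derivative of order 0 < alpha < 1.  Test function f(t) = t, whose RL
# derivative is t^(1-alpha) / Gamma(2-alpha).
def gl_fractional_derivative(f_vals, alpha, h):
    """Approximate (0 D_t^alpha f)(t_k) on a uniform grid with step h."""
    n = len(f_vals)
    w = np.ones(n)                         # Grünwald weights w_j = (-1)^j * C(alpha, j)
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.zeros(n)
    for k in range(n):
        out[k] = np.dot(w[:k + 1], f_vals[k::-1]) / h**alpha
    return out

alpha, h = 0.5, 1e-3
t = np.arange(1, 2001) * h                 # grid starts at h; f(0) = 0 contributes nothing
approx = gl_fractional_derivative(t, alpha, h)
exact = t**(1.0 - alpha) / gamma(2.0 - alpha)
# first-order scheme; the relative error is largest near the lower terminal t = 0
print("max relative error:", np.max(np.abs(approx - exact) / exact))
```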
in particular ,if the system is excited by actuators , one has and we first recall some necessary lemmas to be used afterwards .[ lem0 ] for any given a function is said to be a mild solution of the following system , \\\lim\limits_{t\to 0^+}{}_0d_t^{\alpha-1}v(t)=v_0\in z , \end{array}\right.\ ] ] if it satisfies where here is the strongly continuous semigroup generated by operator , and is a probability density function defined by such that ( ) * proof . *it follows from the laplace transforms that the system is equivalent to ( ) then }ds .\end{array}\ ] ] consider the stable probability density function . by the arguments in , we see that satisfies the following property let we obtain that }d\tau\\&= & \alpha\int_0^\infty{\int_0^\infty{e^{-\lambda \tau\theta}\psi_\alpha(\theta)\phi(\tau^\alpha)\tau^{\alpha-1}[v_0+\tilde{f}(\lambda)]}d\theta}d\tau\\&= & \sigma_1(v_0)+\sigma_2(f),\end{aligned}\ ] ] where and suppose that . then we have and let and + then we get and the proof is complete .[lem2 ] let be an open set and be the class of infinitely differentiable functions on with compact support in and be such that then almost everywhere in [lem1 ] let the reflection operator on interval ] and the regional controllability problem is equivalent to solving the equation then we can obtain the following theorem .[ th2 ] if the system is regionally approximately controllable on , then for any has a unique solution and the control steers the system to at time in .moreover , solves the minimum problem .* by lemma , we see that if the system is regionally approximately controllable on , then is a norm of space .let the completion of with respect to the norm again by .then we will show that has a unique solution in . for any , it follows from the definition of operator in that hence , is one to one .it follows from theorem 2.1 in that admits a unique solution in .further , let in problem , one has then for any with , we obtain that =0. ] and by the theorem 1.3 in , it then follows from that solves the minimum energy problem and the proof is complete .this section aims to present two examples to show the effectiveness of our obtained results .* example 5.1 .* let us consider the following one dimensional time fractional order differential equations of order with a zone actuator to show of remark .}u(t)\mbox{in } [ 0,1]\times [ 0,b ] , \\\lim\limits_{t\to 0^+}z(x , t)=z_0(x)\mbox { in } [ 0,1 ] , \\z(0 , t)= z(1 , t)=0\mbox { in } [ 0,b ] , \end{array}\right.\end{aligned}\ ] ] where }u ] .next , we show that there exists a sub - region such that the system is possible regional controllability in at time . without loss of generality ,let , . based on the argument above, is not reachable on . ] , we see that \\\neq 0 . \end{array}\end{aligned}\ ] ] then is possible regional controllability in ] here generates a strongly continuous semigroup .in addition , for any , by lemma , we see that defines a norm on , where is the unique mild solution of the following problem , \\\lim\limits_{t\to 0^+ } q{}_td^{\alpha-1}_b\varphi(t)=p_\omega^*g .\end{array}\right.\ ] ] now if we consider the following system , \\\lim\limits_{t\to 0^+}{}_0d^{\alpha-1}_{t}\psi(t)=0 . 
\end{array}\right.\ ] ] let ] , for any admits a unique solution .moreover , the control steers to at time and solves the minimum control energy problem .the purpose of this paper is to investigate the regional controllability of the riemann - liouville time fractional diffusion equations of order .the characterizations of strategic actuators when the control inputs appear in the differential equations as distributed inputs and an approach on the regional controllability with minimum energy of the problems are solved . since , , together with we get that our results can be regarded as the extension of the results in and .moreover , the results presented here can also be extended to complex fractional order distributed parameter dynamic systems .for instance , the problem of constrained regional control of fractional order diffusion systems with more complicated regional sensing and actuation configurations are of great interest . for more information on the potential topics related to fractional distributed parameter systems, we refer the readers to and the references therein .
This paper is concerned with the concept of regional controllability for Riemann-Liouville time fractional diffusion systems of fractional order between 0 and 1. The characterization of strategic actuators needed to achieve regional controllability is investigated when the control inputs appear in the differential equations as distributed inputs. Finally, an approach that guarantees regional controllability of the problems under consideration in the chosen subregion with minimum energy control is described and successfully tested through two applications.

Keywords: regional controllability; time fractional diffusion systems; strategic actuators; minimum energy control.
a typical wireless sensor network consists of sensor nodes with limited energy reserves .many sensor network applications expect the sensor nodes to be active for months and may be years . however , in most of the situations , once the sensor nodes run out of their energy reserves , then replacing their batteries is not possible either due to the inaccessibility of sensor nodes or because such an endeavor may not be economically viable .so , there is a great demand and scope of the strategies which attempt to reduce the energy consumption , hence increase the lifetime of the sensor nodes .sensor nodes spend energy in receiving and transmitting data , sensing / actuating , and computation . in this paper , we concern ourselves with reducing the energy cost of transmission .the energy cost of receiving data can be easily incorporated in the model that we propose .the energy spent in sensing / actuating represents a fixed cost that can be ignored .we assume computation costs are negligible compared to radio communication costs . though this is a debatable assumption in dense networks , but incorporating computation costs is not straightforward and is left for future work .the transmission energy depends on three factors : the number of bits to be transmitted , the path - loss factor between the sensor nodes and the base - station , and the time available to transmit the given number of bits .the path - loss factor describes the wireless channel between a sensor node and the base - station and captures various channel effects , such as distance induced attenuation , shadowing , and multipath fading . for simplicity, we assume the path - loss factor to be constant .this is reasonable for static networks and also the scenarios where the path - loss factor varies slowly and can be accurately tracked .the idea of varying the transmission time to reduce the energy consumption was proposed in and explored in in the context of sensor networks , where its -hardness is discussed . in this paper, we attempt to reduce the transmission energy by reducing the number of bits transmitted by the sensor nodes to the base - station . in a data gathering sensor network ,the spatio - temporal correlations in sensor data induce data - redundancy . 
in , slepian and wolfshow that it is possible to compress a set of correlated sources down to their joint entropy , without explicit communication between the sources .this surprising existential result shows that it is enough for the sources to know the probability distribution of data generated .recent advances in distributed source coding allow us to take advantage of data correlation to reduce the number of bits that need to be transmitted , with concomitant savings in energy .however , finding the optimal rate allocation lying in slepian - wolf achievable rate region defined by constraints for nodes and designing efficient distributed source codes is a challenging problem .we simplify this problem by allowing the interaction between the base - station and the sensor nodes , and introducing the notion of _ instantaneous decoding _ .this reduces the rate allocation problem to the problem of finding the optimal scheduling order , albeit at some loss of optimality .two results in the theory of interactive communication provided further motivation for our work .first result states that even if the feedback does not help in increasing the capacity of a communication channel , it does help in reducing the complexity of communication .second result states that for the worst - case interactive communication , for almost all _ sparse pairs _ the recipient , who has nothing to say , must transmit almost all the bits in an optimal interactive communication protocol and informant transmits _ almost _ nothing . based on these results ,this paper proposes a formalism that attempts to reduce the number of bits sent by a sensor node .the proposed formalism casts the problem of many sensor nodes communicating with a base - station as a problem where multiple informants with correlated information communicate with single recipient . herewe identify the base - station as the recipient of the information and the sensor nodes as the sources of information .however , and subsequent papers consider only the interactive communication between a single informant - recipient pair , while in the sensor networks , we have as many informant - recipient pairs , as the number of sensor nodes . if the sensor data is assumed to be uncorrelated , then the results of can be trivially extended to the present scenario .however , in a data - gathering sensor network , the sensor data is supposed to be correlated .so , extending the results in to the scenarios where multiple informants with correlated information communicate with single recipient and then applying those results to the sensor networks , is not straightforward .formally , the problem is the following .there are sources of correlated information and there is one recipient of information , which needs to collect the information from these sources .assume that can interact with any of those sources , but among themselves these sources can not interact directly .at the end of communication , each of these sources need not know what other sources know .we are looking for the most efficient communication schemes ( ones which minimize the communication complexity ) for this problem . 
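The Slepian-Wolf observation invoked above, that correlated sources can in principle be compressed down to their joint entropy rather than the sum of their marginal entropies, is easy to illustrate numerically. The joint distribution below is an arbitrary toy choice made only for this sketch.

```python
import numpy as np

# Toy illustration: two correlated binary sources need only H(X, Y) bits jointly,
# which is less than H(X) + H(Y).  The joint distribution is an assumed example.
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])            # strongly correlated pair of bits

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

h_joint = entropy(p_xy.ravel())
h_x = entropy(p_xy.sum(axis=1))
h_y = entropy(p_xy.sum(axis=0))
print("H(X) + H(Y) =", h_x + h_y)          # bits needed if each source is coded separately
print("H(X, Y)     =", h_joint)            # bits that suffice in principle for joint coding
```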
in the context of sensor networks , if the base - station knows the joint distribution of the sensor data or the correlations therein , then it can play some distributed version of the ` game of twenty questions ' with sensors to retrieve their information .if there is a single sensor node or if the sensor data are uncorrelated , then the base - station can ` play ' with individual sensor nodes and any node needs to send at least bits , where is the entropy of the information source at node .so , assuming the sensor data at all the nodes are _ identically distributed _, finally the base - station retrieves bits from the sensors .however , in a more realistic situation , the sensors have correlated data and the base - station needs to retrieve only bits . note that we can talk in terms of entropies only when we are concerned with average number of bits communicated .however , if we are interested in the worst - case number of bits , then it depends solely on the cardinality of the _ support set _ of the data of individual nodes than on the corresponding probabilities . in this work , we make two simplifying assumptions : the sensor nodes communicate with the base - station in a single - hop and only the total number of bits exchanged between the sensor nodes and the base - station are considered and this is what we refer to as ` communication complexity ' .this paper shows that the communication complexity depends on the model of the spatial - correlation of the sensor data as well as on the order in which the sensor nodes communicate with the base - station . under the assumption of an omniscient base - station, the base - station can compute the optimal number of bits , which any node in any schedule needs to send to it for any spatial correlation model of the sensor data .however , a sensor node , even if it knows in how many bits it needs to send its information to the base - station , may not actually be able to compress its information to that many number of bits , without explicit knowledge of how its data is correlated with the data of the other nodes . in general, this is neither possible given the limited knowledge of the network that a sensor node is supposed to have , nor desirable given the limited energy and computational capabilities of the sensor nodes .an omnipotent base - station can take up the most of the burden of computation and communication , allowing the the sensor nodes to perform minimal computation and transmit minimal number of bits , hence conserving their precious energy reserves and possibly increasing their operational lifetime .in the section [ example ] , we give an example of the situation where multiple informants with correlated information communicate with single recipient .we use this example to illustrate the complexity of communication in such scenarios and various other fundamental issues and our results .section [ worst_case_theory ] develops the formalism to compute the worst - case communication complexity in the scenarios where multiple correlated informants communicate with a single recipient .section [ sensornet_appl ] illustrates some of the ideas proposed and developed in this paper in the context of a sensor network with one particular model of spatial correlation of sensor data .finally , the section [ conclusions ] lists the contributions of our work and concludes the paper .there are groups , and each group has teams . in every match , two teams from two different groups play against each other . 
the result of each match is announced over radio. matches always result in a clear winner .the format of the radio announcements is : `` today the teams from groups and played against each other .the match between teams and was won by team . ''three persons , , and are involved . listens to the first part of the announcement `` today the teams from groups and played against each other . ''and then the radio is snatched by person , who listens to the portion `` the match between teams and was won by '' before the radio is snatched from him by , who listens to the portion `` team '' .now all three persons agree that must know which two teams played and who the winner was ( and need not learn which two teams from which two groups actually played and which one finally won ) .they want to find the most efficient way to do this . wants to know which two teams from the groups and played and which one actually won .suppose only communicates with , then will only know which two teams from the groups and actually played the match , but not the winner .on the other hand , if only communicates with , then knows the teams from which two groups played and who was the winner , but he may not know who the other team was . in this problem, it is essential that the identity of the group to which a team belongs to is included in the name of the team , that is , the names of the teams have to be globally unique .suppose on the contrary , that the names of the teams are only unique within a group , then two or more groups might have the the teams with the same name .so , in the event of a match between two teams from two different groups with the same name playing against each other , will not be able to make the winner out of the information sent to him by , even if has already informed of which two teams have played .so , we are demanding that once and have communicated all their information to , then must be able to unambiguously infer which two teams had played as well as who the winner was .the previous argument proves that this demand is satisfied if and only if the team names are globally unique .it should be noted that the total number of bits exchanged to complete the entire process of communication depends very much on the format of the announcement .for example , in the original problem in , if the format of the announcement is : `` the match between teams and is won by first / second team '' with the protocol that the ` first ' ( ` second ' ) corresponds to the first ( second ) team mentioned in the announcement . in such a situation , only one bit needs to be sent to , with or without interaction between and . sends bits to identify one of the groups and the team from it . after this message, it does not need to identify the second group as it is obvious to .so , needs to send another bits to enable know which is the team from the other group ( message 1 , 2 ) . sends bits to to help him know who the winner was ( message 3 ) . there are two scenarios here .one where communicates its information to before communicates with .the other scenario is the one where sends his information to before sends .we have to compute the number of bits exchanged for both the scenarios .* when communicates with before : * knows the two groups from which two teams played against each other . encodes in bits the names of the groups .so , sends in bits the first bit location at which the encodings of the two groups differ to ( message 1 ) . 
on its turn , sends the value of the first bit at which the encodings of two groups differ along with the bits to identify the team within one of the groups . to help identify the team from the other group it just needs to send bits to as the identity of the group to which this team belongs tois already known to by now ( message 2 , 3 , 4 ) . at the end of this step , knows which two teams had played .so , now in bits it sends the identity of the first bit location at which the encodings of the two teams differ to ( message 5 ) and responds by sending the value of that bit ( message 6 ) . with this determine who the winner was .so under this scenario , the number of bits exchanged are : * when communicates with before : * sends in bits the location of the first bit at which the encodings of the two groups differ to ( message 1 ) . in its turn , sends the value of the bit at that location along with the encoding of the winning team in number of bits ( message 2 , 3 ) . at the end of this step, knows which was the winning team and to which group it belonged to too .so , all that does not know now is that which team from the other group also played in the match .it sends to the location and value of the first bit at which the encodings of two two teams differ in bits ( message 4 , 5 ) . then responds by sending number of bits to identify the team from the given group ( message 6 ) .so under this scenario , the number of bits exchanged are : note , , and show that if we consider , total number of bits exchanged in the entire communication or the total number of bits transmitted by persons and together , interaction helps in reducing the number of bits compared to when no interaction is allowed .further , when communicates with before , the number of exchanged bits are less than when communicates with before . if we adopt the convention that all the messages sent by a source , until some other source sends the messages , form one message , then in all the above situations at most four messages are exchanged .so , in a communication protocol following this convention , the source concatenates all the messages that it sends before some other source begins to send the messages , and receiver knows how to parse the concatenated message into its individual messages . as and the example aboveprove , the interaction reduces the number of bits exchanged between the informants and the recipient .however , from the above example , it also becomes clear that even for the given number of messages exchanged between the informants and the recipient , in general , the number of bits exchanged depends on the order in which the informants communicate with the recipient .so in the above example , with four messages allowed , the number of bits exchanged between , and depend on whether communicates with before or after .we conjecture that this is so due to `` somewhat '' asymmetric nature of the distribution of the information at the nodes and of this particular example and loosely , we can say that the messages from contain more information than those from .in this section we attempt to develop a theory of worst - case interactive communication . to keep the discussion simple and clear , we consider a general scenario involving two informants ( and ) and one recipient ( ) .however , the same formalism can be extended for the scenarios involving more than two informants . 
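The bit counts for the two schedules can be made concrete with a small numerical sketch. The decomposition below is one reading of the message sequence described above; the exact expressions in the original depend on quantities not reproduced in this excerpt, so both the formulas and the parameter values (eight groups of four teams, globally unique team names) are illustrative assumptions rather than the paper's own numbers.

```python
import math

# Illustrative bit counts for the league example under the two schedules.
# The message decomposition follows the prose above; sizes are arbitrary.
lg = lambda m: math.ceil(math.log2(m))

g, t = 8, 4                     # groups and teams per group (assumed)
group_bits = lg(g)              # bits to name a group
within_bits = lg(t)             # bits to name a team within a known group
team_bits = lg(g * t)           # bits for a globally unique team name

# Schedule 1: the informant who knows the two teams communicates first.
s1 = (lg(group_bits)            # recipient: index of first differing bit of the two group codes
      + 1 + 2 * within_bits     # informant: that bit's value + the team within each group
      + lg(team_bits)           # recipient: index of first differing bit of the two team codes
      + 1)                      # second informant: value of that bit (reveals the winner)

# Schedule 2: the informant who knows the winner communicates first.
s2 = (lg(group_bits)            # recipient: index of first differing bit of the two group codes
      + 1 + team_bits           # informant: that bit's value + full name of the winning team
      + lg(team_bits) + 1       # recipient: index and value of first differing team-code bit
      + within_bits)            # second informant: the team from the remaining group

print("bits exchanged, teams-informant first :", s1)
print("bits exchanged, winner-informant first:", s2)
```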
for the sake of completeness and to facilitate the discussion that follows , let us reintroduce the notion of _ ambiguity set _ and some related concepts , defined originally in .let be a random pair , with _ support set _ .the _ support set _ of is the set of possible values . , the _ support set _ of , is similarly defined .s _ ambiguity set _ when his random variable takes the value is the set of possible values when .ambiguity _ in that case is the number of possible values when .the _ maximum ambiguity _ of is the maximum number of values possible with any given value .assume that a total of messages are allowed , with messages per informant allowed to be exchanged between the informant and the recipient .there are two possible schedules in which the informants can communicate with the recipient : either communicates first with or communicates first . in the spirit of ,let us introduce some more definitions : : -message worst - case complexity of transmitting to a person who already knows , when communicates first . : -message worst - case complexity of transmitting to a person who already knows , when communicates first . using these definitions, we have : in general , .we omit the detailed proof for the sake of brevity . however , the league example of the previous section provides an example supporting the statement of the theorem ._ corollary : _ the unbounded interaction complexities satisfy .it is easy to prove the following trivial , but quite useful lower bounds on the unbounded interaction complexities . for all ( x , y , z ) tuples, similar bounds exists for , , and .since empty messages are allowed , it is obvious that is a decreasing function of .this holds true for other complexities , such as etc too .this fact together with previous lemma implies that with similar bounds for , , and .we can use above results to find the communication complexity of the version of the league problem discussed in the previous section and arrive at the results of and .if we identify the sets , and appropriately , we can directly use the results from .for example , the relevant support sets , , , and , with are : this section , we apply the formalism developed in the previous sections to illustrate the computation of worst - case and average - case interactive communication complexities . we assume following spatial correlation model for the sensor data .let be the random variable representing the sampled sensor reading at node and denote the number of bits that the node has to send to the base - station .let us assume that each node has at most number of bits to send to the base - station , so .however , due to the spatial correlation among sensor readings , each sensor may send less than number of bits .let us define a data - correlation model as follows .let denote the distance between nodes and .let us define , the number of bits that the node has to send when the node has already sent its bits to the base - station , as follows : figure [ fig1 ] illustrates this for .it should be noted that when , the data of the nodes and differ in at most least significant bits .so , the node has to send , at most , least significant bits of its bit data. 
from the definition above in , follows the symmetry of the conditional number of bits : however , the definition of the correlation model is not complete yet and we must give the expression for the number of bits transmitted by a node conditioned on more than one node already having transmitted their bits to the base - station .there are several ways in which this quantity can be defined , we choose the following definition : so , with this definition , the number of bits transmitted by any node in a schedule depends only on the node nearest to it among all the nodes already polled in the schedule . for a given schedule , first node transmits its bits to the base - station . the omniscient base - station based on its knowledge of the correlation model in as well as all the internode distances, knows that its _ maximum ambiguity _ about second node s data is , as bit - patterns are possible for the least - significant bits of s data .so , even if we allow unbounded interaction between the base - station and the node , it follows from the results of the previous section that at least bits are exchanged , even by the optimal communication protocol . however , here we are more concerned with demonstrating the reduction in the communication complexity due to the interaction between the sensor nodes and the base - station rather than with proposing the schemes which achieve the optimal lower bounds of the communication complexity .so , here we propose an _ almost optimal _protocol for the communication between the base - station and a sensor node .base - station informs the node in bits ( if , then the base - station sends one bit to the corresponding node ) to transmit least significant bits of its information and the sensor node responds by sending corresponding bits .note that if no interaction is allowed between the base - station and the sensor node , then the sensor node has to send bits to the base - station . continuing this process , the base - station queries all the nodesso , the worst - case communication complexity of schedule is the sum of the total number of bits sent by the base - station and the number of bits sent by the sensor - nodes , in the worst - case .it is given by : let be the set of all possible schedules .then optimally minimum value of is achieved for that schedule that solves the following optimization problem : given the definition of the correlation model in and , it is easy to see that the optimum schedule is generated by a greedy scheme that chooses the next node in the schedule ( from the set of the nodes not already scheduled ) to be that node that is nearest to the set of already scheduled nodes . as noted above in the discussion of the correlation model of , the data of two correlated nodes and can differ in at most number of least significant bits . however , it is not necessary that it _ actually _ differs in those many bits .so , there can be the situations where even if the base - station has estimated that the node has to send number of its least - significant bits and communicated this to the node , the node s data differs from the the data of node in less than least - significant bits .in such situations , it is sufficient for the base - station to reconstruct the data of the node if the node sends only those bits where its data actually differs from that of node . 
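The greedy schedule described above is easy to prototype. Since the exact conditional-bit function of the correlation model is not reproduced in this excerpt, the sketch below assumes an illustrative monotone function of inter-node distance, together with random node positions and a fixed per-query overhead; it is meant only to show the structure of the greedy choice and the worst-case cost of a schedule, not to reproduce the paper's figures.

```python
import numpy as np
from itertools import permutations

# Sketch of the greedy polling schedule.  Assumed conditional-bit model:
# n_{i|j} = min(n, ceil(log2(1 + d_ij))), depending only on the nearest
# already-polled node, as in the correlation model discussed above.
rng = np.random.default_rng(1)
num_nodes, n_bits = 6, 10
pos = rng.uniform(0, 200, size=(num_nodes, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

def cond_bits(i, polled):
    j = min(polled, key=lambda k: d[i, k])              # nearest already-polled node
    return min(n_bits, int(np.ceil(np.log2(1.0 + d[i, j]))))

def schedule_cost(order):
    total, polled = n_bits, [order[0]]                  # first node sends all n bits
    for i in order[1:]:
        total += cond_bits(i, polled)                   # bits the node sends
        total += int(np.ceil(np.log2(n_bits)))          # bits the base-station uses to ask for them
        polled.append(i)
    return total

# Greedy: always poll next the node closest to the set of already-polled nodes.
order, remaining = [0], set(range(1, num_nodes))
while remaining:
    nxt = min(remaining, key=lambda i: min(d[i, j] for j in order))
    order.append(nxt)
    remaining.remove(nxt)

best = min(schedule_cost(p) for p in permutations(range(num_nodes)))
print("greedy order:", order, "greedy cost:", schedule_cost(order), "exhaustive optimum:", best)
```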
given the number of its least - significant bits that the node has to send to the base - station , there are possible bit - patterns , one out of which the node has to communicate to the base - station . assuming that the each of these bit - patterns are uniformly distributed with probability , then following the typical huffman - coding argument , the node can communicate its data to the base - station in at most bits on average .given a communication schedule , the first node in the schedule sends its bits to the base - station .based on this and the knowledge of the correlation model given in , the base - station informs the second node in bits to send its information in bits . assuming the uniform - distribution, the node sends the requested information in bits , in average . continuing this process , the base - station queries all the nodes .so , the average - case communication complexity of schedule is the sum of the total number of bits sent by the base - station and the number of bits sent , on average , by the sensor - nodes .it is given by : the optimal value of is achieved for that schedule that solves the following optimization problem : it is easy to see that , once again , the optimum schedule is generated by a greedy scheme that chooses the next node in the schedule to be that node that is nearest to the set of the nodes which are already scheduled . _remark : _ comparing and , it may appear that the average - case performance of a schedule is no better than its worst - case performance , but it should be noted that the average - case analysis is done for the uniform distribution of the bit - patterns , that gives the maximum entropy . for any other distribution of the bit - patterns , the average communication complexity will be lesser than given by .this paper proposes a new framework , based on exploiting the redundancy induced by the spatio - temporal correlations in the sensor data and the reduction in communication complexity due to interaction , to reduce the total number of bits sent by the sensor nodes in a single - hop data - gathering sensor network .the proposed formalism views the problem of many sensor nodes communicating with the base - station as the problem of many informants with the correlated information communicating with single recipient .we extend various existing results on single informant - recipient pair communication to the present case .we show that such extensions lead to various non - trivial new results .finally , we apply this new framework to compute the worst - case and average - case communication complexities of a typical sensor network scenario and demonstrate the significance of our contribution . 1 .we show that interaction helps in reducing the communication complexity also in the scenarios where more than one informant are involved .2 . 
We show that when the _ambiguity_ is more than two, no fixed number of messages is optimal. 3. We show that, in scenarios involving more than one informant, the order in which the informants communicate with the recipient may determine the communication complexity. We conjecture that if the nodes which have 'more' information communicate first, the overall communication complexity is reduced; essentially, this calls for a metric that quantifies the amount of information, though not in the Shannon sense. 4. We show that when multiple informants communicate with a recipient, the m-message complexity of communication between an informant and the recipient can be computed by directly modifying the hypergraph based on the information provided by the previous informants in the communication schedule.
In this work, we are concerned with maximizing the lifetime of a cluster of sensors engaged in single-hop communication with a base-station. In a data-gathering network, the spatio-temporal correlation in sensor data induces data redundancy. Also, interaction between two communicating parties is well known to reduce the communication complexity. This paper proposes a formalism that exploits these two opportunities to reduce the number of bits transmitted by a sensor node in a cluster, hence enhancing its lifetime. We argue that our approach has several inherent advantages in scenarios where the sensor nodes are acutely energy- and computing-power-constrained but the base-station is not; this provides an opportunity to develop communication protocols in which most of the computing and communication is done by the base-station. The proposed framework casts the sensor-node-to-base-station communication problem as the problem of multiple informants with correlated information communicating with a recipient, and attempts to extend extant work on interactive communication between an informant-recipient pair to such scenarios. Our work makes four major contributions. First, we explicitly show that in such scenarios interaction can help in reducing the communication complexity. Second, we show that the order in which the informants communicate with the recipient may determine the communication complexity. Third, we provide the framework to compute the m-message communication complexity in such scenarios. Last, we prove that in a typical sensor network scenario, the proposed formalism significantly reduces the communication and computational complexities.
in their influential paper the geometry of graphs and its algorithmic applications " , linial et al . introduce a novel and powerful set of techniques to the algorithm designer s toolkit .they show how to use the mathematics of metric embeddings to help solve difficult problems in combinatorial optimization .the approach inspired a large body of further work on metric embeddings and their applications .our objective here is to show how this extensive body of work might be generalized to the _ geometry of hypergraphs_. recall that a _ hypergraph _ consists of a set of vertices and a set of _ hyperedges _ , where each is a subset of .the underlying geometric objects in this new context will not be metric spaces , but _ diversities _, a generalization of metrics recently introduced by bryant and tupper .diversities are a form of multi - way metric which have already given rise to a substantial , and novel , body of theory .we hope to demonstrate that a switch to diversities opens up a whole new array of problems and potential applications , potentially richer than that for metrics .the result of which is of particular significance to us is the use of metric embeddings to bound the difference between cuts and flows in a _ multi - commodity flow _ problem .let be a graph with a non - negative edge capacity for every edge .we are given a set of _ demands _ for .the objective of the multi - commodity flow problem is to find the largest value of such that we can simultaneously flow at least units between and for all and . as usual, the total amount of flow along an edge can not exceed its capacity .multi - commodity flow is a linear programming problem ( lp ) and can be solved in polynomial time .the _ dual _ of the lp is a relaxation of a min - cut problem which generalizes several np - hard graph partition problems .given let be the sum of edge capacities of edges joining and and let denote the sum of the demands for pairs with and .we then have for every .when there is a single demand , the minimum of equals the maximum value of , a consequence of the max - flow min - cut theorem . in general , for more than one demand there will be a gap between the values of the minimum cut and the maximum flow .linial et al , building on the work of , show that this gap can be bounded by the _ distortion _ required to embed a particular metric ( arising from the lp dual ) into space .the metric is _ supported _ on the graph , meaning that it is the shortest path metric for some weighting of the edges . by applying the extensive literature on distortion bounds for metric embeddingsthey obtain new approximation bounds for the min - cut problem . in this paperwe consider generalizations of the multi - commodity flow and corresponding minimum cut problems .a natural generalization of the single - commodity maximum flow problem in a graph is _ fractional steiner tree packing _ . given a graph with weighted edges , and a subset ,find the maximum total weight of trees in spanning such that the sum of the weights of trees containing an edge does not exceed the capacity of that edge .whereas multi - commodity flows are typically used to model transport of physical substances ( or vehicles ) , the steiner tree packing problem arises from models of information , particularly the broadcasting of information ( see for references ) .the fractional steiner tree packing problem generalizes further to incorporate multiple commodities , a formulation which occurs naturally in multicast and vlsi design applications ( see ) . 
for each have a demand ( possibly zero ) and the set of trees in the graph spanning .a _ generalized flow _ in this context is an assignment of non - negative weights to the trees in for all , with the constraint that for each edge , the total weight of trees including that edge does not exceed the edge s capacity .the objective is to find the largest value of for which there is a flow with weights satisfying for all demand sets .these problems translate directly to hypergraphs , permitting far more complex relationships between the different capacity constraints . as for graphs , we have demands defined for all .each hyperedge has a non - negative capacity .we let denote the set of all _ minimal connected sub - hypergraphs _ which include ( not necessarily trees ) .a flow in this context is an assignment of non - negative weights to the trees in for all , with the constraint that for each hyperedge , the total weight of trees including that hyperedge does not exceed the hyperedge s capacity . as in the graph case ,the aim is determine the largest value of for which there is a flow with weights satisfying the constraint for all demand sets .all of these generalizations of the multi - commodity flow problem have a dual problem that is a relaxation of a corresponding min - cut problem . for convenience , we assume any missing edges or hyperedges are included with capacity zero . fora subset let be the set of edges or hyperedges which have endpoints in both and .the min - cut problem in the case of graphs is to find the cut minimizing where runs over all pairs of distinct vertices in , while in hypergraphs we find which minimizes where run over all subsets of . in both problemsthe value of a min - cut is an upper bound for corresponding value of the maximum flow .linial et al . showed that the ratio between the min - cut and the max flow can be bounded using metric embeddings .our main result is that this relationship generalizes to the fractional steiner problem with multiple demand sets , on both graphs and hypergraphs , once we consider diversities instead of metrics .the following theorems depend on the notions of diversities being supported on hypergraphs and -embeddings of diversities , which we will define in subsequent sections .[ major1 ] let be a hypergraph .let be a set of edge capacities and a set of demands .there is a diversity supported on , such that the ratio of the min - cut to the maximum ( generalized ) flow for the hypergraph is bounded by the minimum distortion embedding of into .gupta et al . proved a converse of the result of linial et al . by showing that , given any graph and metric supported on it, we could determine capacities and demands so that the bound given by the minimal distortion embedding of into was tight . we establish the analogous result for the generalized flow problem in hypergraphs .[ major2 ] let be a hypergraph , and let be a diversity supported on it .there is a set of edge capacities and a set of demands so that the ratio of the min - cut to the maximum ( generalized ) flow equals the distortion of the minimum distortion embedding of into .a major benefit of the link between min - cut and metric embeddings was that linial et al . andothers could make use of an extensive body of work on metric geometry to establish improved approximation bounds . 
in our context , the embeddings of diversities is an area which is almost completely unexplored .we prove a few preliminary bounds here , though much work remains .+ the structure of this paper is as follows .we begin in section 2 with a brief review of diversity theory , including a list of examples of diversities . in section 3we focus on and diversities , which are the generalizations of and metrics .these diversities arise in a variety of different contexts .fundamental properties of diversities are established , many of which closely parallel results on metrics . in section 4we show how the concepts of metric embedding and distortion are defined for diversities , and establish a range of preliminary bounds for distortion and dimension . finally ,in section 5 , we prove the analogues of linial et al s and gupta et al s results on multi - commodity flows , as stated in theorems [ major1 ] and [ major2 ] above .a _ diversity _ is a pair where is a set and is a function from the finite subsets of to satisfying _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ( d1 ) , and if and only if .+ ( d2 ) if then _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ for all finite .diversities are , in a sense , an extension of the metric concept .indeed , every diversity has an _ induced metric _ , given by for all .note also that is _ monotonic _ : implies . for convenience ,in the remainder of the paper we will relax condition ( d1 ) and allow even when .likewise , for metrics we allow even if .we define embeddings and distortion for diversities in the same way as for metric spaces .let and be two diversities and suppose . a map has _ distortion _ if there is such that and for all finite .we say that is an _isometric embedding _ if it has distortion and an _ approximate embedding _ otherwise .bryant and tupper provide several examples of diversities .we expand that list here . 1 ._ diameter diversity ._ let be a metric space .for all finite let 2 . _ diversity ._ let denote the diversity , where for all finite . diversity ._ let be a measure space and let denote the set of all all measurable functions with .an diversity is a pair where is given by for all finite . to see that satisfies ( d2 ) , consider the triangle inequality for the diameter diversity on a real line and integrate over .phylogenetic diversity ._ let be a phylogenetic tree with taxon set .for each finite , the _ phylogenetic diversity _ of is the length of the smallest subtree of connecting taxa in .steiner diversity ._ let be a metric space .for each finite let denote the minimum length of a steiner tree connecting elements in ._ hypergraph steiner diversity ._ let be a hypergraph and let be a non - negative weight function .given let denote the minimum of over all subsets such that the sub - hypergraph induced by is connected and includes . then is a diversity ._ measure diversity ._ let be a measure space , where is a -algebra of subsets of and ] .hence we define the _ mean - width diversity _ so that the induced metric of is the euclidean metric . here is the beta function .note that , see ._ -diversity_. 
let be a collection of random variables taking values in the same state space .for every finite let be the probability that do not all have the same state .then is a diversity , termed the _ -diversity _ since , the proportion of segregating ( non - constant ) sites , is a standard measure of genetic diversity in an alignment of genetic sequences ( see , e.g. ) .below , we will show that diversities , phylogenetic diversities , measure diversities , mean - width diversities and -diversities are all examples of -embeddable diversities .in metric geometry we say that one metric _ dominates _ another on the same set if distances under the first metric are all greater than , or equal to , distances under the second .the relation forms a partial order on the cone of metrics for a set : given any two metric spaces and we write if for all .the partial order provides a particularly useful characterization of the standard shortest - path graph metric .let be a graph with edge weights .the shortest path metric is then the unique , maximal metric ( under ) which satisfies for all . given that the _ geometry of graphs _ of is based on the shortest path metric , it is natural to explore what arises when we apply the same approach to diversities .we say that a diversity dominates another diversity if for all finite , in which case we write .applying these to graphs , and hypergraphs , we obtain the diversity analogue to the shortest - path metric .[ thm : steinmax ] 1 .let be a graph with non - negative weight function .the steiner tree diversity is the unique maximal diversity such that for all .2 . let be a hypergraph with non - negative weight function .the hypergraph steiner diversity is the unique maximal diversity such that for all .note that 1. is a special case of 2 .we prove 2 .let denote the hypergraph steiner diversity for . for any edge , the edge itself forms a connected sub - hypergraph , so .let be any other diversity which also satisfies for all .for all there is such that the sub - hypergraph induced by is connected , contains , and has summed weight .multiple applications of the triangle inequality ( d2 ) gives as a further consequence , we can show that the hypergraph steiner diversity dominates all diversities with a given induced metric .[ divbounds ] let be a diversity with induced metric space .let denote the diameter diversity on and let denote the steiner diversity on .then for all finite , if then . suppose .there is such that the last inequality following from the monotonicity of .let be the complete graph with vertex set and edge weights .then by theorem [ thm : steinmax ] . to obtain the final inequality ,consider any ordering of the elements of : .then , using the triangle inequality repeatedly gives diversities were defined in section [ sec : diversitylist ] .we say that a diversity is -embeddable if there exists an isometric embedding of into an diversity . a direct consequence of the definition of diversities ( and the direct sum of measure spaces )is that if and are both diversities then so are and for . hence the -embeddable diversities ona given set form a cone .deza and laurent make a systematic study of the identities and inequalities satisfied by the cone of _ metrics_. 
much of this work will no doubt have analogues in diversity theory .for one thing , every identity for metrics is also an identity for the induced metrics of diversities .however diversities will satisfy a far richer collection of identities .one example is the following .[ prop : circlel1 ] let be -embeddable and let be finite subsets of with union .then first suppose embeds isometrically in , the diameter diversity on .let and be the minimum and maximum elements in . identify with and with .there is such that , and , without loss of generality , . if then if then , without loss of generality , .select such that , and for all . then , considering two different paths from to we obtain the case for general -embeddable diversities can be obtained by integrating this inequality over the measure space .espnola and piatek investigated when hyperconvexity for diversities implied hyperconvexity for their induced metrics , proving that this held whenever the induced metric of a diversity satisfies for all .( see for definitions and results ) .a consequence of proposition [ prop : circlel1 ] is that this property holds for all -embeddable diversities .if is embeddable then its induced metric satisfies for all finite .suppose that .there are cycles of length through , and each edge is contained in exactly such cycles . for each cycle we have from proposition [ prop : circlel1 ] that hence we now examine three examples of diversities which are -embeddable . in all three cases , the diversity need not be finite , nor even finite dimensional .later , we examine -embeddable diversities for finite sets .measure diversities , -diversities and mean - width diversities are all -embeddable .we treat each kind of diversity in turn .+ _ measure diversities_. + in a measure diversity any element can be naturally identified with the function in .observe now that _ mean - width diversities ._ + let be the -dimensional mean - width diversity . consider where is the unit sphere in , is the borel subsets of and is the measure given by for all where is the surface measure on .let for be the function for . then thus is embedded in .+ _ -diversities_. + let be an -diversity .suppose that the random variables in have state space and that they are defined on the same probability space .for each let be given by if and otherwise .then in the case of measure diversities , we can also prove a converse result , in the sense that every diversity can be embedded in a measure diversity .we first make some observations about .consider the map given by & \mbox{if } ; \\\left [ x , 0 \right ) & \mbox{if }. \end{array } \right.\ ] ] note that , where is lebesgue measure on .furthermore , we have that to see that this is true , we consider three cases .we let be the minimum of all and be the maximum . in case 1 ,all the are non - negative. then ] .this gives the result . in case 2 ,all the are negative and the result follows similarly . in case 3 , some of the are positive and some of the are negative . in this case $ ] and is empty .any -embeddable diversity can be embedded in a measure diversity . without loss of generality , consider the diversity where is a subset of .we construct a new measure space , i.e. the product measure of with lebesgue measure on .for we define by we then have that for all finite subsets of we have further results can be obtained for -embeddable diversities when is finite , say . 
in this case , the study of diversities reduces to the study of non - negative combinations of _ cut diversities _ , also called _ split diversities _, that are directly analogous to _ cut metrics_. given define the diversity by in other words , when cuts into two parts .the set of non - negative combinations of cut diversities for form a cone which equals the set of -embeddable diversities on .[ charact_l1_embed ] suppose that and is a diversity .the following are equivalent . 1. is -embeddable . is -embeddable for some .3 . is a _ split system diversity _ ( see ) .that is , is a non - negative combination of cut diversities .( i)(iii ) + let be an embedding from to . for each and each let letting if this is negative .define then for all and all we have and so ( iii ) ( ii ) .+ fix .we can write as for all where runs over all subsets of containing .this collection of subsets of can be partitioned into disjoint chains by dilworth s theorem .denote these chains by so that we will show that for every chain the diversity is -embeddable. the result follows . to this end , define by where is the minimal element of the chain that contains . then \(ii ) ( i ) .+ follows from the fact that is itself an diversity .diversities formed from combinations of split diversities were studied by and in literature on phylogenetic diversities .proposition [ prop : finitel1 ] is a restatement of theorems 3 and 4 in .[ prop : finitel1 ] let be a finite , -embeddable diversity , where for all , where we assume . for all we have the identity andif we have from these we obtain the following characterization of finite , -embeddable metrics .a finite diversity is -embeddable if and only if it satisfies and for all , such that .necessity follows from proposition [ prop : finitel1 ] . for sufficiency , observe that the map from a weight assignment to a diversity is linear and , by proposition [ prop : finitel1 ] , invertible for the space of weight functions satisfying for all .the image of this map therefore has dimension . from we that the diversities for odd can be written in terms of diversities for even .hence the space of diversities satisfying has dimension and lies in the image of the map .condition [ eq : finitel1splits ] ensures that the diversity is given by a _non - negative _ combination of cut diversities .given two metric spaces and we can ask what is the minimal distortion embedding of into , where the minimum is taken over all maps .naturally , we can ask the same question for diversities . whereas the question for metric spaces is well - studied ( though still containing many interesting open problems ) the situation for diversities is almost completely unexplored .we state some preliminary bounds here , most of which leverage on metric results . we begin by proving bounds for several types of diversities defined on .[ lem : divrk ] let and be the diameter diversities on , evaluated using and metrics respectively .let and be the and mean - width diversities on . then for all finite all bounds are tight .the inequalities and are due to theorem [ divbounds ] . to prove the bounds , note that for each dimension there are which maximize .hence with equality given by subsets of . to prove the mean - width bound notethat , by jung s theorem , a set of points in with diameter is contained in some sphere with radius , where hence is contained in a set with mean width . 
fromwe have where again denotes the beta function .the bound holds in the limit for points distributed on the surface of a sphere .we now investigate upper bounds for the distortion of diversities into space .to begin , we consider only diversities which are themselves diameter diversities . in many senses ,these diversities are similar to metrics , and it is perhaps no surprise that they can embedded with a similar upper bound as their metric counterparts .let be a metric space , , and let be the corresponding diameter diversity . 1 .there is an embedding of in with distortion and .2 . there is an embedding of in with distortion and .any metric on points can be embedded into the metric space with distortion , where .let be an embedding for with for all , where is .as above , we let denote the diameter diversity for the metric . for all we have from lemma [ lem : divrk ] that the result now follows since is and is . \2 . as shown in ( see also ) , there is an embedding of into with for all , where and are . for all we have from lemma [ lem : divrk ] that result follows .we now consider the problem of embedding general diversities .the bounds we obtain here can definitely be improved : we do little more than slightly extend the results for diameter diversities . [ thm : upperb ] let be a diversity with . 1 . can be embedded in with with distortion . can be embedded in with with distortion .any diversity can be approximated by the diameter diversity of its induced metric with distortion , as shown in theorem [ divbounds ] .this fact together with the previous theorem gives the required bounds . from upper bounds we switch to lower bounds .any embedding of diversities with distortion induces an embedding of the underlying metric with distortion at most .hence we can use the examples from metrics to establish that there are diversities which can not be embedded in with better than an distortion .we have been able to obtain slightly tighter lower bounds for embeddings into where is bounded .[ prop : nobourgain ] let be the -point diversity with for all non - empty .then the minimal distortion embedding of into has distortion at least . for any embedding of ,lemma [ lem : divrk ] shows that for some , .the distortion of is equal to taking and shows that the distortion is at least .a consequence of proposition [ prop : nobourgain ] is that there will , in general , be no embedding of diversities in for which both the distortion and dimension is , or indeed polylog , ruling out a direct translation of the classical embedding results for finite metrics .even so , we suspect that the upper bounds achieved in theorem [ thm : upperb ] can still be greatly improved .having reviewed diversities , diversities , and the diversity embedding problems , we return to their application in combinatorial optimization .we will here establish analogous results to those of and for hypergraphs and diversity embeddings into .we first state the extensions of maximum multicommodity flows and minimum cuts a little more formally . 
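Before stating those extensions formally, one remark on computing distortion in practice: for small finite sets the minimal-distortion question can at least be probed by brute force, which is useful for building intuition about the bounds above. The sketch below does this for the identity map between two diversities on the corners of the unit square; the reading of "distortion" as worst-case expansion times worst-case contraction over all subsets with at least two points is an assumption, mirroring the usual metric definition.

```python
# Brute-force the distortion of a map between two diversities on a small set.
# Distortion is taken (assumption, by analogy with metrics) to be the product
# of the worst-case expansion and the worst-case contraction over all finite
# subsets with at least two points.
from itertools import combinations, chain
import math

def subsets(points, min_size=2):
    return chain.from_iterable(combinations(points, k)
                               for k in range(min_size, len(points) + 1))

def distortion(points, delta_src, delta_dst, f=lambda p: p):
    expansion = contraction = 0.0
    for A in subsets(points):
        a, b = delta_src(list(A)), delta_dst([f(p) for p in A])
        if a == 0 or b == 0:
            continue
        expansion = max(expansion, b / a)
        contraction = max(contraction, a / b)
    return expansion * contraction

# l1 diversity versus the diameter diversity of the Euclidean metric (2D).
def l1_div(A):
    return sum(max(p[i] for p in A) - min(p[i] for p in A) for i in range(2))

def euclid_diam(A):
    return max(math.dist(p, q) for p, q in combinations(A, 2))

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(distortion(pts, euclid_diam, l1_div))   # sqrt(2), from the diagonal pair
```

For the four corners of the unit square the computed value is sqrt(2), coming entirely from the diagonal pair and the full set; the axis-aligned pairs contribute no expansion or contraction.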
given a hypergraph , non - negative weights for and ,the goal is find the maximum weighted sum of minimal connected sub - hypergraphs covering without exceeding the capacity of any hyperedge .let be set of all minimal connected sub - hypergraphs of that include .for each sub - hypergraph assign weight .we consider the following generalization of _ fractional steiner tree packing _ which we call _ maximum hypergraph steiner packing _ : identify satisfying the lp : as before , if we define for all subsets of , and let it be zero for , we can drop the dependence of the problem on .the reference studies an oriented version of this problem . as with flows ,maximum hypergraph steiner packing has a multicommodity version . for each subset of we have non - negative demand .we view and as non - negative vectors indexed by all subsets of .suppose we want to simultaneously connect up all with minimal connected sub - hypergraphs carrying flow for all and we want to maximize .the corresponding optimization problem is : note that we use rather than just because the same connected sub - hypergraph might cover more than one set in the hypergraph .we call the optimal value of for this problem , for _ maximum multicommodity hypergraph steiner packing_. next we define the appropriate analogues of the min - cut problem , which we call _ minimum hypergraph cut_. as before , we let be the set of hyperedges which have endpoints in both and , and we make the simplifying assumption that every subset is a hyperedge , including any missing hyperedges with capacity zero .we define below we will show that we define we say that a non - negative vector is _ supported on the hypergraph _ if for . then for any hypergraph we define to be the greatest value of over all nonnegative and such that is supported on .we say that a diversity on is _ supported on _ if it is the hypergraph steiner diversity of for some set of non - negative weights for . for any diversity on we define to be the minimal distortion between and an -embeddable diversity on . for any hypergraph define to be the maximum of over all diversities supported on .the major result for this section is that for all hypergraphs the fact that ( our theorem 1 ) is the analogue of results in section 4 of and the fact that equality holds ( our theorem 2 ) is the analogue of theorem 3.2 in . [prop : characterizemaxhsp ] for all , where is the set of all diversities on .in particular , the optimal is supported on the hypergraph where is the set of all such that .we rewrite the linear program in standard form .we break the equality constraint into and and note that we can omit the constraint , because it will never be active .then we get let be the dual variables corresponding to the first set of inequality constraints , and let be the inequalities corresponding to the second set of inequality constraints .then the dual problem is by strong duality , and have the same optimal values .next we show that is equivalent to where the minimum is taken over all diversities . to see the equivalence of and , suppose that is a diversity solving .let for all and for all .then the objective function of is the same , the second line of still holds , the third line holds by the triangle inequality for diversities , and the fourth and fifth line hold by the non - negativity of diversities . to see the other direction ,suppose and solve .let be the steiner diversity on generated edge weights , . 
since for all , this can only decrease the objective function .also , by the definition of , for all , so the inequality of is satisfied too .thus the two lps have the same minima .note we can assume that is the steiner diversity for a weighted hypergraph with hyperedges . if not , we can replace with the steiner diversity on the hypergraph whose hyperedges are the set and whose weights are the .this steiner diversity will have the same value on the hyperedges as , so the objective function will not change , but the value can only increase on other subsets of , and so the constraint is still satisfied .finally , is equivalent to this is because , any solution of will only have a smaller or equal value for the objective function of . andany solution of can be rescaled without changing the objective function so that , giving a feasible solution to with the same objective function . this rescaling will not change the hypergraph that is supported on . for any cut of , let be the corresponding cut diversity .then by definition we have that where we restrict to values where the denominator is non - zero .we need to show that this value is not decreased by taking the minimum over all -embeddable diversities instead .let be -embeddable diversity that minimizes the ratio . by proposition [ charact_l1_embed ], can be expressed as a finite linear combination of cut - diversities : for some non - negative and some subsets of .let be the index that minimizes .then we claim that to see this , observe that let for vectors with for all , where and are non - negative vectors of the same size .we claim that attains its minimum on this domain at a value of consisting of a vector with a single non - zero entry . to show this, we compute the gradient of .\ ] ] if and are parallel then the result immediately follows so assume that they are not .then is not zero anywhere in the domain , and so the maximum of must be taken on boundary of the domain .so at least one must be zero . discard this term from the numerator and the denominator of . then repeat the argument for as a function of a vector of one fewer entries .repeating gives a single non - zero value , which may be set to 1 .for the second inequality , given and hypergraph supporting , let solve the maxhsp linear program . by proposition [ prop : characterizemaxhsp ]we know that is supported on .let be the minimal - distortion embeddable diversity of .we may assume that .then as required .let be positive vectors .define if is a closed set of positive vectors , define as . if is a closed convex cone , then where the maximum is taken over all non - negative vectors for which for any .( of theorem [ thm : hyrax ] ) let be a diversity supported by the hypergraph that maximizes , and define .we need to show that where the maximum is taken over all where is supported on .let be given by , and let be the cone of all -embeddable diversities on .then .we apply the lemma to show that where the maximum is taken over all non - negative vectors which satisfy the restriction for any -embeddable diversity .this tells us that there exists such that and for any -embeddable diversity .first we show that we may assume that is supported on .suppose that for some set , we have .since is supported on there are hyperedges that form a connected set covering with .define a new vector by even with this new we still have and for any -embeddable diversity . 
to see this, first note that so secondly , since satisfies and these are the only sets on which is changed , it follows that .we repeat this procedure until we have only if .m. abramowitz and i. a. stegun ._ handbook of mathematical functions with formulas , graphs , and mathematical tables _ , volume 55 of _ national bureau of standards applied mathematics series_. for sale by the superintendent of documents , u.s .government printing office , washington , d.c . , 1964 .d. bryant and p. f. tupper .hyperconvexity and tight - span theory for diversities ._ advances in mathematics _ , 2310 ( 6):0 3172 3198 , 2012 .issn 0001 - 8708 .doi : http://dx.doi.org/10.1016/j.aim.2012.08.008 .k. jain , m. mahdian , and m. r. salavatipour .packing steiner trees . in _ proceedings of the fourteenth annual acm - siam symposium on discrete algorithms ( baltimore , md , 2003 ) _ ,pages 266274 , new york , 2003 .t. kirly and l. c. lau .approximate min - max theorems for steiner rooted - orientations of graphs and hypergraphs ._ journal of combinatorial theory , series b _ , 980 ( 6):0 1233 1252 , 2008 .issn 0095 - 8956 .doi : http://dx.doi.org/10.1016/j.jctb.2008.01.006 .t. leighton and s. rao .an approximate max - flow min - cut theorem for uniform multicommodity flow problems with applications to approximation algorithms . in _ foundations of computer science , 1988 ., 29th annual symposium on _ , pages 422431 , 1988 .doi : 10.1109/sfcs.1988.21958 .v. moulton , c. semple , and m. steel .optimizing phylogenetic diversity under constraints ._ journal of theoretical biology _ , 2460 ( 1):0 186 194 , 2007 .issn 0022 - 5193 .doi : http://dx.doi.org/10.1016/j.jtbi.2006.12.021 .
the embedding of finite metrics in has become a fundamental tool for both combinatorial optimization and large - scale data analysis . one important application is to network flow problems as there is close relation between max - flow min - cut theorems and the minimal distortion embeddings of metrics into . here we show that this theory can be generalized to a larger set of combinatorial optimization problems on both graphs and hypergraphs . this theory is not built on metrics and metric embeddings , but on _ diversities _ , a type of multi - way metric introduced recently by the authors . we explore diversity embeddings , diversities , and their application to _ steiner tree packing _ and _ hypergraph cut _ problems .
the idea of elaborating the foundation of space - time ( or foundation of relativity ) in a spirit analogous with the rather successful foundation of mathematics ( fom ) was initiated by several authors including , e.g. , david hilbert or leading contemporary logician harvey friedman .foundation of mathematics has been carried through strictly within the framework of first - order logic ( fol ) , for certain reasons .the same reasons motivate the effort of keeping the foundation of space - time also inside fol .one of the reasons is that staying inside fol helps us to avoid tacit assumptions , another reason is that fol has a complete inference system while higher - order logic can not have one by gdel s incompleteness theorem , see e.g. , vnnen or ( * ? ? ?* appendix ) . for more motivation for staying inside fol ( as opposed to higher - order logic ) , cf .e.g. , ax , pambuccian , ( * ? ? ?* appendix 1 : why exactly fol ) , , but the reasons in vnnen , ferreirs , or woleski also apply . following the above motivation , we begin at the beginning , namely first we recall a streamlined fol axiomatization of special relativity theory , from the literature .is complete with respect to ( w.r.t .) questions about inertial motion. then we ask ourselves whether we can prove the usual relativistic properties of accelerated motion ( e.g. , clocks in acceleration ) in . as it turns out ,this is practically equivalent to asking whether is strong enough to `` handle '' ( or treat ) accelerated observers .we show that there is a mathematical principle called induction ( ) coming from real analysis which needs to be added to in order to handle situations involving relativistic acceleration .we present an extended version of which is strong enough to handle accelerated clocks , in particular , accelerated observers .we show that the so - called twin paradox becomes provable in .it also becomes possible to introduce einstein s equivalence principle for treating gravity as acceleration and proving the gravitational time dilation , i.e. that gravity `` causes time to run slow '' .what we are doing here is not unrelated to field s `` science without numbers '' programme and to `` reverse mathematics '' in the sense of harvey friedman and steven simpson .namely , we systematically ask ourselves which mathematical principles or assumptions ( like , e.g. , ) are really needed for proving certain observational predictions of relativity .( it was this striving for parsimony in axioms or assumptions which we alluded to when we mentioned , way above , that was `` streamlined '' . )the interplay between logic and relativity theory goes back to around 1920 and has been playing a non - negligible role in works of researchers like reichenbach , carnap , suppes , ax , szekeres , malament , walker , and of many other contemporaries .in section [ ax - s ] we recall the fol axiomatization complete w.r.t .questions concerning inertial motion .there we also introduce an extension of ( still inside fol ) capable for handling accelerated clocks and also accelerated observers . in section [ main - s ]we formalize the twin paradox in the language of fol .we formulate theorems [ thmtwp ] , [ thmeq ] stating that the twin paradox is provable from and the same for related questions for accelerated clocks .theorems [ thmnoind ] , [ thmmo ] state that is not sufficient for this , more concretely that the induction axiom in is needed . in sections [ an - s ] , [ proofss ] we prove these theorems. 
motivation for the research direction reported here is nicely summarized in ax , suppes ; cf .also the introduction of .harvey friedman s present a rather convincing general perspective ( and motivation ) for the kind of work reported here .in this paper we deal with the kinematics of relativity only , i.e. we deal with motion of _ bodies _ ( or _ test - particles _ ) .the motivation for our choice of vocabulary ( for special relativity ) is summarized as follows. we will represent motion as changing spatial location in time .to do so , we will have reference - frames for coordinatizing events and , for simplicity , we will associate reference - frames with special bodies which we will call _ observers_. we visualize an observer - as - a - body as `` sitting '' in the origin of the space part of its reference - frame , or equivalently , `` living '' on the time - axis of the reference - frame .we will distinguish _ inertial _ observers from non - inertial ( i.e. accelerated ) ones .there will be another special kind of bodies which we will call _photons_. for coordinatizing events we will use an arbitrary _ ordered field _ in place of the field of the real numbers . thus the elements of this field will be the `` _ quantities _ '' which we will use for marking time and space . allowing arbitrary ordered fields in place of the realsincreases flexibility of our theory and minimizes the amount of our mathematical presuppositions .e.g. , ax for further motivation in this direction .similar remarks apply to our flexibility oriented decisions below , e.g. , keeping the number of space - time dimensions a variable .using coordinate systems ( or reference - frames ) instead of a single observer independent space - time structure is only a matter of didactical convenience and visualization , furthermore it also helps us in weeding out unnecessary axioms from our theories .motivated by the above , we now turn to fixing the fol language of our axiom systems .the first occurrences of concepts used in this work are set by boldface letters to make it easier to find them . throughout this work ,if - and - only - if is abbreviated to * iff*. let us fix a natural number for the dimension of the space - time that we are going to axiomatize .our first - order language contains the following non - logical symbols : * unary relation symbols ( for * bodies * ) , ( for * observers * ) , ( for * inertial observers * ) , ( for * photons * ) and ( for * quantities * which are going to be elements of a field ) , * binary function symbols , and a binary relation symbol ( for the field operations and the ordering on ) , and * a -ary relation symbol ( for * world - view relation * ) . the bodies will play the role of the `` main characters '' of our space - time models and they will be `` observed '' ( coordinatized using the quantities ) by the observers .this observation will be coded by the world - view relation .our bodies and observers are basically the same as the `` test particles '' and the `` reference - frames '' , respectively , in some of the literature .we read as `` is a body '' , `` is an observer '' , `` is an inertial observer '' , `` is a photon '' , `` is a field - element '' .we use the world - view relation to talk about coordinatization , by reading as `` observer observes ( or sees ) body at coordinate point '' .this kind of observation has no connection with seeing via photons , it simply means coordinatization . 
are the so - called atomic formulas of our first - order language , where can be arbitrary variables or terms built up from variables by using the field - operations `` '' and `` '' .the * formulas * of our first - order language are built up from these atomic formulas by using the logical connectives _ not _( ) , _ and _ ( ) , _ or _ ( ) , _ implies _ ( ) , _ if - and - only - if _ ( ) and the quantifiers _ exists _ ( ) and _ for all _ ( ) for every variable .usually we use the variables to denote observers , to denote bodies , to denote photons and to denote quantities ( i.e. field - elements ) .we write and in place of and , e.g. , we write in place of , and we write in place of etc .the * models * of this language are of the form where is a nonempty set , are unary relations on , etc .a unary relation on is just a subset of .thus we use etc . as sets as well , e.g. , we write in place of . having fixed our language , we now turn to formulating an axiom system for special relativity in this language .we will make special efforts to keep all our axioms inside the above specified first - order logic language of . throughout this work , , and denote positive integers . ( -times ) is the set of all -tuples of elements of .if , then we assume that , i.e. denotes the -th component of the -tuple .the following axiom is always assumed and is part of every axiom system we propose .: : , , , , and are binary operations on , is a binary relation on and is an * euclidean ordered field * , i.e. a linearly ordered field in which positive elements have square roots . in pure first - order logic, the above axiom would look like ] . if is a formula and is a variable , then we say that is a * free variable * [ free variable ] of iff does not occur under the scope of either or .let be a formula ; and let be all the free variables of .let be a model .whether is true or false in depends on how we associate elements of to these free variables .when we associate to , respectively , then denotes this truth - value , thus is either true or false in .for example , if is , then is true in while is false in . is said to be * true * in if is true in no matter how we associate elements to the free variables .we say that a subset of is * definable by * iff there are such that .: : every subset of definable by has a supremum if it is non - empty and * bounded * : \;\land\ ; [ \exists b\in { { \mathrm}{f}}\quad ( \forall x\in { { \mathrm}{f}}\quad \varphi \longrightarrow x \le b)]\;\\ \longrightarrow\ ; \big(\exists s \in { { \mathrm}{f}}\enskip \forall b \in { { \mathrm}{f}}\quad ( \forall x\in { { \mathrm}{f}}\quad \varphi \longrightarrow x \le b)\iff s\le b\big ) . \end{split}\ ] ] we say that a subset of is * definable * iff it is definable by a fol - formula .our axiom scheme below says that every non - empty bounded and definable subset of has a supremum . notice that is true in any model whose ordered field reduct is .let us add to and call it : is a countable set of fol - formulas .we note that there are non - trivial models of , cf .e.g. , remark [ trrem ] way below .furthermore , we note that the construction in misner - thorne - wheeler ( * ? ? ?* chapter 6 entitled the local coordinate system of an accelerated observer , especially pp .172 - 173 and chapter 13.6 entitled the proper reference frame of an accelerated observer on pp . 
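To fix intuition for this two-sorted vocabulary, the following toy sketch represents a model of the language as plain data, with floating-point numbers standing in for the quantity sort, and implements one concrete world-view relation. The reading of W(m, b, p) as "observer m coordinatises body b at point p" is taken from the text; the particular world-view chosen (the observer sitting on its own time axis, a photon on a slope-one line, time coordinate listed first) is only an illustrative assumption, and nothing here enforces the axioms.

```python
# Toy, finite-precision illustration of the two-sorted vocabulary: quantities
# are modelled by floats, bodies by strings, and the world-view relation
# W(m, b, p) by a Python predicate.  Not an implementation of the axiom system.
from dataclasses import dataclass, field
from typing import Callable, Tuple

Point = Tuple[float, ...]          # p = (p_t, p_x, ...); time coordinate first here

@dataclass
class Model:
    d: int
    bodies: set = field(default_factory=set)
    observers: set = field(default_factory=set)
    inertial: set = field(default_factory=set)
    photons: set = field(default_factory=set)
    # world-view relation, given as a predicate W(m, b, p) -> bool
    W: Callable[[str, str, Point], bool] = lambda m, b, p: False

def w(m: str, b: str, p: Point) -> bool:
    # Example world-view of a single observer "m": the body "m" sits on the
    # time axis, and the photon "ph" moves with speed 1 along the first axis.
    if b == "m":
        return all(abs(x) < 1e-9 for x in p[1:])
    if b == "ph":
        return abs(p[1] - p[0]) < 1e-9 and all(abs(x) < 1e-9 for x in p[2:])
    return False

M = Model(d=3, bodies={"m", "ph"}, observers={"m"}, inertial={"m"},
          photons={"ph"}, W=w)
print(M.W("m", "m", (2.0, 0.0, 0.0)))    # True: m lives on its own time axis
print(M.W("m", "ph", (1.0, 1.0, 0.0)))   # True: the photon trace has slope 1
print(M.W("m", "ph", (1.0, 0.5, 0.0)))   # False
```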
327 - 332 ) can be used for constructing models of .models of are discussed to some detail in .theorems [ thmtwp ] and [ thmeq ] ( and also prop.[proptr ] , rem.[trrem ] ) below show that already implies properties of accelerated clocks , e.g. , it implies the twin paradox. implies all the fol - formulas true in , but is stronger .let denote the set of elements of that talk only about the ordered field reduct , i.e. let now , together with the axioms of linearly ordered fields is a complete axiomatization of the fol - theory of , i.e. all fol - formulas valid in can be derived from them . herein or ( * ? ? ?* proposition a.0.1 ) .] however , is stronger than , since by corollary [ cornoind ] below , while by theorem [ thmtwp ] .the strength of comes from the fact that the formulas in can `` talk '' about more `` things '' than just those in the language of ( namely they can talk about the world - view relation , too ) . for understanding how works, it is important to notice that does not speak about the field itself , but instead , it speaks about connections between and the rest of the model . why do we call a kind of induction schemathe reason is the following . implies that if a formula becomes false sometime after while being true at , then there is a `` first '' time - point where , so to speak , becomes false .this time - point is the supremum of the time - points until which remained true after .now , may or may not be false at this supremum , but it is false arbitrarily `` close '' to it afterwards . if such a `` point of change '' for the truth of can not exist , implies that has to be true always after if it is true at .( without , this may not be true . )twin paradox ( tp ) concerns two twin siblings whom we shall call ann and ian .( `` a '' and `` i '' stand for accelerated and for inertial , respectively ) .ann travels in a spaceship to some distant star while ian remains at home .tp states that when ann returns home she will be _ younger _ than her _twin brother _ ian . we now formulate tp in our fol language .the * segment * between and is defined as : :=\{{\lambda}p+(1-{\lambda})q:\lambda\in { { \mathrm}{f}}\;\land\ ; 0\le{\lambda}\le1\}.\ ] ] we say that observer is in * twin - paradox relation * with observer iff whenever leaves between two meetings , measures less time between the two meetings than : \subseteq & tr_k(k)\;\land\ ; [ p ' q']\not\subseteq tr_m(k)\\ & \longrightarrow\ ; \big|q_t - p_t\big|<\big|q'_t - p'_t\big| , \end{split}\ ] ] cf . figure 2 . in this casewe write .we note that , if two observers do not leave each other or they meet less than twice , then they are in twin - paradox relation by this definition . thus two inertial observers are always in this relation .[ l][l ] [ l][l ] [ b][b] [ rt][rt] [ rb][rb] [ l][l] [ lt][lt] [ lb][lb] [ c][c] [ c][c] [ b][b] [ b][b] [ r][r] [ r][r] [ c][c]same events : : every observer is in twin - paradox relation with every inertial observer : let be a formula and be a set of formulas . denotes that is true in all models of .gdel s completeness theorem for fol implies that whenever , there is a ( syntactic ) derivation of from via the commonly used derivation rules of fol .hence the next theorem states that the formula formulating the twin paradox is provable from the axiom system .[ thmtwp ] if .the proof of the theorem is in section [ proofss ] .now we turn to formulating a phenomenon which we call duration determining property of events . 
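Before doing so, it is worth seeing the inequality asserted by the twin-paradox relation numerically. The sketch below uses the standard special-relativistic wristwatch-time formula in units with the speed of light set to 1; it illustrates what the theorem asserts, not how it is proved in the FOL setting, and the particular speed and meeting points are arbitrary choices.

```python
# Numerical illustration (standard special relativity, c = 1) of the inequality
# in the twin-paradox relation: an observer who leaves and returns measures
# less time between the two meetings than the inertial twin.
import math

def proper_time(worldline):
    """Wristwatch time of a piecewise-linear time-like worldline [(t, x), ...]."""
    tau = 0.0
    for (t0, x0), (t1, x1) in zip(worldline, worldline[1:]):
        dt, dx = t1 - t0, x1 - x0
        assert dt > 0 and abs(dx) < dt, "segments must be future-directed and time-like"
        tau += math.sqrt(dt**2 - dx**2)       # Minkowski length of the segment
    return tau

v, T = 0.8, 10.0                              # coordinate speed and total coordinate time
ian = [(0.0, 0.0), (T, 0.0)]                  # stays at home
ann = [(0.0, 0.0), (T / 2, v * T / 2), (T, 0.0)]   # out and back at speed v

print(proper_time(ian))   # 10.0
print(proper_time(ann))   # 10 * sqrt(1 - 0.64) = 6.0  <  10.0
```

Ann's wristwatch time comes out strictly smaller than Ian's whenever her worldline actually leaves his between the two meetings, which is exactly the content of the twin-paradox relation defined above.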
: : if each of two observers observes the very same ( non - empty ) events in a segment of their self - lines , they measure the same time between the end points of these two segments : \}=\{ev_m(r ' & ) : r'\in [ p ' q']\}\ ; \longrightarrow\;\\ \big|&q_t - p_t\big|=\big|q'_t - p'_t\big| , \end{split}\ ] ] see the right hand side of figure 2 . the next theorem states that also can be proved from our fol axiom system .[ thmeq ] if .the proof of the theorem is in section [ proofss ] .the assumption can not be omitted from theorem [ thmtwp ] .however , theorems [ thmtwp ] and [ thmeq ] remain true if we omit the assumption and assume auxiliary axioms and below , i.e. holds for , too .a proof for the latter statement can be obtained from the proofs of theorems [ thmtwp ] and [ thmeq ] by ( * ? ? ?* items 4.3.1 , 4.2.4 , 4.2.5 ) and ( * ? ? ?* theorem 1.4(ii ) ) .: : in every inertial observer s coordinate system , every line of slope less than 1 is the life - line of an inertial observer : : : traces of inertial observers are lines as observed by inertial observers : can the assumption be omitted from theorem [ thmeq ] , i.e.does hold for ? the following theorem says that theorems [ thmtwp ] and [ thmeq ] do not remain true if we omit the axiom scheme from .if a formula is not true in a model , we write .[ thmnoind ] for every euclidean ordered field not isomorphic to , there is a model of such that , and the ordered field reduct of is .the proof of the theorem is in section [ proofss ] . by theorems[ thmtwp ] and [ thmeq ] , is not true in the model mentioned in theorem [ thmnoind ] .this theorem has strong consequences , it implies that to prove the twin paradox , it does not suffice to add all the fol - formulas valid in ( to ) . let denote the set of all fol - formulas valid in .[ cornoind ] and .the proof of the corollary is in section [ proofss ] .an ordered field is called * non - archimedean * if it has an element such that , for every positive integer , .we call these elements * infinitesimally small*. the following theorem says that , for countable or non - archimedean euclidean ordered fields , there are quite sophisticated models of in which and are false . [ thmmo ] for every euclideanordered field which is countable or non - archimedean , there is a model of such that , , the ordered field reduct of is and ( i)(iv ) below also hold in .* every observer uses the whole coordinate system for coordinate - domain : * at any point in , there is a co - moving inertial observer of any observer : * all observers observe the same set of events : * every observer observes every event only once : the proof of the theorem is in section [ proofss ] .[ l][l] [ rt][rt] [ rb][rb] [ rt][rt] [ rb][rb] [ l][l] [ l][l] [ l][l] [ lb][lb] [ lb][lb] [ t][t] [ b][b] [ r][r] [ r][r] [ r][r] finally we formulate a question .to this end we introduce the inertial version of the twin paradox and some auxiliary axioms . in the inertial version of the twin paradox, we use the common trick of the literature to talk about the twin paradox without talking about accelerated observers .we replace the accelerated twin with two inertial ones , a leaving and an approaching one .we say that observers and are in * inertial twin - paradox relation * with observer if the following holds : \\ \longrightarrow & \ ; |q'_t - p'_t| > |q_t - r_t|+|r_t - p_t\big| , \end{split}\ ] ] cf . figure 3 . in this casewe write . 
: : every three inertial observers are in inertial twin - paradox relation : : : to every inertial observer and coordinate point there is an inertial observer such that the world - view transformation between and is the translation by vector : : : the world - view transformation between inertial observers and is a linear transformation if : [ qtwp ] does theorem [ thmtwp ] remain true if we replace in with the inertial version of the twin paradox and we assume the auxiliary axioms , and ?question [ qconv ] .we note that and are true in the models of in case , cf .* theorem 1.2 ) , ( * ? ? ?* theorem 2.8.28 ) and .in this section we gather the statements ( and proofs from ) of the facts we will need from analysis .the point is in formulating these statements in fol and for an arbitrary ordered field in place of using the second - order language of the ordered field of reals . in the present sectionis assumed without any further mentioning .let .we say that is * between * and iff or .we use the following notation : :=\{x\in { { \mathrm}{f } } : a\le x \le b\} ] , we assume that and .we also use this convention for .let .then is said to be an * accumulation point * of if for all , has an element different from . is called * open * if for all , there is an such that .let and be binary relations .the * composition * of and is defined as : . the * domain * andthe * range * of are denoted by and , respectively . denotes the * inverse * of , i.e. .we think of a * function * as a special binary relation .notice that if are functions , then for all . denotes that is a function from to , i.e. and .notation denotes that is a * partial function * from to ; this means that is a function , and .let .we call * continuous * at if , is an accumulation point of and the usual formula of continuity holds for and , i.e. we call * differentiable * at if , is an accumulation point of and there is an such that this is unique .we call this the * derivate * of at and we denote it by . is said to be continuous ( differentiable ) on iff and is continuous ( differentiable ) at every .we note that the basic properties of the differentiability remain true since their proofs use only the ordered field properties of , cf.propositions [ propdiff ] , [ propaff ] and [ propmax ] below .let and . then and are defined as and .let . is said to be * increasing * on iff and for all , if , and is said to be * decreasing * on iff and for all , if .[ propdiff ] let and . then ( i)(v ) below hold .* if is differentiable at then it is also continuous at . *let .if is differentiable at , then is also differentiable at and . *if and are differentiable at and is an accumulation point of , then is differentiable at and .* if is differentiable at , is differentiable at and is an accumulation point of , then is differentiable at and . *if is increasing ( or decreasing ) on , differentiable at and , then is differentiable at . since the proofs of the statementsare based on the same calculations and ideas as in real analysis , we omit the proof , cf .* theorems 28.2 , 28.3 , 28.4 and 29.9 ) .let . denotes the -th projection function , i.e. .let .we denote the -th coordinate function of by , i.e. .we also denote by .a function is said to be an * affine map * if it is a linear map composed by a translation .is an affine map if there are and such that , and for all and . 
]the following proposition says that the derivate of a function composed by an affine map at a point is the image of the derivate taken by the linear part of .[ propaff ] let be differentiable at and let be an affine map .then is differentiable at and .in particular , , i.e. .the statement is straightforward from the definitions . is said to be * locally maximal * at iff and there is a such that for all .the * local minimality * is defined analogously .[ propmax ] if is differentiable on and locally maximal or minimal at , then its derivate is at , i.e. .the proof is the same as in real analysis , cf.e.g ., ( * ? ? ?* theorem 5.8 ) .let be a model .an -ary relation is said to be * definable * iff there is a formula with only free variables and there are such that recall that says that every non - empty , bounded and definable subset of has a supremum .[ thmboltzano ] assume .let be definable and continuous on ] such that .let be between and .we can assume that .let : f(x ) < c\} ]. thus can not be less than since is an upper bound of and can not be greater than since is the smallest upper bound .hence as desired .[ thmsup ] assume .let be definable and continuous on ] exists and there is an ] has a supremum by since is definable , non - empty and bounded .this supremum has to be and since is continuous on ] is bounded .thus , by , it has a supremum , say , since it is definable and non - empty .we can assume that .let : \exists c \in { { \mathrm}{f}}\enskip \forall x\in [ a , y]\quad f(x)<c < s\} ] and is the supremum of . throughout this work denotes the identity function , i.e. .[ thmlagrange ] assume .let be definable , differentiable on ] and for all ] and its derivate is for all ] , such that for all ] .if and , then there is an such that .[ propint ] assume .let be definable and differentiable on .if for all , then there is a such that for all .assume that for all .let .then for all by ( ii ) and ( iii ) of proposition [ propdiff ] .if there are such that and , then , by the mean value theorem , there is an between and such that and this contradicts .thus for all .hence there is a such that for all .in the present section is assumed without any further mentioning .let be the natural embedding defined as .we define the * life - curve * of observer as seen by observer as . throughout this workwe denote by , for .thus is the coordinate point where observes the event `` s wristwatch shows '' , i.e. iff . in the following proposition ,we list several easy but useful consequences of some of our axioms .[ prop0 ] let and . then ( i)(viii ) below hold .then and for all distinct , . *. then . *. then and . *assume and .if , then is a bijection and is an injection . *. then and . *. then . *. then and .* assume , and .then . to prove ( i ) , let be distinct points . then there is a line of slope that contains but does not contain . by , this line is the trace of a photon .for such a photon , we have and .hence and . thus ( i ) holds .\(ii ) follows from ( i ) since by .\(iii ) and ( iv ) follow from ( i ) by the definitions of the world - view transformation and the life - curve . to prove ( v ) , let . then .since , by , and observe the same set of events , there is an such that .but then and . hence .thus .the other inclusion follows from the definition of the world - view transformation .thus and . to prove ( vi ) , let . by, there is an such that is a co - moving observer of at .for such an , we have and , by ( v ) , .thus . to prove ( vii ) , let . then .but then . thus . 
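The supremum arguments used in this section, and the "first point of change" reading of the induction schema discussed earlier, have a simple computational analogue over the reals: bisecting for the first point at which a property fails. The sketch below is only that analogue; over a general ordered field it is precisely the induction schema that guarantees the corresponding supremum exists for definable properties.

```python
# Computational analogue of the supremum argument used above: to locate the
# "first" point at which a property stops holding, bisect toward
# sup{x in [a,b] : P holds on [a,x]}.  Over the reals this supremum exists;
# over a general field the IND schema plays that role for definable P.
def first_failure(P, a, b, tol=1e-12):
    """Assumes P(a) is True, P(b) is False, and P holds on an initial segment."""
    assert P(a) and not P(b)
    while b - a > tol:
        mid = (a + b) / 2
        if P(mid):
            a = mid
        else:
            b = mid
    return (a + b) / 2

# Example: x**2 < 2 stops holding on [0, 2] exactly at sqrt(2).
print(first_failure(lambda x: x * x < 2.0, 0.0, 2.0))   # 1.41421356...
```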
by , ; and this proves the first part of ( vii ) .by , we have .thus and this proves the second part of ( vii ) .the `` part '' of ( viii ) follows from ( vii ) . to prove the other inclusion ,let . then , by and ( vi ) , .thus there are and such that and .but then .hence .we say that is * well - parametrized * iff and the following holds : if is an accumulation point of , then is differentiable at and its derivate at is of minkowski - length , i.e. .assume .then the curve is well - parametrized iff is parametrized according to minkowski - length , i.e. for all , if \subseteq dom(f) ] is .( by minkowski - length of a curve we mean length according to minkowski - metric , e.g. , in the sense of wald ( * ? ? ? * , ( 3.3.7 ) ) ) . *proper time * or * wristwatch time * is defined as the minkowski - length of a time - like curve , cf .e.g. , wald ( * ? ? ?* , ( 3.3.8 ) ) , taylor - wheeler or dinverno ( * ? ? ?* , ( 8.14 ) ) .thus a curve defined on a subset of is well - parametrized iff it is parametrized according to proper time , or wristwatch - time .e.g. , ( * ? ? ?* , ( 8.16 ) ) . )the next proposition states that life - curves of accelerated observers in models of are well - parametrized .this implies that accelerated clocks behave as expected in models of .remark [ trrem ] after the proposition will state a kind of `` completeness theorem '' for life - curves of accelerated observers , much in the spirit of remark [ rem - specthm ] .[ proptr ] assume and .let and .then is well - parametrized and definable .let , .then is definable by its definition .furthermore , and by ( iii ) of proposition [ prop0 ] .let be an accumulation point of .we would like to prove that is differentiable at and its derivate at is of minkowski - length . by ( vii ) of proposition [ prop0 ] .thus , by , there is a co - moving inertial observer of at . by proposition [ propaff ], we can assume that is a co - moving inertial observer of at , i.e. , because of the following three statements . by ( v ) of proposition [ prop0 ] , for every , either of and can be obtained from the other by composing the other by a world - view transformation between inertial observers . by theorem [ thmpoi ] ,world - view transformations between inertial observers are poincar - transformations .poincar - transformations are affine and preserve the minkowski - distance . now , assume that is a co - moving inertial observer of at . then , and for every .therefore since and if , we have that for all , let be fixed .since and , there is a such that let such a be fixed . by ( [ tp - e2 ] ) ,( [ tp - e3 ] ) and the fact that , we have that by this and ( [ tp - e1 ] ) , we have thus .this completes the proof since .[ trrem ] well parametrized curves are exactly the life - curves of accelerated observers , in models of , as follows .let be an euclidean ordered field and let be well - parametrized .then there are a model of , and such that and the ordered field reduct of is . recall thatif , then this is a model of .this is not difficult to prove by using the methods of the present paper .we say that is * vertical * iff .[ lemwp ] let be well - parametrized . then ( i ) and ( ii ) below hold .* let be an accumulation point of .then is differentiable at and .furthermore , iff is vertical .* assume and that is definable .let \subseteq dom(f) ] .if is increasing on ] .let be well - parametrized . to prove ( i ) , let be an accumulation point of .then is of minkowski - length . 
by proposition [ propaff ], is differentiable at and .now , ( i ) follows from the fact that the absolute value of the time component of a vector of minkowski - length 1 is always greater than 1 and it is 1 iff the vector is vertical . to prove ( ii ) , assume and that is definable .let \subseteq dom(f) ] .thus , by rolle s theorem , is injective on ] since is continuous and injective on ] and .then for all ] .[ thmjtwp ] assume .let be definable , well - parametrized and \subseteq dom(f) ] , then .let be definable , well - parametrized and \subseteq dom(f) ] by proposition [ propaff ] .then , by the main value theorem , there is an such that . by ( i ) of lemma [ lemwp ] , we have .but then , .this completes the proof of ( i ) . to prove ( ii ) , let ] and ] by ( ii ) of lemma [ lemwp ] . thus or .now , by adding up the last three inequalities , we get . let . for convenience ,we introduce the following notation : if and if .a set is called * twin - paradoxical * iff , , if , for all if , then there is a such that , and for all distinct and for all , implies that . a positive answer to the following question would also provide a positive answer to question [ qtwp ] , cf . .[ qconv ] assume .let be definable such that is differentiable on ] be a subset of a twin - paradoxical set .are then ( i ) and ( ii ) below true ?* . *if for an ] and \subseteq dom(g) ] . then . by ( ii ) of lemma [ lemwp ], is increasing or decreasing on ] .we can assume that ] and that and are increasing on ] , respectively .is increasing on ] are replaced by and ] iff is increasing on ] and ] .we can assume that and .by lemma [ lemwp ] , and are differentiable on ] , respectively , and for all ] . by ( iv ) and ( v ) of proposition [ propdiff ] , is also differentiable on .by , we have .thus for all by ( iv ) of proposition [ propdiff ] .since both and are of minkowski - length and their time - components are positive and for all , we conclude that for all . by proposition [ propint ], we get that there is a such that for all and thus for all ] and ] and \not\subseteq tr_m(k) ] . by proposition [ proptr ] , by , .by , by , by and by , by and , we have that .thus , by \not\subseteq tr_m(k) ] , \subseteq dom(tr).\ ] ] by ( i ) of lemma [ lemwp ] , ( [ twp - e1 ] ) and ( [ twp - e3 ] ) , we have that is differentiable on ] .let \subseteq \bar{t} ] such that .let such an be fixed . since by ( vii ) of proposition [ prop0 ] .but then .hence .thus \quad\mbox{and}\quad tr(x)_s\neq tr(p_t)_s\ ] ] since .now , by ( [ twp - e1])([twp - e4 ] ) above , we can apply ( ii ) of theorem [ thmjtwp ] to and ] , cf . the right hand side of figure 2 .thus \subseteq cd(k) ] . by , and . therefore \subseteq tr_k(k)\subseteq\bar t ] .we can assume that and .let .we are going to prove that , by applying theorem [ thmjeq ] as follows : let :=[p_t , q_t] ] , and . by ( viii ) of proposition [ prop0 ] , by \subseteq tr_k(k) ] , we conclude that \subseteq dom(f) ] . by proposition[ proptr ] , and are well - parametrized and definable .we have \}=\{g(r'):r'\in [ a',b']\} ] .thus , by theorem [ thmjeq ] , we conclude that .thus and this is what we wanted to prove .[ thmnoind - proof ] [ thmmo - proof ] we will construct three models .let be an euclidean ordered field different from .for every , let denote the translation by vector , i.e. . 
is called * translation - like * iff for all , there is a such that for all , and for all , and imply that .let be translation - like .first we construct a model of and ( i ) and ( ii ) of theorem [ thmmo ] for and , which will be a model of ( iii ) and ( iv ) of theorem [ thmmo ] if is a bijection .we will show that is false in . then we will choose and appropriately to get the desired models in which is false , too .let the ordered field reduct of be .let be a partition s are disjoint and .] of such that every is open , and for all and , .such a partition can easily be constructed .be a non - empty bounded set that does not have a supremum .let , , , and .] let for every , cf . figure 4 .[ r][r] [ r][r] [ r][r] [ r][r] [ b][b]world - view of [ b][b]world - view of [ l][l] [ l][l] [ l][l] [ l][l] [ l][l] [ r][r] [ r][r] [ b][b] [ t][t] [ t][t] [ t][t ] for the proofs of theorems [ thmnoind ] and [ thmmo].,title="fig:",scaledwidth=90.0% ] it is easy to see that is a translation - like bijection .let , , and . recall that is the origin .first we give the world - view of then we give the world - view of an arbitrary observer by giving the world - view transformation between and .let and for all and . and let for all .let for all . from these world - view transformations , we can obtain the world - view of each observer in the following way : for all . and from the world - views, we can obtain the relation as follows : for all , and , let iff .thus we are given the model .we note that and for all and .it is easy to check that the axioms of and ( i ) and ( ii ) of theorem [ thmmo ] are true in and that if is a bijection , then ( iii ) and ( iv ) of theorem [ thmmo ] are also true in .let be such that , ; and let , and .it is easy to check that is false in for , , , , and , i.e. , , , \subseteq tr_{k'}(k') ] and , cf .figure 4 .[ t][t]world - view of [ t][t]world - view of [ r][r] [ r][r] [ r][r] [ r][r] [ rt][rt] [ lb][lb] [ rt][rt] [ lb][lb] [ r][r] [ r][r] [ b][b] [ t][t] [ t][t] [ t][t ] [ t][t ] [ t][t ] [ l][l] [ r][r] [ r][r] [ r][r] [ r][r] [ l][l] [ l][l] [ l][l] [ l][l] for the proofs of theorems [ thmnoind ] and [ thmmo].,title="fig:",scaledwidth=85.0% ] to construct the first model , let be an arbitrary euclidean ordered field different from and let be a partition of such that for all and , .let for every , cf .figure 5 .it is easy to see that is translation - like .let be such that and ; and let , and .it is also easy to check that is false in for , , , , and , i.e. , , \}=\{ev_m(r'):r'\in [ p ' q']\} ] and denote the -th element with .first we cover \cap { { \mathrm}{f}} ] such that the sum of their length is , the length of each interval is in and the distance of the left endpoint of each interval from is also in .we are going to construct this covering by recursion . in the -th step, we will use only finitely many new intervals such that the sum of their length is .in the first step , we cover with an interval of length .suppose that we have covered for each .since we have used only finitely many intervals yet , we can cover with an interval that is not longer than .since , it is easy to see that we can choose finitely many other subintervals of ] .let us enumerate these intervals .let be the -th interval , be the length of , and the distance of and the left endpoint of . 
since .let for all , cf .figure 6 .it is easy to see that is a translation - like bijection .let be such that and ; and let , and .it is also easy to check that is false in for , , , , and , cf .figure 5 .let be a field elementarily equivalent to , i.e. such that all fol - formulas valid in are valid in , too .assume that is not isomorphic to .e.g. the field of the real algebraic numbers is such .let be a model of with field - reduct in which neither nor is true .such an exists by theorem [ thmnoind ] .since by assumption , this shows . in a subsequent paper , we will discuss how the present methods and in particular and can be used for introducing gravity via einstein s equivalence principle and for proving that gravity `` causes time run slow '' ( known as gravitational time dilation ) . in this connection we would like to point out that it is explained in misner et al . that the theory of accelerated observers ( in flat space - time ! ) is a rather useful first step in building up general relativity by using the methods of that book .a fol - formula expressing is : .\end{split}\ ] ] a fol - formula expressing is : \wedge\\ \big[\forall ph\;\forall\lambda\quad { { \mathrm}{ph}}(ph)\wedge{{\mathrm}{f}}(\lambda)\wedge{{\mathrm}{w}}(m , ph , p)\wedge&{{\mathrm}{w}}(m , ph , q)\\ \longrightarrow\ ; { { \mathrm}{w}}\big(&m , ph , q+\lambda(p - q)\big)\big ] .\end{split}\ ] ] a fol - formula expressing is : \big ) .\end{split}\ ] ]we are grateful to victor pambuccian for careful reading the paper and for various useful suggestions .we are also grateful to hajnal andrka , ramn horvth and bertalan pcsi for enjoyable discussions .special thanks are due to hajnal andrka for extensive help and support in writing the paper , encouragement and suggestions .h. andrka , j. x. madarsz and i. nmeti , `` logical axiomatizations of space - time , '' in _ non - euclidean geometries _ , e. molnr , ed .( kluwer , dordrecht , 2005 ) http://www.math-inst.hu/pub/algebraic-logic/lstsamples.ps .h. andrka , j. x. madarsz and i. nmeti , with contributions from a. andai , g. sgi , i. sain and cs .t oke , `` on the logical structure of relativity theories , '' research report , alfrd rnyi institute of mathematics , budapest ( 2002 ) http://www.math-inst.hu/pub/algebraic-logic/contents.html .
we study the foundation of space - time theory in the framework of first - order logic ( fol ) . since the foundation of mathematics has been successfully carried through ( via set theory ) in fol , it is not entirely impossible to do the same for space - time theory ( or relativity ) . first we recall a simple and streamlined fol - axiomatization of special relativity from the literature . is complete with respect to questions about inertial motion . then we ask ourselves whether we can prove the usual relativistic properties of accelerated motion ( e.g. , clocks in acceleration ) in . as it turns out , this is practically equivalent to asking whether is strong enough to `` handle '' ( or treat ) accelerated observers . we show that there is a mathematical principle called induction ( ) coming from real analysis which needs to be added to in order to handle situations involving relativistic acceleration . we present an extended version of which is strong enough to handle accelerated motion , in particular , accelerated observers . among others , we show that the twin paradox becomes provable in , but it is not provable without . key words : twin paradox , relativity theory , accelerated observers , first - order logic , axiomatization , foundation of relativity theory
Considerable research has been devoted to controlling energy utilization in the network layer of WSNs, and clustering techniques have received particular attention for addressing this issue. In a clustering technique, data is first gathered and then forwarded to the base station (BS). A uniform distribution of nodes and an optimum number of CHs in each round help to balance the load in a clustered network, which ultimately utilizes energy efficiently. As a second step, controlling the number of clusters formed during network operation enhances network stability and lifetime; a network is said to be stable if the difference between the time the first node dies and the time the last node dies is minimal. An optimum number of CHs not only balances the load distribution but also uses energy efficiently. One problem in clustering techniques is the creation of energy holes: when nodes are distributed randomly, overloaded CHs cause energy holes to form. In multi-hop data forwarding, nodes near the BS consume a large amount of energy; these regions are also called hotspots, and energy depletes quickly in the hotspot areas of the network. Heinzelman, A. P. Chandrakasan and H. Balakrishnan proposed LEACH, one of the first clustering routing protocols for WSNs. In the LEACH algorithm, the selection of CHs for the current round is probabilistic; as a result, the CHs formed are not uniformly distributed, which may leave some nodes disconnected. LEACH-Centralized (LEACH-C) is an extension of LEACH proposed by Balakrishnan, Chandrakasan and Heinzelman. Its advantage is that the BS ensures that a node with low energy does not become a CH; however, in a large-scale network, nodes far away from the BS are unable to report their status to the BS. The Multihop-LEACH protocol was proposed by Nauman Israr and Irfan Awan. Multihop-LEACH has two main modes of operation: multihop inter-cluster operation and multihop intra-cluster operation. In the former, nodes sense the environment and send their data to the CH, and this data reaches the BS through a chain of CHs, while the latter is performed during the time-out period. In both modes of operation, however, the CH is selected randomly, and this arrangement does not guarantee full coverage of the entire network being monitored. LEACH-Selective Cluster (LEACH-SC) was proposed as a further variant. CH selection in LEACH-SC is like that of LEACH, but the algorithm changes the cluster formation so that a node finds the CH closest to the midpoint between itself and the BS and then joins that cluster; however, the number of CHs fluctuates as rounds proceed. The localization problem has also been studied: by dividing the network area into sub-areas, localization techniques help to mitigate the coverage hole problem. In this research work we introduce a new clustering technique at the routing layer. In Density-controlled Divide-and-Rule (DDR), nodes are distributed uniformly over the network as a whole and randomly within the individual segments of the network, so as to control the density. In this way the coverage hole problem can be avoided; a sketch of such a density-controlled deployment is given after this paragraph.
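The sketch below shows one way such a density-controlled deployment could be realised. The segment shapes, counts and field size are illustrative assumptions only (the concrete segmentation used by DDR, based on concentric squares, is described in the next section); the point being made is simply that fixing the number of nodes per segment in proportion to its area rules out the empty regions that a purely random deployment can leave.

```python
# Illustrative sketch of a density-controlled deployment: the field is split
# into segments, each segment is assigned a node count proportional to its
# area, and node positions are drawn uniformly at random inside their segment.
# Segment shapes, counts and field size are assumptions for illustration only.
import random

random.seed(1)
FIELD = 100.0                       # square field of side 100 (units assumed)
N_NODES = 100

# Four equal-area vertical strips stand in for the segments.
segments = [(i * FIELD / 4, (i + 1) * FIELD / 4) for i in range(4)]
areas = [(x1 - x0) * FIELD for x0, x1 in segments]

nodes, per_segment = [], []
for (x0, x1), area in zip(segments, areas):
    count = round(N_NODES * area / sum(areas))     # density control: nodes ~ area
    per_segment.append(count)
    nodes += [(random.uniform(x0, x1), random.uniform(0.0, FIELD))
              for _ in range(count)]

print(per_segment)      # [25, 25, 25, 25] -- every segment gets its share
print(len(nodes))       # 100
```

A purely random deployment over the whole field would, with non-negligible probability, leave some segment under-populated; fixing the per-segment counts is what prevents coverage holes at the segment scale.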
secondly in ddr ,clusters formed are static and number of clusters remain fix during network operation .the number of clusters formed are near to optimum number .this helps in efficient energy utilization and uniform load distribution .rest of the paper is organised as under .in this section , we introduce ddr .first we describe how the network area is logically divided into segments then we find out the energy consumption in these segments ..notations used in mathematical model [ cols="<,<",options="header " , ] by increasing network area and number of nodes with a ratio of per node ; for 134 nodes we divide network area into four concentric squares , incase of 167 and 200 nodes we divide network area into five and six concentric squares respectively . in such situations communication distance is no more ; less than or equal to reference distance and overhead on chs near to bs increases as a result stability period and network lifetime decreases to an extent .in this article we focused on energy efficient routing in wsns . our technique , ddr is based on static clustering and optimum number of ch selection in each round . in ddrwe divided the network field into logical segments .the segmentation process helps to reduce communication distance between node and ch and between ch and bs .multi - hop communication in inter - cluster further reduces communication distance . in ddrwe have tried to overcome the problem of coverage hole and energy hole through density controlled uniform distribution of nodes in different segments of network .optimum number of chs in each round helps to achieve balanced load distribution . which enhances stable period and network life time .00 a. liu , p. zhang , z. chen , theoretical analysis of the lifetime and energy hole in cluster based wireless sensor networks , journal of parallel and distributed computing 71 ( 10 ) ( 2011 ) 13271355 .w. heinzelman , a. chandrakasan , h. balakrishnan , energy - efficient communication protocol for wireless microsensor networks , in : system sciences , 2000 .proceedings of the 33rd annual hawaii international conference on , ieee , 2000 , pp . 10pp .w. heinzelman , application - specific protocol architectures for wireless networks , ph.d .thesis , massachusetts institute of technology ( 2000 ) .n. israr , i. awan , multihop clustering algorithm for load balancing in wireless sensor networks , international journal of simulation , systems , science and technology 8 ( 1 ) ( 2007 ) 1325 .w. jun , z. xin , x. junyuan , m. zhengkun , a distancebased clustering routing protocol in wireless sensor networks , in : communication technology ( icct ) , 2010 12th ieee international conference on , ieee , 2010 , pp .r. sugihara , r. gupta , sensor localization with deterministic accuracy guarantee , in : infocom , 2011 proceedings ieee , ieee , 2011 , pp .. m. jin , s. xia , h. wu , x. gu , scalable and fully distributed localization with mere connectivity , in : infocom , 2011 proceedings ieee , ieee , 2011 , pp .. j. lian , l. chen , k. naik , t. otzu , g. agnew , modeling and enhancing the data capacity of wireless sensor networks , ieee monograph on sensor network operations
Cluster-based routing is the most popular routing technique in wireless sensor networks (WSNs). Owing to the varying needs of WSN applications, efficient energy utilization in routing protocols remains an active area of research. In this work we introduce a new energy-efficient cluster-based routing technique that addresses the coverage hole and energy hole problems. We control these problems by introducing a density-controlled uniform distribution of nodes and by fixing the number of cluster heads (CHs) at a near-optimum value in each round. Finally, we verify the technique through experimental results from MATLAB simulations. Keywords: energy, efficient, routing, WSN, static, clustering, hole.
preparing mechanical harmonic oscillators in their ground states is potentially important for future quantum technologies , and is presently relevant for experimental work in optomechanics and nano - electromechanics .here we consider two simple and rather different methods for achieving this goal .the first , called resolved - sideband cooling , is an example of coherent feedback control in which the mechanical oscillator is coupled linearly to an `` auxiliary '' microwave or optical mode . since the auxiliary oscillator has a much higher frequency than the mechanics , it is in its ground state at the ambient temperature . because of this the coupling between the two transfers both energy and entropy from the mechanics to the auxiliary , cooling the former .sideband cooling has already allowed experimentalists to prepare mechanical oscillators in a state with less than one phonon .the second method we investigate is that in which an explicit continuous measurement is made on the mechanical oscillator ( from now on just `` the oscillator '' ) , and the information from this measurement is used to apply a force to the oscillator to damp its motion in the manner of traditional feedback control .our motivation for comparing the performance of resolved - sideband cooling and this measurement - based feedback cooling is to determine how differently the two forms of feedback behave , in the regime of good ground - state cooling , and to understand better the origin of this difference .two previous works have examined , and to varying extents compared , the two cooling methods we consider here . to explain how our work extends and complements these previous analyses we now summarize them briefly .the work by genes _et al . _ was the first to obtain a complete analytical solution for resolved - sideband cooling .in addition to presenting this solution they also analyzed a measurement - based feedback protocol for cooling in which the raw signal from a continuous measurement of position is processed by taking its derivative , and a force applied to the oscillator proportional to this processed signal .nevertheless , genes _ et al ._ were not able to compare quantitatively the effectiveness of the two methods because they did not have a means to quantitatively compare the resources used by each : sideband cooling employs a unitary coupling to the oscillator , whereas measurement - based feedback employs an irreversible coupling quantified by a damping rate .hamerly and mabuchi , employing the theory developed in , made a direct quantitative comparison of the effectiveness of sideband cooling and measurement - based feedback by using the fact that both cooling methods can be realized by coupling the mechanical oscillator to a traveling - wave electromagnetic field ( also known as an _ output channel _ ) .that is , a traveling - wave field can be used to mediate both the continuous measurement used in measurement - based feedback and the unitary coupling of sideband cooling ( coherent feedback ) . because both cooling methods can be implemented using the same coupling , one can ask which method is able to make the best use of the information obtained by the coupling for a given coupling rate . hamerly andmabuchi ( hm ) also used the optimal estimates of the mean position and momentum in the measurement - based feedback protocol , as we do here , whereas genes __ did not . 
for a weakly - damped ( high - q ) mechanical oscillator , and for a fixed set of parameters ,hm compared measurement - based cooling to coherent feedback as a function of the bath temperature .they found that for weak damping , and for a given set of parameters , coherent feedback was able to cool better than the best linear measurement - based feedback .the results of hamerly and mabuchi were purely numerical .technologically the most interesting regime for cooling is that in which the mechanical oscillator has a high factor ( weak damping ) , and in which the coupling rate to the controller is strong enough that the control protocol can keep the mechanical oscillator close to its ground state .here we show that it is possible to obtain simple analytic expressions for the optimal cooling achieved by both control methods to first order in the small parameters that define the regime of high and ground - state cooling . for sideband cooling thisis achieved merely by expanding the full expression for the performance to second order in these parameters . for measurement - based cooling the equations that determine the performanceare non - linear , and can only be solved exactly for zero damping ( ) .we obtain analytic expressions for weak damping to first - order in the small parameters by using a perturbative method that expands about this exact solution . having analytic expressions for both cooling methods sheds light on the origins of the limits of each , the relationship between these limits , and reveals the dependance on the various key parameters .the small parameters that define the regime of high and ground - state cooling are as follows .we define the regime of ground - state cooling , which is also the regime of `` good control '' , as that in which the control method can maintain the average number of phonons in the oscillator , denoted by , at a value much less than unity ( ) .if we define the steady - state probability that the system will be found outside the ground state by , then this is also the regime in which .the rate at which energy flows into the oscillator from the environment is given by where is the damping rate of the oscillator and is the average number of phonons that the oscillator would have if it were at the ambient temperature .the regime of ground - state cooling requires that the rate at which the control process extracts energy from the oscillator is much greater than . for coherent feedbackthis means that the rate of the interaction with the auxiliary , , ( defined precisely below ) , and the damping rate of the auxiliary , , satisfy for measurement - based feedback the regime of ground - state cooling requires that the measurement rate , , ( a scaled version of the measurement strength , defined in section [ physimp ] ) , and the damping rate induced by the feedback force , , satisfy a further requirement for both methods to provide ground - state cooling is that the rate of the linear coupling , , between the oscillator and the auxiliary mode , or the measurement rate , is slower than the frequency of the oscillator .this stems from the fact that a linear interaction is not the ideal interaction for cooling , and it only works well in the weak - coupling regime .this requirement is not as strict as the above inequalities , however , since a value of as low as 5 can be sufficient to achieve optimal cooling . to obtain our simple expressionswe do assume that and expand to second order in the small parameters and . 
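as a rough feel for these inequalities , the short sketch below evaluates the thermal occupation and the heating rate ( the product of the mechanical damping rate and the thermal phonon number ) for one illustrative set of parameters , and checks whether the ground - state - cooling conditions quoted above are met . all numerical values are assumptions chosen only for the example ; they are not values used in the analysis .

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# illustrative parameters (assumed for the example only)
omega = 2 * np.pi * 5e6      # mechanical frequency (rad/s)
Q     = 1e5                  # mechanical quality factor
T     = 0.01                 # bath temperature (K)
kappa = 2 * np.pi * 5e5      # auxiliary-mode damping rate (rad/s)
lam   = 2 * np.pi * 2e5      # oscillator-auxiliary coupling rate (rad/s)

gamma = omega / Q                                   # mechanical damping rate
n_T   = 1.0 / np.expm1(hbar * omega / (kB * T))     # thermal occupation

heating = gamma * n_T        # rate at which energy flows in from the bath
print(f"n_T = {n_T:.1f},  gamma*n_T = {heating:.2e} rad/s")
print("lambda, kappa >> gamma*n_T :", lam > 10 * heating and kappa > 10 * heating)
print("kappa/omega =", kappa / omega, "  lambda/omega =", lam / omega)
```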
the fact that these parameters need not be very small is indicated by the fact that they do not affect the cooling to first order but only to second order .further , as part of our analysis we derive results that are exact in and ; it is only in the small parameters and for which our results are necessarily perturbative . given the above time - scale separations , the small parameters that define our regime are for resolved - sideband cooling and for measurement - based cooling .the size of the ratio is determined by further considerations that we discuss below .the ratio is second order in the above parameters : we obtain analytic expressions for the steady - state average phonon number in the oscillator , , either to leading or next - to - leading order in the parameters .resolved - sideband cooling is traditionally implemented by coupling the mechanical oscillator to the auxiliary directly via the linear interaction where , with the oscillator annihilation operator , and with the annihilation operator of the optical or superconducting mode .the interaction rate is modulated at the frequency difference between the oscillator and the cavity mode , which is what allows them to exchange energy as if they were resonant .what enables the direct comparison between the two cooling methods is that resolved - sideband cooling can be implemented by coupling the oscillator and auxiliary via a propagating electromagnetic field , and this is also how measurement - based feedback is implemented . in the latter the propagating field is measured using homodyne detection .thus both cooling methods are able to use the same interface to the resonator , and thus extract information from the resonator in an identical way . for a given rate , ,at which the propagating field couples to the oscillator , we can then ask which cooling method performs better , and is thus able to make better use of the information .when the propagating field is measured , the coupling rate becomes the `` measurement strength '' ( defined below ) characterizing the rate at which the measurement extracts information .when the field is used instead to create the linear coupling with the auxiliary oscillator , the resulting interaction rate realized by the field coupling rate is where is the damping rate of the auxiliary oscillator .it is useful in our analysis below to define a variable that represents an expression that is first order , and only first order in all of the small parameters that appear in it .this allows us in what follows to indicate that an expression is order in any ( or all ) of the with the notation . in the next section we present the physical implementation of both cooling methods via an irreversible output coupling . in section [ cohfb ]we analyze resolved - sideband cooling and derive the expressions for the performance . to do this we use a slightly different approximate master equation to describe the thermal noise of the oscillator than that used by genes _et al . _their approximation was valid for all damping rates of the oscillator at high temperature , while ours is valid for weak damping at all temperatures . in section [ measfb ]we derive the expressions for the performance of the ( optimal ) linear measurement - based feedback cooling . in section [ seccomp ]we compare and discuss the performance of the two methods , and the origin of their respective limitations . 
in appendixa we discuss how the measurement - based cooling scheme can be treated using the quantum noise equations of input - output theory . in appendixb we show how the steady - state for resolved - sideband cooling is obtained by integrating the spectrum using a remarkable integral formula .this formula can be used to integrate the spectra of the coordinates for any linear input - output network .the interface via which both cooling methods interact with the oscillator is shown in fig .it involves two optical ( or superconducting ) cavities , each with a single mode , where the right - hand end - mirror of each cavity is attached to the oscillator , and thus oscillates with it . to understand how the interface works ,consider the effect of bouncing a beam of light off the oscillator .this beam of light provides both an interface to extract information and to apply a force to the oscillator : i ) when the photons in the light beam are reflected from the surface of the oscillator they apply a force to it , and the size of the force is proportional to the beam intensity ; ii ) monitoring the phase of the reflected light provides a continuous measurement of the position of the oscillator .each of the two optical cavities that are attached to the oscillator provides essentially the same interface as a single beam of light .the photons in the single mode apply a force to the oscillator as they are reflected off it , and the number of photons in the mode can be adjusted by changing the intensity of the light incident on the cavity ( e.g. the light entering the top cavity via input 1 ) .the phase of the light that exits each of the cavities provides information about the oscillator position .the faster the light leaks out of the cavities , characterized by their respective damping rates , the more closely each cavity acts like a beam of light reflected from the oscillator .while we could use a single optical cavity , with a single mode , to provide both a measurement and a feedback force , we choose to use two cavities because this configuration is required to implement resolved - sideband cooling .for both cooling methods the top cavity will be used as a `` measurement interface '' to extract information , and the bottom cavity will be used as an `` actuation '' interface to apply a force to the oscillator . to compare the two control methods, it is the measurement interface , the interface implemented by the top cavity , that we will demand is the same for both methods .that is , both methods will extract information at the same rate using this interface .as far as the physical implementation is concerned , this means that laser 1 has the same power , , and the top cavity the same damping rate , , for both methods . to use the top cavity to create an interface that provides continuous information about the position of the oscillator , , we set the cavity damping rate , , to be much larger than the opto - mechanical coupling rate between the cavity mode and the oscillator , and adiabatically eliminate the cavity .this procedure is detailed in a number of places ( e.g. ) and we wo nt repeat it here . the resulting interface can be described by writing the electromagnetic field output from the cavity as where is the input to the cavity .this is the input - output formalism of collett and gardiner .the constant characterizes the rate at which the output channel provides information about the position , and is given by . 
here is the steady - state number of photons in the cavity , is the single - photon optomechanical coupling rate , and is the ground - state position uncertainty of the mechanical oscillator , with the oscillator frequency and its mass . the fact that the interface provides information about position , rather than any other observable , and the fixed information rate are the only limits imposed on our two control protocols .given this interface , we wish to know which protocol is able to provide the best ground - state cooling , and under what circumstances . herewe will use scaled position and momentum variables for the oscillator , , and , where .the correspondingly scaled information rate constant is allowing us to write the output field as .the field operators and are continuum versions of annihilation operators .the output field has the same correlation functions as the input field , which are while the interface that provides the information will be the same for both control methods , the interface that provides the feedback force will be used differently in each case .we now describe the two cases in turn .the configuration that implements measurement - based feedback control is shown in fig .[ fig2]b . in this casethe output field is measured by homodyne detection that monitors the phase of the output light , and the second interface ( cavity 2 ) is used merely to apply a classical force to the oscillator . to use cavity 2 to apply a classical force we make the damping rate of this cavity , , sufficiently large that the information rate provided by output 2 in fig .[ fig1 ] goes to zero . to apply a force to the oscillator we shine a laser into input 2 ( see fig .[ fig2]b ) and the resulting force on the oscillator in units of is where is the frequency of the optical mode in the cavity .we therefore apply a time - dependent force by changing the laser power .while it may appear that we can apply only a positive force , this is illusory .the equilibrium position of the oscillator is determined by the force . thus applying a constant offset force ,the force on the oscillator with respect to the resulting equilibrium position is . to peek ahead , the optimal feedback force for the oscillator under linear measurement - based feedback is ] , where is boltzmann s constant .the damping rate of the oscillator is . as noted above the feedback forceis a function of the state at time .resolved - sideband cooling is traditionally implemented using a linear interaction between the mechanical resonator and an auxiliary optical or superconducting resonator , as discussed in the introduction .we can use the two interfaces provided by cavities 1 and 2 in fig .[ fig1 ] to reproduce this linear interaction .this is done by choosing cavity 2 to have the same parameters as cavity 1 , and by applying a phase shift to the light in output 1 ( or alternatively input 2 ) , and by connecting the auxiliary optical resonator to output 1 and input 2 as shown in fig .[ fig2]a . in this caseit is most convenient to use the quantum langevin equations of the input - output formalism to describe the dynamics of the auxiliary cavity mode and the oscillator .for the mechanical oscillator these equations are given by with the input noise operators and describe the noise from the thermal bath , describes the field entering through input 1 , and that entering through input 2 .the correlation functions of the thermal noise operators are given in appendix a. 
the langevin equations for the auxiliary cavity are where the operators and are the amplitude and phase quadratures of the cavity mode , with the annihilation operator .the damping rate of the cavity is , and the input noise operators describe the single input .these operators are and where is a continuum annihilation operator with the same correlation functions as . to connect the auxiliary input to output 1 of the oscillator we simply set where the minus sign accounts for the phase shift shown in fig .similarly we connect the output of the auxiliary to input 2 by setting substituting eqs.([ch8links ] ) and ( [ ch8links2 ] ) into the equations of motion above for the oscillator and the cavity , the resulting coupled langevin equations for the two systems are with and we have defined .the only noise driving the mechanical oscillator is now the thermal noise ; the noise coming into input 2 from the auxiliary has cancelled the noise coming in input 1 , which is a result of the phase shift applied to the auxiliary input .the auxiliary is driven by the noise from input 1 and is damped via the corresponding output channel at rate .it is this output that takes aways the entropy in the mechanical oscillator , since it is effectively damping to a thermal bath at zero temperature .the coupling between the two oscillators is given by the last terms on the rhs in the equations for and .these are the same as would be generated by an interaction hamiltonian .since the oscillation of the cavity mode continually transforms into , we can replace with in this hamiltonian without affecting the steady - state cooling , and this gives us a hamiltonian equivalent to that in eq.([hint ] ) .if we now modulate the coupling strength at the frequency difference between the two oscillators , then in the interaction picture the oscillators look as though they are resonant , and the result is the equations of motion for resolved sideband cooling .these equations are the same as those in eq.([coup1 ] ) , but with replaced with .the modulation of the coupling strength can be realized by modulating the strength of the effective linear interaction between the mechanics and the transduction oscillators .alternatively it can be achieved by imprinting a modulation on the fields that couple the auxiliary to the other components .here we use the average number of phonons in the steady - state , , to measure the degree of cooling . to calculate this quantity for resolved - sideband cooling we solve the quantum langevin equations ( eqs.([coup1 ] ) and ( [ cflang2 ] ) with replaced by ) in frequency space , and this gives us the spectrum of fluctuations of and . integrating these spectra over all frequencies gives us the steady - state expectation values of and , which in turn gives us the average phonon number via we derive the full expressions for and in appendix a. these expressions are rather complex , but simplify greatly if we expand them to second - order in the small parameters given in eq.([sbsmallp ] ) . 
performing this expansion we find that \label{bdbss } \\ & & - \frac{n_t}{2}\left ( \frac{\gamma}{\kappa } \right)^2 \left [ 1 + \frac{\kappa^2}{\lambda^2 } + \frac{\kappa^4}{\lambda^4 } \right ] + \frac{1}{16}\left ( \frac{\kappa}{\omega } \right)^2 + \frac{1}{8}\left ( \frac{\lambda}{\omega } \right)^2 .\nonumber \end{aligned}\ ] ] here the first line gives the dominant term , since it is first order while all terms on the second line are second order .note that is not necessarily a small parameter ; we will show it is of order for optimal cooling . notealso that is second order in the small parameters .we note first that the dominant term in gives the cooling performance as the ratio between the rate at which energy flows into the oscillator , , to the maximum rate at which it can flow out of the auxiliary ( and thus out of the oscillator ) , being , and tells us that that this rate is achieved when .this makes sense , and if the dominant term were the only term determining then we would get the best cooling by making as large as possible . butthis is not the case .the last two terms in show that both and must be much smaller than to achieve ground - state cooling .this is because the linear coupling between the oscillators only transfers energy efficiently between the two under the rotating - wave approximation , as is well - known .the remaining term in merely provides a correction to the dominant term , since it is second order in , as long as is not too much smaller than .curiously it improves the cooling a little . the parameters and are properties of the oscillator we want to cool , while and are parameters that we would ideally be able to choose as part of designing our controller .it is therefore natural to ask what values of and will give us the best cooling .it turns out that we can determine analytically the optimal value of for a given value of because this optimal value falls within the validity of our approximation . to do this we start by discarding the term proportional to , an action that we will justify shortly . differentiating the remaining terms with respect to , we find that the minimal value of is reached when the assumption that we used in our expansion in powers of our small parameters was that , but inspection shows that , and so is lower than first - order by a factor of .this is not problematic unless higher - order terms in that we have previously dropped ( e.g. third and forth order terms ) now have an order that is sufficiently low as to be near to the order of the leading - order terms from any of the other small parameters , such as , which now has order . in that casewe would have to include these high - order terms to be consistent .we will check the orders of the relevant terms below .we note now that with this value for , the term , and so its total contribution to is .this is why we were justified in discarding it before we performed the minimization to obtain . substituting into eq.([bdbss ] ) , and keeping terms only up to second order in , the minimum average phonon number is the first term is the dominant term , proportional to , the second term is proportional to , and the last two terms are proportional to . the lowest - order term that we have discarded is .the final step is to minimize over to obtain . 
in doing thiswe might assume that we can first discard the two second - order terms , because the two leading - order terms are sufficient to provide us with a minimum .however , upon doing this and substituting in the resulting optimal value for , we find that the fourth term contributes to the same order as the first and second .we therefore discard only the third term . minimizing the remaining terms we obtain ^{2/3 } , \label{bdbssx2}\ ] ] giving .we now substitute this value for into the expression for above , and examine the order of the four terms .the first , second and fourth terms now all have order , and are thus all leading order .the third term can be discarded as it has order .we note also that we do not need to include any higher - order terms in or that we have previously discarded : since is a symmetric function of and , the lowest - order terms that we discarded are and , and these are all significantly higher than the leading - order terms in .the minimal value of is to leading order in our small parameters , with the maximal cooling factor , defined as the ratio of the cooled phonon number , , to the initial phonon number , is the best possible cooling is not the only thing we wish to know . in order to compare with measurement - based cooling we would also like to know the best cooling that can be achieved for a given value of the output coupling rate . to answer this question we substitute in for in eq.([bdbss ] ) , which gives - \frac{1}{2}\left ( \frac{\gamma}{\kappa } \right)^2 \left [ 1 + \frac{\kappa}{8 \tilde{k } } + \frac{\kappa^2}{(8 \tilde{k})^2 } \right ] \right ) \nonumber \\ & & + \frac{\kappa^2}{16\omega^2 } + \frac{\tilde{k } \kappa}{\omega^2 } .\label{bdbssk}\end{aligned}\ ] ] minimizing this expression exactly with respect to gives a rather complex result , due to the need to solve a quartic equation .we can nevertheless obtain a simple expression that provides an upper bound on this minimum by choosing so as to minimize the sum of the first and second - to - last terms only .the resulting value of is so that and .substituting this into eq.([bdbssk ] ) , and keeping terms up to order , we obtain with now consider the best cooling that can be obtained by linear measurement - based feedback control , under which the dynamics of the system is described by eq.([ch8mbfbcx ] ) . because the operator being measured is linear in the position and momentum of the oscillator , the oscillator is also linear , and the state of the system is always gaussian , the dynamics is equivalent to that of a linear classical oscillator under a continuous measurement of position , driven by an additional white noise force that simulates exactly the quantum back - action of the measurement . because of this standard results from classical control theory can be applied to our system .the classical theory of linear optimal control , referred to as `` linear quadratic gaussian '' ( lqg ) control tells us that if we wish to minimize a weighted sum of a quadratic function of the coordinates and a quadratic function of the control `` inputs '' ( these inputs are the terms in the equations of motion for the momentum that come from the feedback force ) , then we should choose the feedback force to be a linear combination of the means of and given the observer s state of knowledge .however lqg theory does not apply directly to our problem , since we are interested in minimizing the energy without any particular reference to the control inputs . 
nevertheless , the coherent feedback protocol we analyzed above is restricted , by the fact that the interaction is linear , to generating only linear dynamics in the system .it is reasonable therefore , in the interests of a fair comparison , that we also restrict the measurement - based feedback to generating linear dynamics .this means that the feedback force must be a linear combination of the expectation values of and , and so can be written as for two rate constants and .the control inputs are then and . in practice , and certainly in experiments today , the amount of force that can be applied induces motion on a timescale much slower than the oscillation of the resonator , and so we restrict ourselves to this regime here .the only effect of is to modify the frequency of the oscillator as , and since in our regime , this has little effect on the dynamics .we can therefore drop , leaving us with only one control parameter .clearly the larger the smaller will be the resulting steady - state energy , . substituting the feedback force into the master equation , eq.([ch8mbfbcx ] ) , the equations of motion for the means and variances of the real variables and are and those for the variances are where is the symmetrized `` covariance '' of and .it is important to remember that the means and variances in these equations are those of the state - of - knowledge of an observer who has access to the measurement results .we refer to them as the _ conditional _ means and variances , which is why we denote the means with the subscript `` c '' .the rate constant in the feedback force merely changes the effective frequency of the oscillator via , and so we have absorbed it into the definition of . in practical situations , certainly those for nano - mechanical resonators ,the feedback rate constants and are much smaller than .the result of this is that will in fact have little effect on the cooling , but is very important as we will see below .since the conditional means of and will be randomly fluctuating , the total variances averaged over all possible trajectories that the system may take while it is being controlled are given by adding the variances of the conditional means of and to the conditional variances .that is , if we denote the total variances by , , and , and the variances of the conditional means by , , and , then we can derive the equations of motion of the variances of the conditional means by first using ito calculus to derive the differential equations for , , and from eq .( [ ch8mx1 ] ) . taking averages on both sides of the differential equations forthese square means gives us the differential equations for the second moments of the means . from thesewe can obtain the equations of motion for the variances of the means , and these are here we have written the equations in terms of dimensionless ( scaled ) versions of the variances , defined by these scaled variances are those of the scaled variables and . using the scaled variances simplifies the equations , and exposes the important rate constants in the dynamics . from now on any variance with a tilde will indicate the dimensionless version of that variance ( e.g. ) .the scaled versions of the thermal variances are the harmonic oscillator ground state has . 
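before turning to the steady - state algebra , a quick numerical aside : the stochastic equations for the conditional means under the feedback force can be integrated directly with an euler - maruyama step , which makes the role of the feedback damping rate visible . in the sketch below the conditional variances are frozen at fixed values , the units are scaled so the oscillator frequency is one , and the innovation - noise coefficients follow one common convention for a continuous position measurement ; all of these choices , and the parameter values , are assumptions for illustration and may differ from the normalization used in the text . the point is only that increasing the feedback damping suppresses the fluctuations of the conditional means .

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters in scaled units (omega = 1); values are assumptions
omega, gamma = 1.0, 1e-4     # oscillator frequency and damping
k            = 0.02          # measurement rate (scaled)
Gamma        = 0.2           # feedback damping rate applied to <p>
Vx, C        = 1.05, 0.02    # conditional variances, held fixed (assumed)

dt, steps = 1e-3, 200_000
x, p = 0.0, 0.0
xs, ps = np.empty(steps), np.empty(steps)

for i in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    # conditional-mean dynamics driven by the measurement record; the
    # innovation coefficients sqrt(8*k)*Vx and sqrt(8*k)*C are an assumed
    # convention, not the paper's exact normalization
    dx = omega * p * dt + np.sqrt(8 * k) * Vx * dW
    dp = (-omega * x - (gamma + Gamma) * p) * dt + np.sqrt(8 * k) * C * dW
    x, p = x + dx, p + dp
    xs[i], ps[i] = x, p

print("var(<x>) =", xs.var(), "  var(<p>) =", ps.var())
# increasing Gamma suppresses these variances, i.e. the noise fed into the
# means by the measurement record is counteracted by the feedback force
```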
to calculate the total variances in the steady - state we need to determine the steady - states of both the conditional variances and the variances of the meansthis can be done by setting the left - hand - sides of the equations of motion to zero , and solving the resulting algebraic equations .there is a big difference between the differential equations for the conditional variances and those for the variances of the means : we have written the equations for the latter in matrix form because they are linear , whereas the equations for the former are not , and as they stand do not have analytic solutions .if the harmonic oscillator had no damping , so that were zero , there would be an analytic solution for the steady - states of the conditional variances , being where we see from these solutions that if we want to keep the oscillator close to the ground state , for which , then must be close to unity , and thus ( or equivalently , assuming that ) . while we can not obtain an analytic solution for the steady - states of the conditional variances for all values of , we can obtain an approximate solution valid when is much smaller than and . since is also much smaller than is more than one way to do this expansion .we do it by using the solution above for as our zeroth - order solution , and writing and , where is the small parameter , and is unrestricted . we then solve to obtain the steady - states to first - order in .we are subsequently free to expand the zeroth - order solutions to second order in to obtain solutions to second order in .the result of the first expansion is now expanding to second order in we obtain \ ! \tilde{v}^t-1 \!\right ) , \nonumber \\\tilde{c}^{{\mbox{\scriptsize ss } } } & = & \tilde{c}^0 + \left(\frac{\gamma}{2\omega}\right ) \left(1 - \sqrt{\eta } r \right ) \left(\tilde{v}^t - \frac{[1- \eta r^2/8]}{\sqrt{\eta } } \right ) , \nonumber \\\tilde{v}_p^{{\mbox{\scriptsize ss } } } & = & \tilde{v}_p^0 + ( \tilde{v}_x^{{\mbox{\scriptsize ss } } } -\tilde{v}_x^0 ) + \sqrt{\eta } r ( \tilde{c}^{{\mbox{\scriptsize ss } } } - \tilde{c}^0 ) .\nonumber\end{aligned}\ ] ] for efficient detection these equations simplify considerably , and we see more clearly the effects of the measurement and thermal noise : , \nonumber \\\tilde{c}^{{\mbox{\scriptsize ss } } } & = & \tilde{c}^0 + \left(\frac{\gamma}{\omega}\right ) \left[n_t \left(1 - \frac{8\tilde{k}}{\omega } \right ) + \left ( \frac{2\tilde{k}}{\omega } \right)^{\!\ !2 } \right ] , \nonumber \\\tilde{v}_p^{{\mbox{\scriptsize ss } } } & = & \tilde{v}_p^0 + ( \tilde{v}_x^{{\mbox{\scriptsize ss } } } -\tilde{v}_x^0 ) + r ( \tilde{c}^{{\mbox{\scriptsize ss } } } - \tilde{c}^0 ) .\nonumber\end{aligned}\ ] ] since we are expanding to first - order in and second order in , we should drop terms proportional to as they contribute no more than the other third - order terms that have already been dropped .we have kept these in the above equations merely to show how the second - order terms in affect the solution . 
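the algebraic equations for the steady conditional variances are a continuous - time riccati equation , so the perturbative expressions above can be cross - checked numerically . the minimal sketch below uses scipy's algebraic riccati solver , under assumed noise normalizations ( a measurement record proportional to the position with unit white noise , back - action diffusion proportional to the measurement rate on the momentum , and thermal diffusion proportional to the damping rate ) ; these conventions may differ from the scaling used in the text , so the numbers are indicative only . the zero - damping case is included for comparison with the closed - form solution quoted above .

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# scaled units; parameter values and noise normalizations are assumptions
omega, gamma, n_T = 1.0, 1e-4, 100.0
k, eta            = 0.02, 1.0                   # measurement rate and efficiency

A  = np.array([[0.0, omega],
               [-omega, -gamma]])               # drift of (x, p)
A0 = np.array([[0.0, omega],
               [-omega, 0.0]])                  # zero-damping drift
C  = np.array([[np.sqrt(8 * eta * k), 0.0]])    # measured quantity ~ position
R  = np.array([[1.0]])                          # measurement noise
Q  = np.diag([0.0, 2 * gamma * (n_T + 0.5) + 2 * k])   # thermal + back-action

# filtering Riccati equation  A P + P A^T - P C^T R^-1 C P + Q = 0,
# solved with the control-form CARE by passing the transposed matrices
P  = solve_continuous_are(A.T,  C.T, Q, R)
P0 = solve_continuous_are(A0.T, C.T, np.diag([0.0, 2 * k]), R)

print("steady conditional Vx, Vp, C :", P[0, 0], P[1, 1], P[0, 1])
print("gamma = 0 limit            :", P0[0, 0], P0[1, 1], P0[0, 1])
```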
calculating the steady - state variances of the meansis straightforward because the equations of motion are linear , although the resulting expressions are rather cumbersome .we find that these variances will only be small , and thus the oscillator close to the ground state , when .this makes sense because the noise from the measurement that causes the means to fluctuate is proportional to , and it is the job of the feedback damping at rate to counteract it .we therefore expand the solutions for the variances of the means in the small parameter .if we want to allow to be smaller than then we can assume that .we keep only first - order terms in but we do not drop any terms in , so is not restricted to being small compared to .the resulting expressions for the variances of the means are , \nonumber \\\tilde{\mbox{\textbf{\textit{v}}}}_{\bar{p}}^{{\mbox{\scriptsize ss } } } & = & \frac{4\tilde{k}}{\gamma } \left [ ( \tilde{v}_x^{{\mbox{\scriptsize ss}}})^2 + ( \tilde{c}^{{\mbox{\scriptsize ss}}})^2 \right ] , \nonumber \\ \tilde{\mbox{\textbf{\textit{c}}}}^{{\mbox{\scriptsize ss } } } & = & -\frac{4\tilde{k}}{\omega } ( \tilde{v}_x^{{\mbox{\scriptsize ss}}})^2 . \nonumber \end{aligned}\ ] ] we can see from these expressions that we can not achieve good cooling if we make too large .this is because our feedback force damps only the momentum , and so to confine the position as well as the momentum we need the oscillation of the oscillator to transform position into momentum ( and vice versa ) on a timescale at least as fast as the damping rate .now we have the steady - state solutions for the conditional variances and the variances of the means , we can combine them to obtain the total variances as per eq.([cvhm2 ] ) .since , eq.([bbrel ] ) tells us that , which we can use to obtain the mean steady - state phonon number . to second order in result is the answer depends both on and . it would be nice to find the optimal value of and thus eliminate from the expression .however this optimal value is , which is unrealistic for practical purposes , and would also imply that the feedback can significantly change the frequency . in this casewe would also be able to increase to reduce further .so instead of minimizing with respect to , we assume , and thus . keeping only terms up to first - order in we then have , we minimize this expression with respect to the measurement strength to determine the best possible cooling .the optimal measurement strength is and the best cooling is the cooling factor is begin by examining the expressions for for sideband cooling and measurement - based feedback for a fixed output coupling rate , given respectively by eqs.([bdbssk ] ) and ( [ nbarm ] ) . in writing these equations now , we keep only the terms that make the most important contribution to limiting the cooling .we have the origins of the various terms in these expressions can be clearly identified .the terms proportional to give the value of that results from the balance between the rate at which energy is injected into the oscillator from the bath at the rate , and the rate at which it is extracted by the controller .we can read off the energy extraction rates as the rate for sideband cooling is the series combination of two conductances , which makes intuitive sense . 
curiously the measurement has the advantage as far as the extraction rate is concerned , but this is because this rate takes into account only the desirable effect of the purification induced by the measurement .this purification comes at the expense of projection noise , to which we will return below .the second set of terms , those proportional to are remarkably similar for the two controllers .the term in the expression for measurement - based feedback comes purely from the squeezing of the conditional momentum variance .it is due to the fact that the reduction in the position variance due to the measurement of position causes an increase in the momentum variance , which is precisely the back - action noise of the measurement .this term is not a fundamental restriction of measurement - base cooling , it is present only because we are restricted to a linear interaction with the resonator , and thus a measurement that is linear in the coordinates and . for sideband coolingthe term proportional to is due to the breakdown of the rotating - wave approximation ; when is small compared to the oscillator frequency the interaction acts purely to transfer energy between the two , but this is no longer true as is increased relative to , in which case it generates excitations in both systems .this limitation on the coherent feedback cooling is not a fundamental one , but is due to the linearity of the interaction .the term proportional to is due to the fact that the damping of the auxiliary interferes with the energy transfer process .quantum mechanically this can be attributed to the quantum zeno effect , since the damping is a measurement process that inhibits the unitary dynamics .since the joint system is linear , and is therefore equivalent to a noisy classical system , there must also be a classical interpretation .one possibility is that in changing the transfer function of the auxiliary , the damping inhibits the energy transfer in a way that is similar to taking the auxiliary off - resonance with the oscillator .the only way to avoid this limitation on the cooling appears to be to make the coherent control process time - dependent , rather than using an auxiliary with a constant hamiltonian .we will return to this topic below .so far the terms in the expressions for both cooling schemes parallel each other to a large extent .while they may have somewhat different origins they have very similar forms , and would lead to similar cooling behavior . 
for example , the heating due to the back - action noise of the measurement is similar to the heating due to the correction to the rotating - wave approximation that appears in sideband cooling , and both are caused by the nature of the interaction .the final term in the expression for measurement - based cooling is quite different , as it has no parallel in sideband cooling .it is the heating due to the noise on the mean position and momentum that comes from the random nature of the measurement results , and is a necessary companion to the purification generated by the measurement .this noise is sometimes referred to as the _ projection noise _ of the measurement .this noise , being proportional to , is the projection noise of the position measurement when the oscillator is in its ground state .it is the role of the feedback force to counteract this noise , which is why the resulting heating is proportional to .this noise is not a fundamental limitation on measurement - based cooling , but is due to the fact that it is the position of the oscillator that is measured .it is the heating due to the projection noise that makes measurement - based feedback significantly inferior to sideband cooling for cooling an oscillator via a linear interaction .there are two reasons for this .the first is that the heating terms in sideband cooling have on the bottom line , whereas the heating due to the projection noise is suppressed only by . in practical situations , andcertainly in current experiments , is considerably smaller than .the second reason is that the heating term coming from the projection noise is first - order in .the heating term in sideband cooling that is proportional to also has a factor of .since and can be expected to be similar for optimal cooling , the heating for sideband cooling is effectively second - order in . because of this , even if we set , and thus replace in eq.([nmbf ] ) with , the maximal cooling for measurement - based feedback scales as , while that for sideband cooling scales as .what would happen if we were able to apply a classical feedback `` force '' to damp the position as well as the momentum ? in this case the term would no - longer appear in eq.([nbarm ] ) and we would no - longer need . in the limitin which the projection noise would be eliminated , and the performance of measurement - based cooling would be which is obtained by keeping only the first two terms in eq.([mbfx ] ) .this is slightly better than that for sideband cooling , but requires quite different interactions and very large feedback forces .we have found that the factors that place limits on both coherent and measurement - based feedback for cooling an oscillator with a linear interaction are not fundamental restrictions imposed by quantum mechanics , but are due to the linear nature of the interaction . both control methods could perform much better with a non - linear interaction . 
nevertheless , resolved - sideband cooling is able to make better use of the linear interaction and achieve much better cooling that measurement - based feedback .we have found that it is not the back - action noise of the measurement which leads to this difference in performance , but the projection noise of the measurement .finally , there is another important difference between coherent feedback and measurement - based feedback in this linear cooling scenario .the performance of the coherent scheme can be greatly improved even without a nonlinear coupling , merely by making the interaction rate time - dependent .this eliminates the need for the rotating - wave approximation , with the result that the energy in the oscillator can be swapped into the auxiliary within a single period of the oscillator .the maximal cooling is then measurement - based feedback can not be improved in this way , and instead requires a non - linear interaction .kj and fs were partially supported by the nsf under project nos .phy-1005571 and phy-1212413 , and kj was partially supported by the aro muri grant w911nf-11 - 1 - 0268 .hn and mj were supported by the australian research council , and mj was also supported by the air force office of scientific research ( afosr ) under grant afosr fa2386 - 09 - 1 - 4089 aoard 094089 .to analyze measurement - based feedback in section [ measfb ] we used the stochastic master equation , while we used the quantum noise formalism of input - output theory to analyze coherent feedback . in the standard formalism of quantum mechanics used by physicists ,the approach used to derive the former is very different from the analysis that leads to the latter . since the derivation of the quantum langevin equations by collett and gardiner ( cg ) involves approximations , it is not at all clear that they describe the same physical process as the stochastic master equation. nevertheless , one can show explicitly that the auto - correlation functions of the output fields of the former agree exactly with those of the measurement records of the latter , and this is enough to show equivalence for most applications .but doing so is not simple ( see for example ) .there is another way to formulate measurement theory in quantum mechanics , which uses measure theory in the way that it is used in probability theory .the resulting structure is called _ quantum probability _ .this formulation of measurement theory can be used to construct both the quantum noise formalism and continuous measurement theory , and in this case it is clear by construction that the two descriptions refer to the same process .this quantum probability formulation of input - output theory was first developed by hudson and parthasarathy ( hp ) , and exploited for continuous measurement by belavkin . because the hp formalism contains an explicit mapping between the quantum noise operators and the classical measurement record the latter being a classical stochastic process it allows us to write a measurement - based feedback process using quantum langevin equations , something that is not possible in the input - output formalism as derived by cg using the standard formulation of quantum mechanics . 
to do this for the measurement - based feedback cooling scheme described in section [ physimp ] we first write down the quantum langevin equations for the oscillator , which are given by eq.([ddtmeans1 ] ) .the output field that our controller measures is the hp formalism now goes beyond the cg formalism by telling us that the white noise quantum field can be interpreted immediately , without any further machinery , as a classical white noise process .that is , we can fully describe the stream of measurement results from a homodyne detection performed on the field by _itself_. the reason for this is that , in the quantum probability framework , is a classical noise process ; its quantum nature is captured by the fact that it does not commute with other noise processed that are also contained in the full probability space of events .the quantum nature of the output field is important : it means that we can not treat _ both _ and as classical noise sources .this is because when we measure the output field we can not choose to measure both in the -basis and -basis at the same time .we could chose a measurement that gave us partial information about both and , but it would produce a stream of measurement results that was neither equal to or .practically what this means is that we are free to send both and into a quantum system in order to process them , but we can only send one of them through a classical processing device .the dynamics of all quantum systems will preserve the correct relationship between non - commuting operators , but classical processing will in general not do so because it is less restricted . interpreting now as the classical measurement record, we can obtain our estimates and from in the usual way by using eqs.([ch8mx1 ] ) and setting to complete the feedback loop we include the feedback force in the quantum langevin equations for the mechanical oscillator , which are then while it may seem odd that the c - number now appears in a differential equation for operators , as usual any c - number merely acts as a multiple of the identity operator .the measurement - based feedback process is now described by the coupled equations ( [ ch8mx1 ] ) , ( [ ddtcondvx ] ) ( [ ddtcondvp ] ) , and ( [ ddtmeans1x ] ) .these equations can be compared more easily to the langevin equations describing the coherent feedback protocol than can the sme .we can further eliminate the output field from eqs.([ch8mx1 ] ) and ( [ ddtmeans1x ] ) by using eq.([x1out ] ) .the result is a set of langevin equations driven by the input noise operators . as with the langevin equations for sideband cooling, we can use these to calculate power spectra and correlation functions , thus providing an alternative method for analyzing measurement - based feedback protocols .examples of the use of the hp input - output theory to describe measurement feedback can be found in .steady - states for linear open quantum systems can be obtained by solving the langevin equations in the frequency domain , and then integrating the spectrum over all frequencies .this integration can be done with an integral formula that can be found in gradshteyn and ryzhik , and which we give below . to begin we recall that the langevin equations for the coupled oscillators , when the interaction is modulated at the frequency , is given by and for compactness we have defined and . to solve the equations of motion for in the frequency domain we take the fourier transform of both sides of the equation . 
denoting the frequency - space variables with a caret , the equations of motion become algebraic . rearranging gives the solution in closed form ; the dynamical variables are therefore given by a linear combination of the noise sources , where the coefficients are functions of and therefore filter the noise . inverting the matrix , for which an algebraic software package is invaluable , we obtain the matrix , with and with the determinant of the form \( \left[ \,\cdots\, \right]\left[ f(\gamma)^2 + \omega^2 \right] - \lambda^2 \omega^2 \) . two important properties of the matrix are i ) that each element is a ratio of polynomials , and ii ) that the imaginary unit and the frequency always appear together in eq.([eq132 ] ) . this second property means that taking the complex conjugate of any element of is the same as replacing with . the steady - state variance of a dynamical variable is given by integrating the spectrum for that variable over all frequencies . the spectrum for ( for example ) is given accordingly . we can obtain the correlation functions for the dynamical variables directly from those of the noise sources : where is the correlation matrix for the noise sources . the spectrum for is , and that for is of the form \( \frac{ \,\cdots\, + ( \lambda^2 \omega)^2 } { d(\nu)d(-\nu ) } + \frac{k(\lambda \omega)^2 |f(\gamma)|^2 \left ( |f(k)|^2 + \omega^2 \right)}{d(\nu)d(-\nu ) } \) . the expressions for the spectra contain high - order polynomials in the denominator . if these polynomials had no special structure , they would likely be impossible to integrate analytically . the fact that this is possible is due to the following remarkable integral formula , which is a slightly simplified version of a formula in gradshteyn and ryzhik : where must satisfy , and note that and are determinants of matrices that differ only by their first row . here we need the case , for which the integral is . using the above integral formula , and the fact that the steady - state mean squares of and are , we obtain , where we have defined \( c = ab - \lambda^2 \omega^2 \) .
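the frequency - domain integration above can also be cross - checked numerically : for any stable linear network driven by white noise , the steady - state covariance matrix solves a lyapunov equation . the sketch below applies this to a resonant , position - coupled oscillator / auxiliary pair with damping ; the drift and the diffusion normalization ( thermal noise on the mechanical momentum , vacuum - level noise on the auxiliary quadratures , vacuum variance one half ) are assumptions chosen to illustrate the method rather than to reproduce the modulated model of the text exactly .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_covariance(A, D):
    """Steady-state covariance V of dz = A z dt + noise, from A V + V A^T + D = 0."""
    return solve_continuous_lyapunov(A, -D)

# illustrative parameters (scaled units, vacuum variance 1/2); assumptions only
omega, gamma, kappa, lam, n_T = 1.0, 1e-5, 0.1, 0.05, 1e3

A = np.array([[0.0,    omega,  0.0,       0.0],
              [-omega, -gamma, -lam,      0.0],
              [0.0,    0.0,    -kappa/2,  omega],
              [-lam,   0.0,    -omega,    -kappa/2]])   # state (x, p, X, P)

D = np.diag([0.0, gamma * (2*n_T + 1), kappa/2, kappa/2])  # thermal + vacuum noise

V = steady_covariance(A, D)
n_bar = 0.5 * (V[0, 0] + V[1, 1] - 1.0)   # mean phonon number of the mechanics
print("uncooled n_T =", n_T, "  steady-state n_bar =", n_bar)
```

the same two - line lyapunov solve works for the full measured or modulated networks once their drift and diffusion matrices are written down , which makes it a convenient sanity check on the analytic expressions obtained from the integral formula .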
we show that in the regime of ground - state cooling , simple expressions can be derived for the performance of resolved - sideband cooling an example of coherent feedback control and optimal linear measurement - based feedback cooling for a harmonic oscillator . these results are valid to leading order in the small parameters that define this regime . they provide insight into the origins of the limitations of coherent and measurement - based feedback for linear systems , and the relationship between them . these limitations are not fundamental bounds imposed by quantum mechanics , but are due to the fact that both cooling methods are restricted to use only a linear interaction with the resonator . we compare the performance of the two methods on an equal footing that is , for the same interaction strength and confirm that coherent feedback is able to make much better use of the linear interaction than measurement - based feedback . we find that this performance gap is caused not by the back - action noise of the measurement but by the projection noise . we also obtain simple expressions for the maximal cooling that can be obtained by both methods in this regime , optimized over the interaction strength .
many problems of fluid thermo - mechanics involving unbounded domains occur in many areas of applications , e.g. flows of a liquid in duct systems , fluid flows through a thin or long pipe or through a system of pipes in hemodynamics and so on . from a numerical point of view , these formulations are not convenient and quite practical .therefore , an efficient natural way is to cut off unbounded parts of the domain by introducing an artificial boundary in order to limit the computational work .then the original problem posed in an unbounded domain is approximated by a problem in a smaller bounded computational region with artificial boundary conditions prescribed at the cut boundaries .hence , let be a bounded domain in with boundary . in a physical sense, represents a `` truncated '' region of an unbounded system of pipes occupied by a moving heat - conducting viscous incompressible fluid . will denote the `` lateral '' surface and represents the open parts ( cut boundaries ) of the piping system .it is physically reasonable to assume that in / outflow pipe segments extend as straight pipes .more precisely , and are -smooth open disjoint not necessarily connected subsets of such that , for , , , , , , and the measure of is zero and are smooth nonintersecting curves ( this means that are smooth curved nonintersecting edges and vertices ( conical points ) on are excluded ) .moreover , all portions of are taken to be flat and and form a right angle at all points of ( in the sense of tangential planes ) , see figure [ pipe ] .the flow of a viscous incompressible heat - conducting fluid is governed by balance equations for linear momentum , mass and internal energy : here , and denote the unknown velocity , pressure and temperature , respectively .tensor denotes the symmetric part of the velocity gradient .data of the problem are as follows : is a body force and a heat source term .positive constant material coefficients represent the kinematic viscosity , reference density , heat conductivity and specific heat at constant volume . following the well - known boussinesq approximation ,the temperature dependent density is used in the energy equation and to compute the buoyancy force on the right - hand side of equation .everywhere else in the model , is replaced by the reference value .change of density with temperature is given by strictly positive , nonincreasing and continuous function , such that the energy balance equation takes into account the phenomena of the viscous energy dissipation and adiabatic heat effects .for rigorous derivation of the model like we refer the readers to .rigorously derived asymptotic models describing stationary motion of heat - conducting incompressible viscous fluid through pipe - like domains can be found in . to complete the model , suitable boundary and initial conditions have to be added .concerning the boundary conditions of the flow , it is a standard situation to prescribe a homogeneous no - slip boundary condition for the velocity of the fluid on the fixed walls of the channel , i.e. 
since nothing is known in advance about the flow through the open parts , it is really not clear what type of boundary condition for the velocity should be prescribed on .the condition frequently used in numerical practice for viscous parallel flows is the most simple outflow boundary condition of the form which seems to be natural since it does not prescribe anything on the cut cross - section of an in / outlets of the truncated region .therefore , this condition is usually called the `` do nothing '' ( or `` free outflow '' ) boundary condition . in , is the outer unit normal vector to , , while quantities are given functions .in particular , for time - dependent flows , are given functions of time .boundary condition results from a variational principle and does not have a real physical meaning . for further discussion on theoretical aspects as well as practical difficulties of this boundary condition and the physical meaning of the quantities we refer to .assume that are given smooth functions of time on , , and consider the smooth extension on such that .introducing the new variable this amounts to solving the problem with the homogeneous `` do nothing '' boundary condition transferring the data from the right - hand side of the boundary condition to the right - hand side of the linear momentum balance equation .hence , for simplicity , we assume throughout this paper , without loss of generality , that , i.e. concerning the heat transfer through the walls of pipes we consider the newton boundary condition in which designates the heat transfer coefficient , is the prescribed temperature outside the computational domain and represents the heat flux imposed on the lateral surfaces . on the open parts of the piping system we use the classical outflow ( `` do nothing '' )condition initial conditions are considered as the given initial velocity field and the temperature profile over the flow domain obtained results in this paper can be extended to problems with dirichlet or the mixed ( dirichlet - neumann ) boundary conditions for the temperature on the walls .namely , instead of , we can consider ( , ) the paper is organized as follows . in section [ sec : preliminaries ] , we introduce basic notations and some appropriate function spaces in order to precisely formulate our problem .furthermore , we rewrite the energy equation by using the appropriate enthalpy transformation . in section [ main_result ] , we present the strong form of the model for the non - stationary motion of viscous incompressible heat - conducting fluids in a system of 3d pipes considered in our work , specify our smoothness assumptions on data and formulate the problem in a variational setting .we also provide the bibliographic remarks on the subject and indicate what kind of difficulties we should overcome in the process .the main result , the existence of strong - weak solutions , stated at the end of section [ main_result ] , is proved in section [ sec : proof ] .the proof rests on application of schauder fixed point theorem .first , we present basic results on the existence and uniqueness of solutions to auxiliary problems , the decoupled initial - boundary value problems for the non - stationary stokes system with mixed boundary conditions and the parabolic convection - diffusion equation with the nonlinear boundary condition . 
in the proof of the main result we rely on the energy estimates for auxiliary problems , regularity of stationary solutions to the stokes problem and interpolations - like inequalities .vectors and vector functions are denoted by boldface letters . throughout the paper , we will always use positive constants , , , , , which are not specified and which may differ from line to line but do not depend on the functions under consideration .throughout this paper we suppose ] such that and the following system holds for every \in { v}_{{\gamma_2}}^{1,2 } \otimes { w}^{1,2} ] is called the strong - weak solution to the system .the main advantage of the formulation of the navier - stokes equations in free divergence spaces is that the pressure is eliminated from the system .having in hand , this unknown can be recovered in the same way as in .let us briefly describe some difficulties we have to solve in our work .the equations represent the system with strong nonlinearities ( quadratic growth of in dissipative term ) without appropriate general existence and regularity theory . in , frehse presented a simple example of discontinuous bounded weak solution of the nonlinear elliptic system of the type , where is analytic and has quadratic growth in .however , for scalar problems , such existence and regularity theory is well developed ( cf . ) . nevertheless , the main ( open ) problem of the system consists in the fact that , because of the boundary condition , we can not prove that .consequently , we are not able to show that the kinetic energy of the fluid is controlled by the data of the problem and solutions of need not satisfy the energy inequality .this is due to the fact that some uncontrolled `` backward flow '' can take place at the open parts of the domain and one is not able to prove global existence results . in , kramar and neustupa prescribed an additional condition on the output ( which bounds the kinetic energy of the backward flow ) and formulated steady and evolutionary navier - stokes problems by means of appropriate variational inequalities . in , kuera and skal 'ak proved the local - in - time existence and uniqueness of a variational solution of the navier - stokes equations for iso - thermal fluids , such that under some smoothness restrictions on and . in ,the same authors established similar results for the boussinesq approximations of the heat conducting incompressible fluids . in ,kuera supposed that the `` do nothing '' problem for the navier - stokes system is solvable in suitable function class with some given data .the author proved that there exists a unique solution for data which are small perturbations of the original ones . in case of isothermal flows , in ,the first author of the present paper proved local - in - time existence and uniqueness of regular solutions to isothermal navier - stokes flows for newtonian fluids in three - dimensional non - smooth domains with various types of boundary conditions , such that , , which is regular in the sense that solutions possess second spatial derivatives . in ,the same author proved the local - in - time existence , global uniqueness and smoothness of the solution of an initial - boundary - value problem for boussinesq flows in three - dimensional channel - like domains excluding viscous dissipation and considering constant density in the energy balance equation . in case of corresponding stationary flows , the existence , uniqueness and regularity of the solutionhas been recently proved in . 
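Since the existence proof rests on a fixed-point argument, it may help to recall the statement presumably meant by Theorem [th:schauder]; the following is the standard Schauder theorem in our own wording, not a quotation from the source.

```latex
\textbf{Theorem (Schauder).}\ Let $K$ be a nonempty, closed, bounded and convex subset of a Banach
space $X$, and let $\mathcal T : K \to K$ be continuous with $\mathcal T(K)$ relatively compact
in $X$. Then $\mathcal T$ possesses at least one fixed point in $K$.
```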
in this paper, we extend the existence result from to non - stationary problems .the following theorem represents the main result of this paper .[ theorem : existence_result ] assume and then there exists the strong - weak solution ] and for almost every and let us estimate the right - hand side in . applying sobolev embeddings together with the young s inequality we deduce now , by virtue of and , we can write hence , for \in x \otimes { l^{2}(i;l^{2})}u ] be a sequence in such that \rightarrow [ \tilde{{\mbox{\boldmath{}}}},\tilde{e } ] \textmd { in } x \otimes { l^{2}(i;l^{2})}.\ ] ] let = \mathcal{t}([\tilde{{\mbox{\boldmath{}}}},\tilde{e}])u ] .writing for ] separately and subtracting their respective equations we get ( in view of ) now , let us simply modify the right - hand side to obtain for convective terms on the right - hand side in we can write and combining with the latter estimates we deduce in addition , in view of , we have in and , converges to zero by the lebesgue dominated convergence theorem .now , estimates and yield provided \rightarrow [ \tilde{{\mbox{\boldmath{}}}},\tilde{e } ] \textmd { in } x \otimes { l^{2}(i;l^{2})}.\ ] ] we now turn , for a moment , to the energy balance equation .using the same procedure as before , writing for ] , respectively , and subtracting both resulting equations yield the next step is to use in order to obtain let us estimate all terms on the right - hand side in to get and further and finally in view of , choosing sufficiently small and combining together with the estimates we deduce applying the gronwall s inequality to the estimate and the fact that , we arrive at for all , where here , by the lebesgue dominated convergence theorem and , for all as and by we deduce finally , in view of and we conclude = \mathcal{t}([\tilde{{\mbox{\boldmath{}}}}_n,\tilde{e}_n ] ) \rightarrow \mathcal{t}([\tilde{{\mbox{\boldmath{}}}},\tilde{e } ] ) = [ { { \mbox{\boldmath{}}}},{e}].\ ] ] hence , is continuous . using theorem [ aubin_compact ] and the embeddings we have by continuity of and compact embeddings and , is completely continuous .we conclude the proof by deriving some estimates of and . applying to linear problem with and taking into account and we can write for the estimate further , following corollary [ corolary_stokes ] we have let and . combining and we get hence , . 
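The estimates above, and those in the next step, repeatedly invoke two elementary tools; their standard forms, stated here in our own wording for convenience, are:

```latex
\begin{aligned}
&\text{(Young, with }\varepsilon>0\text{):}\quad
  ab \le \varepsilon\,a^{p} + C(\varepsilon)\,b^{q},
  \qquad a,b\ge 0,\ \ \tfrac1p+\tfrac1q=1,\ \ 1<p,q<\infty,\\[4pt]
&\text{(Gronwall, }a,b\ge 0\text{ integrable):}\quad
  y'(t) \le a(t)\,y(t) + b(t)\ \text{ on }(0,T)
  \;\Longrightarrow\;
  y(t) \le \Bigl(y(0) + \int_0^t b(s)\,ds\Bigr)\exp\Bigl(\int_0^t a(s)\,ds\Bigr).
\end{aligned}
```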
let us turn , for a moment , to the equation with the initial condition and derive some estimates of .one is allowed to use as a test function in to obtain latexmath:[\[\begin{gathered } \label{est:400 } \frac{1}{2}\frac{d}{dt}\|e(t)\|^2_{l^2 } + a_{e}(\kappa(\tilde{e}(t)),{e}(t),e(t ) ) + \gamma(\beta({e}(t)),e(t ) ) \\\leq everywhere in .we are going to estimate all terms on the right - hand side of the latter inequality .evidently , we have can be estimated as in to obtain the last term in can be handled using the interpolation inequality and the young s inequality to get by virtue of and the fact that we have .hence , choosing sufficiently small in and combining we arrive at using the gronwall s inequality yields \end{gathered}\ ] ] for all .in view of and the fact that we have therefore hence , combining we can write \exp\left ( c_1 + c_2 t \right).\end{gathered}\ ] ] now , combining , and , we deduce that satisfies the _ a priori _ estimate of the general form we note that , by the _ a priori _ estimate , is bounded in independently of and .we have previously shown that implies .hence , there exists a fixed ball defined by \in x \otimes l^{2}({i};{l}^{2 } ) , \ ; \|{\mbox{\boldmath{}}}\|_x \leq \frac{1}{2 c_s t^{1/8 } } , \ ; \|z\|_{l^{2}({i};{l}^{2 } ) } \leq r \right\}\ ] ] ( sufficiently large ) such that , where the operator is completely continuous .now , by theorem [ th : schauder ] , there exists the strong - weak solution $ ] to problem . this completes the proof of the main result . first author of this work has been supported by the project gar 13 - 18652s .the second author of this work has been supported by the _ croatian science foundation _( scientific project 3955 : _ mathematical modeling and numerical simulations of processes in thin or porous domains _ ) .bene , m. , kuera , p. : _ solutions to the navier - stokes equations with mixed boundary conditions in two - dimensional bounded domains . _submitted for publication , available online at : http://arxiv.org/abs/1409.4666 ladyzhenskaya , o.a . ,solonnikov , v.a . ,uraltseva , n.n . : _ linear and quasilinear equations of parabolic type . _ translations of mathematical monographs 23 , american mathematical society , providence , r.i .
We study an initial-boundary-value problem for time-dependent flows of heat-conducting viscous incompressible fluids in a system of three-dimensional pipes on a bounded time interval. We are motivated by the bounded-domain approach with "do-nothing" boundary conditions. In terms of the velocity, pressure and enthalpy of the fluid, such flows are described by a parabolic system with strong nonlinearities, including artificial boundary conditions for the velocity and a nonlinear boundary condition for the so-called enthalpy of the fluid. The present analysis is devoted to the proof of the existence of weak solutions to this problem; in addition, we establish some regularity of the velocity of the fluid.
the dynamics of crime and the impact of social relations on the increase of violence has been the object of study in several areas such as social sciences , criminology , computing , economics and physics .when , in the 1950s , naroll and bertalanffy utilized the concept of allometry a term originally coined in the field of biology to describe scaling laws , _e.g. _ , the relationship between mass and metabolic rate of organisms so as to adapt it to the social context , a particularly promising line of research was opened , which today arouses interest of scientists from wide - ranging areas .more recently , bettencourt _ et al ._ revealed that allometric relationships are statistically present in many aspects of city infrastructures and dynamics .in particular , they observed a characteristic superlinear relation between the number of serious crimes and the resident population in the united states ( us ) cities which clearly denotes the an intricate social mechanism behind the dynamics of violence .more recently , melo _ et al . _ provided strong quantitative evidence showing that an entirely similar behavior can also be observed in brazil . despite the importance of the results presented in the aforementioned studies , their impacts on urban planning , more specifically , on the development of public safety policies ,are limited due to its purely descriptive nature , which prevents a deeper understanding of the organic causes leading to such a disproportional behavior . in this way ,a microdynamical approach based on the interactions between local neighborhoods within metropolitan areas certainly represents a more realist view to the problem .our research motivation is aimed at elucidating issues related to the understanding of the impacts of the influence of social relations on crime . in criminology ,the role of urban space and its social relations has been previously emphasized to explain the origin of crime .particularly , the theory of routine activities , proposed by cohen and felson , states that crimes , more specifically property crimes , such as robbery and theft , occur by the convergence of the routines of an offender , motivated to commit a crime , and an unprotected victim . in this context , can we explain the occurrence of crimes in different areas of the city based on the current population present in the corresponding urban sub - clusters ?is this effective population equally important for any type of crime ?how can we systematically delimit the boundaries of these local neighborhoods so that the social influence is accounted for in a consistent way ? in order to answer these questions , we used actual georeferenced data of crimes committed , and of resident and floating populations for census tracts in the city of fortaleza , brazil .the concept of resident population has been widely used to understand the effects that the growth of major cities has on social and environmental indicators .in particular , bettencourt _ et al . 
_ showed that the number of homicides scales superlinearly with the population of cities in the us .subsequently , melo _ et al ._ confirmed this behaviour for brazilian cities , but also demonstrated that suicides scale sublinearly with their resident populations .additionally , oliveira _ et al ._ found a superlinear allometric relationship between the resident population and emissions ; this study also raised the issue that the allometric exponents may undergo endogenous trends depending on the respective definition of urban agglomerates .regarding the floating population , it takes into account the complex dynamics of urban mobility , _ i.e. _ , it has a transient characteristic within the city . to measure social influence quantitatively in urban sub - clusters, we delimited their boundaries beyond the mere administrative divisions , _ e.g. _ division by neighborhoods or census tracts , by using the city clustering algorithm .this model defines these boundaries by population density and the level of commuting between areas of the city . in the present study, we identified that the incidence of property crimes has a superlinear allometric relationship , with the floating population in certain areas of the city .this result implies that the increased flow of people in a particular area of the city will take place at the cost of a proportionally greater rate of property crime happening in the region .more important , this superlinear behavior at the subscale of the city neighborhood provides a plausible explanation for the allometry of serious crimes found in . precisely , the floating population being systematically larger that the resident one should lead to the disproportional behavior observed for serious crimes and ( resident ) population at the city scale .we also found a superlinear allometric relationship between the number of crimes of disturbing the peace and the resident population .this result shows that the effect of social influence must be adequately correlated with resident or floating population , depending on the type of crime considered .in order to quantify the effects of the social influence on the police calls within a large metropolis , we used three georeferenced datasets for the brazilian city of fortaleza : from the first , we obtain the _ resident population _ ( pop ) , defined by the number of residents per _ census tract _ administrative territorial unit established for the purposes of cadastral control and provided by the brazilian institute of geography and statistics ( ibge ) . in all, fortaleza has 3018 census tracts , with approximately 2,400,000 residents , spread over an area of 314 square kilometers ( ) in year 2010 . from the second, we estimate the _ floating population _ ( flo ) for each census tract through a flow network built on data of the bus system provided by the fortaleza s city hall for the year 2015 .flo was measured by the number of people who pass through a census tract in one day .the city of fortaleza has 2034 buses circulating along 359 bus lines serving approximately 700,000 people who use the city s mass transit system on a daily basis . 
in the case of fortaleza, buses still represent the main means of public transportation .the process of generating of the flow network will be detailed in the supporting information ( see the s1 appendix ) .finally , we obtain the _ crime dataset _ from the integrated coordination office of public safety operations ( ciops ) , provides the geographic locations of 81,911 calls to the `` 190 '' ( phone number for emergency ) service about property crimes ( pc ) and 53,849 calls to the same service about disturbing the peace ( dp ) .these calls were made to police between august 2005 and july 2007 .the color maps show in figs [ fig1](a)-[fig1](d ) provide the local density in logarithmic scale of pop , flo , dp and pc , respectively , for the city of fortaleza .we can clearly identify stronger correlations between pop and dp as well as between flo and pc .in special , there is an evident higher incidence of hot spots in ( b ) and ( d ) than in ( a ) and ( c ) .also , at the downtown area , highlighted by black circles on each map , the high density of flo is compatible with the high rates of pc rate , while low pop densities seem to explain the low frequency of dp complaints . ) .( b ) the floating population ( flo ) by .( c ) the disturbing the peace ( dp ) complaints by .( d ) the property crimes ( pc ) by .the black circle highlights the downtown area of the city .this region has a low density of residents and disturbing the peace calls , and is dense in the flow of people and property crimes.,scaledwidth=100.0% ] in spite of seeming trivial to suggest that there are correlations between pop and dp , as well as flo and pc from the density maps shown in fig [ fig1 ] , the respective scattering plots show uncorrelated behaviors ( fig [ fig2 ] ) . actually , we conjecture that such correlations exist indeed .the most census tracts have a small area , sometimes the size of one city block , and it is likely that such an agglomeration scale is insufficient to capture the correlations and therefore reveal the impact of social influence on dp and pc . based on this hypothesis , we considered coarse - grainning these spatial properties of fortaleza into clusters , using the census tracts grid as a maximal resolution base .this clustering aims to find the boundaries of the flow of people and the residential areas of the city .to define the boundaries beyond administrative delineations , we considered the notion of spatial continuity through the aggregation of census tracts that are near one another using the city clustering algorithm ( cca ) .the cca constructs the population boundaries of an urban area considering two parameters , namely , a population density threshold , , and a distance threshold , . for the census tract ,the population density is located in its geometric center ; if , then the census tract is considered populated .the length represents a cutoff distance between census tracts to consider them as spatially contiguous , _ i.e. _ , all of the nearest neighboring census tracts that are at distances smaller than are clustered .hence , a cluster made by the cca is defined by populated areas within a distance less than , as seen schematically in fig [ fig3 ] .previous studies have demonstrated that the results produced by the cca can be weakly dependent on and for some range of parameter values . here will be quantified in meters ( m ) and in inhabitants by . 
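As a concrete illustration of the clustering step just described, the Python sketch below aggregates the populated census tracts whose geometric centres lie within the cutoff distance ℓ, using a union-find over near pairs. It is a minimal reimplementation of the CCA as described in the text, not the authors' code, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def cca_clusters(centers, density, pop, d_star, ell):
    """Toy City Clustering Algorithm (CCA) sketch.

    centers : (n, 2) array of census-tract geometric centres (projected, metres)
    density : (n,) population density of each tract
    pop     : (n,) population (resident or floating) of each tract
    d_star  : density threshold D*; tracts at or below it are discarded
    ell     : distance threshold (metres) for spatial contiguity
    Returns a list of (member_indices, total_population) per cluster.
    """
    keep = np.where(density > d_star)[0]          # only "populated" tracts enter the clustering
    tree = cKDTree(centers[keep])
    pairs = tree.query_pairs(r=ell)               # contiguous pairs of populated tracts

    parent = list(range(len(keep)))               # union-find to extract connected components
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)

    clusters = {}
    for local, tract in enumerate(keep):
        clusters.setdefault(find(local), []).append(tract)
    return [(np.array(m), pop[np.array(m)].sum()) for m in clusters.values()]
```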
; in contrast , the gray polygons can not be clustered ( ) .( a ) the red dot represents the geometric center of the census tract and the black circle with radius seeks neighbors belonging to the same cluster .( b)-(c ) the same search operation is made for the other census tracts and is done until there are no more neighbors within the radius of operation .( d ) the algorithm finishes running and the cluster is found.,scaledwidth=100.0% ] although the algorithm begins collating an arbitrary seed census tract , it does not produce distinct clusters when varying this seed ; the two factors that are responsible for clustering behavior are the parameters and . in order to determine the effect of the parameterization on the value of the exponent , we sought a range within the parameters where has low sensitivity to this variation .fig [ fig4 ] shows the behavior of exponent in function of the variation of the cca parameters .as shown in figs [ fig4]a and [ fig4]b , the value of the parameter obtained from the least - square regressions to the data of pop against dp remains practically insensitive to the cca parameters in the range , regardless of the values of adopted in the estimation process .moreover , the resulting average provides strong evidence to support a superlinear type of relation between these two variables .an entirely similar behavior can be observed for flo against dp , but now the exponent remains practically invariant within the range .the resulting average value of also indicates the presence of a superlinear allometric relation . by varying the parameters of the city clustering algorithm ( cca ) , and . *( a ) the variation of in correlation between the resident population ( pop ) and the disturbing the peace ( dp ) complaints is illustrated ; ( b ) the variation is illustrated for correlations between the floating population ( flo ) and the property crimes ( pc ) .both in ( a ) and ( b ) , the x - axis represents , and this parameter was varied from 0 to 800 meters ( m ) ( moment when the largest cluster consumes nearly the entire city ) ; exponent is shown on the y - axis .the colors of the lines represent the variation of the parameter , which corresponds to the resident population density in ( a ) and the floating population density in ( b ) ; this parameter was varied from 1000 to 8000 .it was not necessary to use values larger than 8000 because many census tracts start being discarded and the cca can no longer form clusters .the graphs also show red dashed lines ; between these lines is highlighted the range where , regardless of the parameterization , the exponent has smaller ranges of variation . finally , the dotted black line highlights exponent , in which the relationship between variables is isometric , in both graphs the exponent oscillates to low values of ; in ( a ) , the relationship is superlinear starting at m ; however in ( b ) , superlinearity appears at m.,scaledwidth=100.0% ] at this point , it is necessary to determine a suitable criteria for selecting an adequate value of the parameter . while lower values of lead to the formation of a larger number of cca clusters , reduced values of tend to eliminate fewer census tracts from the map , thereby including a larger portion of in the population under analysis . 
herewe propose that a proper choice of would be associated with a more homogeneous spatial distribution of the population .more precisely , we seek for cca clusters whose areas should scale as close to isometrically as possible with the population data , namely , as , with , where is the population , is the area ( are ) of the clusters , and is constant . when varying the city clustering algorithm ( cca ) parameters and . *( a ) the variation of in correlations between the resident population ( pop ) and the area ( are ) in square kilometers ( ) of clusters discovered with the cca .( b ) the variation for correlations between the floating population ( flo ) and are . in ( a ) and ( b ), the x - axis represents the parameter , and the y - axis represents the exponent .the line colors represent the variation of the parameter .,scaledwidth=100.0% ] in order to follow the procedure previously described , we obtain from fig [ fig5]a that and correspond to the pair of cca parameter values leading to the closest to isometric relation found between are and pop . in the case of are and flo , the values are and , as depicted in fig [ fig5]b ( see the s2 appendix ) .the census tracts were grouped , using the cca , by pop and flo ( fig [ fig6 ] ) . in the fig [ fig6]a ,the division achieved by pop is illustrated ; the city was divided using = 270 m and = 6000 resident people per . in the fig [ fig6]b, we show the division achieved by flo , using = 320 m and = 2000 floating people per per day .we emphasize that there are bigger gaps in the pop map ( fig [ fig6]a ) than in the flo map ( fig [ fig6]b ) .the reason for such behavior is the fact that fortaleza has commercial regions , _i.e. _ , regions where there is not a large presence of residents . regarding floating people ,there are people moving practically throughout the entire city , both in commercial areas and in residential areas . .( a ) the population density was used in order to find the boundaries of the clusters with = 270 m and = 6000 resident people per .( b ) the division found by considering urban mobility is shown ; the map illustrated here was generated for = 320 m and = 2000 floating people per km in one day in fortaleza.,scaledwidth=100.0% ] as compared to the results shown in fig [ fig2 ] , the application of the cca to the data discloses a rather different scenario for the correlations among the variables investigated here .first , as shown in fig [ fig7]a and [ fig7]b , superlinear relations in terms of power laws , , are revealed between pc and flo as well as between dp and pop , with exponents and , respectively . 
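The exponents reported in the parameter scans above and in the results that follow are obtained from ordinary least-squares regressions on log-transformed data. A minimal sketch of such a fit, including the bootstrap confidence interval mentioned in the figure captions, could look as follows; the variable names are placeholders and this is not the authors' estimation code.

```python
import numpy as np

def allometric_fit(x, y, n_boot=500, seed=0):
    """Estimate beta in y ~ a * x**beta by OLS on log-log data.

    Returns (beta, r_squared, (lo, hi)), where (lo, hi) is a bootstrap 95% CI
    obtained by resampling the clusters with replacement.
    """
    lx, ly = np.log10(x), np.log10(y)
    beta, intercept = np.polyfit(lx, ly, 1)
    resid = ly - (beta * lx + intercept)
    r2 = 1.0 - resid.var() / ly.var()

    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))     # resample clusters with replacement
        boot.append(np.polyfit(lx[idx], ly[idx], 1)[0])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return beta, r2, (lo, hi)

# e.g. flo = floating population per CCA cluster, pc = property-crime counts per cluster
# beta, r2, ci = allometric_fit(flo, pc)   # beta > 1 signals a superlinear relation
```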
in contrast , the relations obtained between dp and flo and pc and pop are closer to isometric ( linear ) , with exponents and respectively , although the low values of the corresponding determination coefficients indicate that these results should be interpreted with caution ., between the floating population ( flo ) and the property crimes ( pc ) .( b ) a superlinear relationship was also found between the resident population ( pop ) and the disturbing the peace ( dp ) complaints , with exponent .( c)-(d ) the scatering plots of dp with flo and pc with pop show an isometric relation was found between the variables , but with lower correlations than ( a ) and ( b ) .the is defined as determination coefficient .,scaledwidth=100.0% ]in this paper , we define a methodology to understand the impact that society has on calls to the police finding the boundaries of social influence in a large metropolis through the analysis of georeferenced datasets for the city of fortaleza , brazil .we used the cca , a clustering algorithm , in order to define intracity clusters , _i.e. _ spatially contiguous populated areas within a cutoff distance .the volume of social influence was measured in various localities of the city based on the presence of residents and the flow of passers - by through the census tracts . unlike intercity studies , where social influence was measured only by the presence of residents , we propose that , within a city , urban mobility should be considered to understand the dynamics of social indicators , such as criminal activity .our results show that the incidence of property crimes grows superlinearly as a power law with the floating population , with allometric exponent .therefore , the increase of the flow of people in a region of the city leads to a disproportionally higher number of property this type of crime .our results are in agreement with the routine activities hypothesis , which states that a crime occurs by the convergence of the routines of three agents , specifically : the presence of a motivated offender , the presence of an unprotected victim , and the absence of a guardian able to prevent the transgression . 
in other words , the regions with higher incidence of floating population potentiate the meeting of the routines of these three agents .this result is in clear contrast with the incidence of crimes related with peace disturbance , where an allometric relation can also be detected , but with the resident population ( pop ) instead .we hope the our results could shed some light on the understanding of crimes inside urban areas , as well as assist eventual violence mitigation policies .the findings described in this paper bring alternatives to implementing innovative practices to decision makers within cities .the most obvious of these relates to the fact that , by showing the correlation of different types of crimes with the home population but also with the floating population , it is also clear that the police force allocation strategies should be implemented via the analysis of different cluster configurations that depend on the type of crime .for example , the allocation of community policing , more appropriate to resolve conflicts that potentially can emerge from the disturbance of people s peace , must be planned from a cluster configuration and a hot spot analysis that were produced from the perspective of the density of resident population .when it is necessary to establish a policy for allocating a uniformed police in order to mitigate crimes against property , the allocation of the police force must be conducted from an analysis from the movement of people .in addition to these police allocation strategies , the results described herein provide important indicators for the creation of public policies for land use and environmental design in general . work in this line has been developed such as in ref . where a framework has been proposed to associate the physical spaces and the feeling of safety as well as , who launched the environmental criminology putting focus of criminological study on environmental or context factors that can influence criminal activity .these include space ( geography ) , time , law , offender , and target or victim .these five components are a necessary and sufficient condition , for without one , the other four , even together , will not constitute a criminal incident .the discovery demonstrated in the article that there is a superlinear relationship between crime and population ( resident or floating ) in clusters within cities strengthens the claim that changes in urban form can lead to reduced crime as discussed in ref .the urban mobility system of a large city is composed of several interconnected networks , such as subway , bus , bicycle , taxicab , and private vehicle networks .buses are the main means of transportation for the most inhabitants in the city of fortaleza , being used by about 700,000 people daily .taking this fact into account , we assumed that the urban mobility within the city can be represented by the use of the bus system .thus , the trajectories of bus users will be used to infer the floating population at the different points of the city . in order to understand the people flow throughout the city, we used four spatio - temporal datasets related to fortaleza s bus network , which are : bus stops ; bus lines ; gps tracking of vehicles ; and ticket card validation .all datasets refer to a normal business day , in fact , the march 11th , 2015 - a wednesday . in total , fortaleza has 4,783 bus stops served by 2,034 buses along 359 different routes . 
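As detailed in the following paragraphs, the floating population of a census tract is obtained by summing, over the bus stops inside the tract, the number of users whose reconstructed trajectories pass through each stop. A minimal sketch of that aggregation is given below; it assumes the per-user stop sequences have already been reconstructed from the ticket validations and bus routes, and counting each user once per stop is an assumption made here for illustration.

```python
from collections import defaultdict

def floating_population(user_stop_sequences, stop_to_tract):
    """Toy aggregation of the floating population (FLO) per census tract.

    user_stop_sequences : iterable of per-user lists of bus stops visited along
                          the composed routes between the estimated origin and
                          destination of that user
    stop_to_tract       : dict mapping a bus-stop id to the census tract it lies in
    """
    stop_weight = defaultdict(int)           # w(v): number of users passing through stop v
    for stops in user_stop_sequences:
        for v in set(stops):                 # count each user at most once per stop
            stop_weight[v] += 1

    flo = defaultdict(int)                   # FLO of a tract = sum of w(v) over its stops
    for v, w in stop_weight.items():
        flo[stop_to_tract[v]] += w
    return dict(flo)
```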
the integrated transportation model adopted by fortaleza city hall , called _ bilhete nico _ , allows the registred users to make a bus transfer anywhere in the city , as long as it is within two hours since the last validation of their ticket card .the validation process is understood as the act of the user swiping his / her ticket card at the turnstile on the bus or at the bus terminal .usually , such procedure happens at the beginning of the trip , since the turnstile is close to the bus entrance in fortaleza . in this context, we are able to define the origin - destination matrix ( odm ) for fortaleza s bus network through the following hypoteses : we can assume that an user s origin point could be represented by the earliest of all first daily ticket validations in the interval of two weeks before march 11th , as well as an user s destination point could also be represented by the earliest of all last daily ticket validations in the interval of one month before march 11th , bearing in mind that an user could have different destination points along the week , _i.e. _ we have to analyze mondays with mondays , tuesdays with tuesdays , and so on .hence , we estimated the origin - destination pair for about 40% of the bus users representing the overall behavior of the urban mobility in fortaleza .finally , we supposed that the trajectories of bus users are defined by the composition of routes of the buses took by them between their origin - destination pair . in this context , we describe the trajectories of bus users as a directed graph , where and are the set of vertices and edges , respectively .an edge between the vertices and is defined by the ordered pair . in our approach ,the vertices represent bus stops and the edges represent the demand of bus users between two consecutive bus stops . for each vertice , we defined a weighted function as the sum of the users passing by .thus , we calculated the floating population as the sum of all within each census tracts .in the main text , we observed the emergence of a trend toward an isometric relation between the populations ( pop and flo ) of the cca clusters and their respective areas ( are ) .the fig [ s2_appendix_fig2 ] shows the closest relations to an isometric behavior for both cases .the cca parameters used to define the clusters were and for the pop case , illustrated in fig [ s2_appendix_fig2]a , and and for the flo case ( fig [ s2_appendix_fig2]b ) . ) . *( a ) the correlation between pop and the area of the clusters ( are ) calculated using city clustering algorithm ( cca ) where residents per and meters ( m ) .( b ) the correlation between flo and are calculated using the cca for floating people per and m. the red line shows the ordinary least square ( ols ) regression applied to the logarithm of the data , and the blue continuous line indicates the nadaraya - watson kernel regression . finally , the blue dashed lines delimit the 95% confidence interval estimated by 500 random bootstrapping samples with replacement . the is defined as the determination coefficient .,scaledwidth=100.0% ] r. guedes , v. furtado , and t. pequeno , `` multiagent models for police resource allocation and dispatch , '' in _ intelligence and security informatics conference ( jisic ) , 2014 ieee joint _ , pp . 288291 , ieee , 2014 . l. g. alves , h. v. ribeiro , e. k. lenzi , and r. s. mendes , `` distance to the scaling law : a useful approach for unveiling relationships between crime and urban metrics , '' _ plos one _ , vol . 8 , no . 8 , p. 
e69580 , 2013 .l. m. bettencourt , j. lobo , d. helbing , c. khnert , and g. b. west , `` growth , innovation , scaling , and the pace of life in cities , '' _ proceedings of the national academy of sciences _ , vol .104 , no .17 , pp . 73017306 , 2007 .h. d. rozenfeld , d. rybski , j. s. andrade , m. batty , h. e. stanley , and h. a. makse , `` laws of population growth , '' _ proceedings of the national academy of sciences _ , vol .105 , no .48 , pp . 1870218707 , 2008 .h. d. rozenfeld , d. rybski , x. gabaix , and h. a. makse , `` the area and population of cities : new insights from a different perspective on cities , '' _ the american economic review _ , vol .101 , no . 5 , pp .22052225 , 2011 .`` coordenadoria integrada de operaes de segurana ( ciops ) . ''available : http://dados.fortaleza.ce.gov.br/dataset/8e995f96-423c-41f3-ba33-9ffe94aec2a8/resource/de4e876a-ee24-4d6e-9722-db9dc454bbe6/download/policecalls.csv . accessed : 2016 - 10 - 06 .j. b. kinney , p. l. brantingham , k. wuschke , m. g. kirk , and p. j. brantingham , `` crime attractors , generators and detractors : land use and urban crime opportunities , '' _ built environment _ , vol .34 , no . 1 ,pp . 6274 , 2008 .c. caminha , v. furtado , v. pinheiro , and c. ponte , `` micro - interventions in urban transportation from pattern discovery on the flow of passengers and on the bus network , '' in _ smart cities conference ( isc2 ) , 2016 ieee international _ , pp .16 , ieee , 2016 .
We investigate, at the subscale of the neighborhoods of a highly populated city, the incidence of property crimes in terms of both the resident and the floating population. Our results show that a relevant allometric relation could only be observed between property crimes and the floating population. More precisely, the evidence of superlinear behavior indicates that a disproportionately large number of property crimes occurs in regions of the city where an increased flow of people takes place. For comparison, we also found that the number of peace-disturbance crimes correlates well, and in a superlinear fashion too, only with the resident population. Our study raises the interesting possibility that the superlinearity observed in previous studies [Bettencourt et al., Proc. Natl. Acad. Sci. USA 104, 7301 (2007) and Melo et al., Sci. Rep. 4, 6239 (2014)] for homicides versus population at the city scale could originate in the fact that the floating population, and not the resident one, should be taken as the relevant variable determining the intrinsic microdynamical behavior of the system.
in this paper , we outline a framework for modelling a proximity network from transportation data , and propose an analytical method to estimate the infection probability of each individual from stochastic simulations .the prevalence , i.e. the average number of recovered agents who were once infected before the spread of the epidemic subsides , is of particular interest .we can construct an individual - based network using a dataset of human mobility .the _ people flow data _ record the one - day movement of individuals living in the kanto region of japan , which includes the tokyo metropolitan area .details of the dataset are given in the methods section .we randomly chose the mobility track of individuals from this dataset .time - dependent proximity networks were then constructed by connecting individuals when they came within a certain geographical proximity , defined by a distance threshold ( fig .[ fig : map ] ) .let us denote the adjacency matrix of the proximity network as .all results in the main text were obtained with m and . note that we observed the percolation - like transition and the scaling relation of the giant cluster component size ratio as . using this scaling relation , m with corresponds to m with the real population in the corresponding region , where .a detailed discussion can be found in supplementary information si1 .we simulated the disease propagation using the agent - based sir model on the above - mentioned network , which assigns susceptible ( s ) , infected ( i ) , or recovered ( r ) states to each agent .we assumed that each agent repeats the same trip pattern every day in the same way as recorded in the people flow data . when a susceptible agent is connected with an infected one , the formeris infected with probability for a time interval .if a susceptible agent is connected with infected agents , they are infected with probability .further details of the stochastic simulation are given in the methods section .in this study , is sufficiently small which enables us to perform the simulation with the discretised time step .an infected agent recovers with probability over this time interval .once , , and are given , stochastic simulations can be conducted to determine the epidemic dynamics .we denote the probabilities that agent is in state s , i , or r at time as , , and , respectively ; the equality holds .we define the infection probability of agent as , because a recovered agent has experienced the infected state before recovering .the epidemic dynamics can be characterized by three different stages : the initial stage , in which stochastic fluctuations are dominant , the exponential growth stage , and the final stage , where nonlinearity suppresses the further spread of the disease .the latter two stages representing the dynamics of the outbreak are of particular interest . 
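Before turning to the deterministic approximation of these stages, a minimal sketch of one discretised step of the stochastic simulation described above may be useful. The infection probabilities elided in the extracted text are assumed here to take the usual independent-contact form 1-(1-βΔt)^k for a susceptible agent with k infected neighbours, and recovery is assumed to occur with probability μΔt per step; these are assumptions, not statements from the source, and all names are illustrative.

```python
import numpy as np

def sir_step(state, contacts_t, beta_dt, mu_dt, rng):
    """One discretised step of the agent-based SIR model on a temporal proximity network.

    state      : (n,) array with entries 'S', 'I' or 'R'
    contacts_t : adjacency matrix A(t) of the proximity network at this step (0/1 entries)
    """
    infected = (state == 'I')
    susceptible = (state == 'S')

    # number of infected contacts of every agent at this time step
    k = np.asarray(contacts_t @ infected.astype(float)).ravel()

    p_inf = 1.0 - (1.0 - beta_dt) ** k                    # assumed independent-contact form
    new_inf = susceptible & (rng.random(state.size) < p_inf)
    new_rec = infected & (rng.random(state.size) < mu_dt)

    state = state.copy()
    state[new_inf] = 'I'
    state[new_rec] = 'R'
    return state
```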
in these stages ,the time evolution of the epidemic can be approximated by the following deterministic differential equations : here , we approximate the time - dependent adjacency matrix with its time averaged form .this averaging gives a sufficient approximation of the time evolution of , , and if the parameters and are sufficiently small , as discussed in supplementary information si2 .we also assume that node being susceptible and node being infected are statistically independent events .although this assumption does not hold for small networks , we have verified that the numerical solution of equation ( [ eq : mf ] ) gives a good approximation of the dynamics if is much greater than the percolation transition point and the giant cluster component is sufficiently large .one of the most common methods used to analyse the contagion processes within networks is hmf .this approach assumes that , in a statistical sense , nodes grouped by the degrees behave in the same way . under this assumption, one can derive the equation for the infection probability of each agent , and , in the absence of degree correlation , the epidemic threshold , i.e. the critical value of the epidemic s spread , can be represented as a function of the first and second moments of the degree distribution .one question that persists is whether such an approximation is valid in realistic networks of human contact . actually , the accuracy of hmf depends on factors such as the mean degree and first - neighbour degree . for the simplest case of the absence of degree correlation , where hmf is often applied ,the component of the first eigenvector is proportional to the degree .however , this assumption does not hold for the proximity network ( fig .[ fig : series]a ) .this highlights the need to develop an analytical framework to identify people with a high infection probability along with the overall infection propagation pattern . in this study , we have developed a method based on spectral decomposition and mode truncation which demonstrates that a small number of dominant modes , which may not necessarily have the largest eigenvalues , give a higher contribution in the prevalence of epidemics .in this section , we present a method for analysing the final epidemic size based on the spectral analysis of the averaged matrix , and elucidate the relevant dynamics of the epidemic s spread .the node - wise probabilities , , and can be expanded as where denotes the eigenvector of associated with the eigenvalue .all eigenvalues of a real symmetric matrix are real , and can be labeled in descending order as . the normalization condition is adopted . since is symmetric ,the left and right eigenvectors coincide , and the expansion coefficients can be obtained with , , and , respectively .equation ( [ eq : mf ] ) can be rewritten in terms of these expansion coefficients ( supplementary information si3 ) .the time evolution of the average number of infected and recovered agents , and , is plotted in fig . [fig : series]b .the time series of is converted to that of as in fig .[ fig : series]c . in the exponential stage of the epidemicspreading , , , and are small .the exponential growth of each coefficient can be described by the following linearised equation ( see supplementary information si4 for the derivation of this expression from the nonlinear mode coupling equation ) .hence , holds .it is evident from this equation that a mode with a larger eigenvalue grows faster in this stage . 
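A numerical sketch of this deterministic approximation, which also exposes the spectral quantities used in the following paragraphs, is given below: the adjacency matrices are time-averaged, the spectrum of the averaged matrix is computed, and the node-level equations are integrated. The elided mean-field equations are assumed to take the standard individual-based SIR form written in the docstring, and the threshold estimate is the usual quenched mean-field result in terms of the largest eigenvalue; this is a hedged reconstruction, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mean_field_sir(A_list, beta, mu, x0, t_span):
    """Node-level mean-field SIR on the time-averaged network (hedged sketch).

    Assumed form of the elided equations:
        ds_i/dt = -beta * s_i * sum_j Abar_ij * x_j
        dx_i/dt =  beta * s_i * sum_j Abar_ij * x_j - mu * x_i,   r_i = 1 - s_i - x_i
    """
    A_bar = sum(A_list) / len(A_list)          # time-averaged adjacency matrix
    lam, phi = np.linalg.eigh(A_bar)           # ascending eigenvalues, orthonormal eigenvectors
    threshold = 1.0 / lam[-1]                  # quenched mean-field estimate: outbreak if beta/mu > threshold

    n = A_bar.shape[0]
    def rhs(_, y):
        s, x = y[:n], y[n:]
        force = A_bar @ x                      # infection pressure on each node
        return np.concatenate([-beta * s * force, beta * s * force - mu * x])

    y0 = np.concatenate([1.0 - x0, x0])        # initially nobody has recovered
    sol = solve_ivp(rhs, t_span, y0)
    s_T, x_T = sol.y[:n, -1], sol.y[n:, -1]
    r_T = 1.0 - s_T - x_T                      # infection probability of every agent
    return lam, phi, threshold, r_T
```

The largest eigenvalue obtained here is the quantity that, as noted next, characterises the exponential growth rate of the number of infected agents.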
in particular, the exponential growth rate of the number of infected agents is characterised by .the stochastic simulation of the agent - based model verifies that is indeed dominant in the initial stage ( fig .[ fig : series]c ) .in addition , the epidemic threshold is related to as ( fig .[ fig : series]d ) .however , as shown in fig .[ fig : series]c , the coefficients ( ) are comparable to in the later stages of the epidemic , where is the node - wise average of a vector .the importance of each mode needs to be assessed to ascertain its dominance in the spreading mechanism .this can be achieved by introducing a quantifier of the _ contribution _ of each mode to the prevalence as the definition of this contribution enables us to account for the prevalence as the sum of the contributions of all modes , .it is worth introducing the contribution for the case where all agents are infected , i.e. for all .since the coefficients satisfy in this case , one can easily verify that holds .interestingly , the contribution does not monotonically increase with the eigenvalue , although a positive correlation can be observed ( fig .[ fig : final_size]a ) .since and are functions of the average of the eigenvector components , a mode associated with a smaller eigenvalue may contribute more than one with a larger eigenvalue .although the larger eigenvalue denotes faster growth in the exponential regime , it does not necessarily mean a larger contribution to the prevalence .it is interesting to note that the `` hidden '' important structures , corresponding to modes that make a large contribution to the prevalence but have smaller eigenvalues , are unravelled through the spectral analysis .it is still possible to detect such modes when the prevalence is much smaller than the all - infected case ( fig .[ fig : final_size]b ) .the heterogeneity in the contribution is evident from the cumulative contribution and ( fig .[ fig : final_size]c ) . here ,the modes are sorted such that , where denotes the permutation of the index set .note that holds , because the infection pattern is represented using all modes .figure [ fig : final_size]c suggests for the all - infected case , indicating that approximately 90% of the prevalence is described by the top 10% of modes for the present proximity network .this figure indicates that the number of modes needed to describe the prevalence decreases if the prevalence is smaller than .the above observations led us to the idea of describing the epidemic dynamics with a small number of dominant modes that give a high contribution . in the case of the agent - based sir dynamics, we can derive the truncated equation for the contributions at the final time with ( ) modes , , as \frac{\phi_j^{(\sigma(b ) ) } } { \langle \phi^{\sigma ( b ) } \rangle } \right\ } , \label{eq : self - conf_eig } \end{gathered}\ ] ] where .this equation is derived in the methods section .this transcendental equation approximates the epidemic dynamics with fewer effective degrees of freedom .if we use all the modes , i.e. 
, the solution of equation ( [ eq : self - conf_eig ] ) coincides with the solutions of equation ( [ eq : mf ] ) for .furthermore , the solution of this equation computed with newton s method provides a good description of the contribution obtained by the stochastic simulation ( fig .[ fig : final_size]d ) .we further note that considering fewer than modes suffices to approximate the prevalence as obtained by the stochastic simulation ( figs .[ fig : series]d , [ fig : final_size]c ) . hence, this procedure replaces the need to solve an ordinary differential equation in independent variables ( [ eq : mf ] ) with the requirement to use information from the dominant modes . as discussed above, the essence of theoretical analyses of epidemic spreading within networks is to infer the dynamical information from the network topology .conventionally , hmf uses degree distributions and degree correlations as topological information .in contrast , our approach uses spectral information , which can take more general properties of the network topology into account . as described in the methods , the final size equation in hmf derived from equation ( [ eq : self - conf_eig ] ) under the following simple assumptions : ( i ) degree correlation is absent , and ( ii ) the first mode almost perfectly governs the dynamics . as shown in fig .[ fig : series]a , the first assumption is not satisfied in the proximity network .moreover , it is evident from figs .[ fig : series]d and [ fig : final_size]a that considering only the first mode is not sufficient to account for the whole epidemic dynamics , as is less than 6% .the contribution of other dominant modes must be taken into account to quantify the epidemic dynamics ( fig .[ fig : final_size]d ) .thus , hmf with no degree correlation is not a sufficient approximation for the proximity network .it would be possible to improve the approximation by considering the degree correlation in hmf , but it is generally difficult to include information about the second - nearest neighbours .in contrast , the proposed approach provides the flexibility to improve the approximation by using an arbitrary number of dominant modes .another feature of epidemic spreading in proximity networks is the strong heterogeneity of the infection risk in space and time .it is important to identify such locations that have a high probability of infection , namely `` hot spots '' , for the mitigation and management of the epidemic .the spatial distribution of the hot spots can be mapped to provide an insight into the nature of epidemic spreading in urban areas ( fig .[ fig : risk ] ) .more importantly , to uncover the relevant epidemic dynamics , such identification should be conducted with fewer parameters than the number of agents . 
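The contribution of each mode to the prevalence can be computed directly from the final infection probabilities. The sketch below uses the reconstruction c^(a) = ⟨φ^(a)⟩ (φ^(a)·r), which is consistent with the statements above (the contributions sum to the prevalence, and to one in the all-infected case); the precise elided definition may differ, so this form is an assumption.

```python
import numpy as np

def mode_contributions(A_bar, r_final):
    """Contribution of each eigenmode of the time-averaged adjacency matrix to the prevalence."""
    lam, phi = np.linalg.eigh(A_bar)           # columns of phi are orthonormal eigenvectors
    coeff = phi.T @ r_final                     # expansion coefficients r^(a)
    contrib = phi.mean(axis=0) * coeff          # c^(a) = <phi^(a)> * r^(a)
    assert np.isclose(contrib.sum(), r_final.mean())   # contributions sum to the prevalence

    order = np.argsort(contrib)[::-1]           # sort by contribution, not by eigenvalue
    cumulative = np.cumsum(contrib[order]) / r_final.mean()
    return lam[order], contrib[order], cumulative

# cumulative[k] gives the fraction of the prevalence explained by the top (k+1) modes;
# in the article roughly the top 10% of modes describe about 90% of the all-infected case.
```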
for the present model to be predictive ,let us infer the infection probability of a test agent from the mobility pattern of other agents .the probability of infection of this agent depends on the presence of other infectious agents in the vicinity , which in turn varies from region to region and at different times of day , .let be a position vector and be the position of agent at time .we introduce a kernel function signifying the presence or absence of any interaction between two agents based on the distance between them .since agents are connected within the threshold , the heaviside step function is adopted as the kernel function .this allows us to define a spatio - temporal risk factor of infection as which can be interpreted as the weighted sum of probabilities . integrating along the trajectory of the test agent ,one can obtain the infection probability of this agent .details of the derivation are discussed in supplementary information si5 .figure [ fig : risk]a depicts the spatial distribution of the risk factor across the kanto region of japan . in the present model , which assumes homogeneous parameter values of and , the high - risk area duringthe daytime is the central business district , where many workers are located . in the nighttime ,certain suburban areas from where people commute to the centre become high risk .furthermore , people travelling to the central business district from their suburban residences allow the disease to spread over the whole metropolitan area .figure [ fig : risk]c shows that the infection probability of the test agents can be estimated from the integration of , obtained from stochastic simulation , along its trajectory .instead of counting the infection probability in the neighbourhood of the region of interest , the spectral properties of the averaged adjacency matrix can be incorporated into the risk factor by choosing the first few high - contribution modes .namely , one can approximate and as and using modes .hazard maps based on the mode truncation at different times are similar to those given by the stochastic simulation ( fig .[ fig : risk]a ) . moreover , fig .[ fig : risk]d shows that the integration of along the path of the test agents predicts their infection probability .thus , we conclude that mode truncation enables us to describe the spatio - temporal infection risk with a small number of modes . in other words , few modes that have high contributionsdominate the spread of the epidemic .data on human mobility , particularly contact information of urban residents , play a key role in the analysis of an epidemic spreading . such high - resolution data allow detailed individual - level modelling , rather than coarse - grained metapopulation studies . at the same time, novel theoretical tools can be used to uncover the dominant dynamics at work in data - driven models without the need for oversimplification .linear analysis characterises the exponential growth in the initial stage ; nonetheless , this study clarified that the final size of the epidemic is provided by the analysis of various modes .the mode truncation described here revealed that the prevalence of epidemics is actually dominated by a small number of modes as compared to the number of agents , i.e. 
the effective number of degrees of freedom is much smaller than the system size .the proposed method practically limits the expanse of mitigation policies such as targeted vaccination , and provides specific alerts by following the spreading trail of infection .this also quantifies the spatio - temporal risk associated with empirical travel patterns .the analytical framework and realistic mobility data presented here provide a promising starting point for the study and possible confrontation of epidemics in the real world .future studies should examine ways to facilitate the rapid decay of dominant modes using dynamic intervention strategies .people flow data were generated based on person trip surveys made by japan s ministry of land , infrastructure , transport , and tourism . the surveys were based on questionnaires that asked about basic individual information such as gender , age , occupation , places visited , and trip modes on one day . for the sake of privacy ,spatial information was recorded with a resolution such that the complete location could not be determined .the questionnaires allowed us to access advanced information such as transportation modes , travel times , transfer points , and trip purposes .traces of the trips between two places reported in the raw data were interpolated using geographical information systems methods .the proximity network was constructed by placing a link between two individuals at time if they were within the threshold distance of one another ( fig . [ fig : map]b ) .we added another constraint by only placing a link between individuals using the same transportation mode ; it is impractical to connect individuals walking along the road with those traveling in an automobile or train because of the improbability of the transmission of infection .the connectivity of the network can be described by the time - dependent adjacency matrix , where if individuals and are connected at time and otherwise .this allowed us to obtain networks for agents at every time interval .proximity networks are spatially embedded temporal networks , which appear in various forms of transportation and infrastructure networks .the agent - based sir model simulation was performed with the same time - dependent adjacency matrix .the time evolution of the probabilities , , and was computed using 500 independent stochastic simulation runs with different random initial conditions .we also assumed that the human mobility patterns had a periodicity of 24 h owing to the circadian nature and economic situation of people going to their place of work and back home .parameter values were taken as , , and , except for fig .[ fig : series]d , in which varies .one of the main results of this article , the equation for the asymptotic solution of equation ( [ eq : self - conf_eig ] ) , is derived as follows .substituting the expansion ( [ eq : expansion ] ) into equation ( [ eq : mf ] ) , we obtain let us define \phi_j^{(a ) } , \label{eq : psi - r}\end{aligned}\ ] ] so that the solution of this equation is given as or equivalently , for , should vanish , and the conservation of probability condition is given as .this is equivalent to substituting equation ( [ eq : s_evo ] ) into this expression , we obtain \phi_j^{(b ) } \right\}. 
\label{eq : self - conf_eig_orig } \end{aligned}\ ] ] this is a transcendental equation for , and one can determine the infection probability by solving this equation instead of solving the differential equation .we choose ( ) modes and neglect the other modes to obtain \phi_j^{(\sigma(b ) ) } \right\ } , \label{eq : self - conf_eig_orig2 } \end{aligned}\ ] ] where and denote arbitrary permutations of modes .finally , we multiply both sides by , and obtain equation ( [ eq : self - conf_eig ] ) by choosing the top modes with respect to their contribution .the relationship between the equation for the expansion coefficients ( [ eq : self - conf_eig_orig ] ) and that obtained with hmf can be described as follows .we concentrate on the simplest case , where the probability that a node with degree and one with degree are connected is proportional to the product of their degrees , i.e. , where denotes the degree distribution of the network satisfying . in this case ,the largest eigenvalue of the adjacency matrix is , and the component of the corresponding normalised eigenvector is . here, we denote the degree of node as .we assume that the probability of nodes with the same degree is identical , i.e. holds . substituting the expression for the first eigenvector into equation ( [ eq : self - conf_eig_orig ] ) and neglecting other modes with the initial condition , we get , \label{eq : final_dd}\end{aligned}\ ] ] where the sum in the second equation is taken for degree .note that holds , where we have defined substituting this equation into equation ( [ eq : final_dd ] ) and taking the limit , we obtain this is the equation for in hmf derived in . 50 peiris , j. s. m. _ et al . _ .17671772 ( 2003 ) .fraser , c. _ et al ._ pandemic potential of a strain of influenza a ( h1n1 ) : early findings .15571561 ( 2009 ) .gire , s. k. _ et al ._ genomic surveillance elucidates ebola virus origin and transmission during the 2014 outbreak .13691372 ( 2014 ) .eubank , s. _ et al ._ modelling disease outbreaks in realistic urban social networks . 180184 ( 2004 ) .saito , m. m. _ et al ._ enhancement of collective immunity in tokyo metropolitan area by selective vaccination against an emerging influenza pandemic .e72866 ( 2013 ) .ferguson , n. m. _ et al ._ strategies for containing an emerging influenza pandemic in southeast asia. 209214 ( 2005 ) .ajelli , m. & merler , s. an individual - based model of hepatitis a transmission . 478488 ( 2009 ) .merler , s. & ajelli , m. the role of population heterogeneity and human mobility in the spread of pandemic influenza . 557565 ( 2010 ) .ferguson , n. m. _ et al ._ strategies for mitigating an influenza pandemic .448452 ( 2006 ) .halloran , m. e. _ et al ._ modeling targeted layered containment of an influenza pandemic in the united states .46394644 ( 2008 ) .wang , b. , cao , l. , suzuki , h. & aihara , k. epidemic spread in adaptive networks with multitype agents .035101 ( 2011 ) .colizza , v. , pastor - satorras , r. , & vespignani , a. reaction diffusion processes and metapopulation models in heterogeneous networks . , * 3 , * 276282 ( 2007 ) .balcan , d. & vespignani , a. phase transitions in contagion processes mediated by recurrent mobility patterns . 581586 ( 2011 ) .poletto , c. , tizzoni , m. , & colizza , v. heterogeneous length of stay of hosts movements and spatial epidemic spread . 476( 2012 ) .colizza , v. , barrat , a. , barthlemy , m. & vespignani , a. 
the role of the airline transportation network in the prediction and predictability of global epidemics .20152020 ( 2006 ) .colizza , v. , barrat , a. , barthelemy , m. , valleron , a .-j . & vespignani .a. modeling the worldwide spread of pandemic influenza : baseline case and containment interventions . e13 ( 2007 ) .balcan , d. _ et al . _ multiscale mobility networks and the spatial spreading of infectious diseases .2148421489 ( 2009 ) .sattenspiel , l. & dietz , k. a structured epidemic model incorporating geographic mobility among regions . 7191 ( 1995 ) ., a. epidemic outbreaks on structured populations .125129 ( 2007 ) .watts , d. j. , muhamad , r. , medina , d. c. & dodds , p. s. multiscale , resurgent epidemics in a hierarchical metapopulation model . 1115711162 ( 2005 ) .rohani , p. , earn , d. j. d. & grenfell , b. t. opposite patterns of synchrony in sympatric disease metapopulations .968971 ( 1999 ) .rvachev , l. a. & longini jr , i. m. a mathematical model for the global spread of influenza . 322 ( 1985 ) .riley , s. large - scale spatial - transmission models of infectious disease .12981301 ( 2007 ) .grais , r. f. , ellis , j. h. & glass , g. e. assessing the impact of airline travel on the geographic spread of pandemic influenza . 10651072 ( 2003 ) .tellier , r. aerosol transmission of influenza a virus : a review of new studies .s783s790 ( 2009 ) .onnela , j. p. _ et al ._ 73327336 ( 2007 ) .cattuto , c. _ et al . _ e11596 ( 2010 ) .sekimoto , y. , shibasaki , r. , kanasugi , h. , usui , t. & shimazaki , y. pflow : reconstructing people flow recycling large - scale social survey data . 2735 (2011 ) .masuda , n. & holme , p. predicting and controlling infectious disease epidemics using temporal networks . 6 ( 2013 ) .danon , l. _ et al . _ networks and the epidemiology of infectious disease . 284909 ( 2011 ) .pastor - satorras , r. , castellano , c. , van mieghem , p. & vespignani , a. epidemic processes in complex networks . 925979 ( 2015 ) .kitsak , m. _ et al . _ .888893 ( 2010 ) .brockmann , d. & helbing , d. 13371342 ( 2013 ) .salath , m. _ et al ._ a high - resolution human contact network for infectious disease transmission .2202022025 ( 2010 ) .kitchovitch , s. & li , p. risk perception and disease spread on social networks . 23452354 ( 2010 ) .pastor - satorras , r. & vespignani , a. epidemic spreading in scale - free networks .32003203 ( 2001 ) .moreno , y. , pastor - satorras , r. & vespignani , a. epidemic outbreaks in complex heterogeneous networks .521529 ( 2002 ) .bogun , m. , pastor - satorras , r. & vespignani , a. epidemic spreading in complex networks with degree correlations . in _ statistical mechanics of complex networks notes in physics , springer ,berlin _ , 2003 .yang , r. _ et al ._ epidemic spreading on heterogeneous networks with identical infectivity .189193 ( 2007 ) . keeling , m. j. the effects of local spatial structure on epidemiological invasions .859867 ( 1999 ) .eames , k. t. d. & keeling , m. j. modeling dynamic and network heterogeneities in the spread of sexually transmitted diseases .1333013335 ( 2002 ) .boots , m. & sasaki , a. parasite - driven extinction in spatially explicit host - parasite systems . 706713 ( 2002 ) .youssef , m. & scoglio , c. an individual - based approach to sir epidemics in contact networks .136 144 ( 2011 ) .barrat , a. , barthlemy , m. & vespignani , a. ( cambridge univ .press , 2008 ) .stehl , j. , _et al . 
_simulation of an seir infectious disease model on the dynamic contact network of conference attendees .87 ( 2011 ) .newman , m. ( oxford univ .press , 2010 ) .gleeson , j. p. , melnik , s. , ward , j. a. , porter , m. a. & mucha , p. j. accuracy of mean - field theory for dynamics on real - world networks .026106 ( 2012 ) .wang , y. , chakrabarti , d. , wang , c. & faloutsos , c. epidemic spreading in real networks : an eigenvalue viewpoint . in _ proc .22nd international symposium on reliable distributed systems _ 2534 ( 2003 ) .barthlemy , m. spatial networks. 1 101 ( 2011 ) .n.f . is grateful for the stimulating discussions with t. takaguchi , t. aoki , y. sughiyama , and y. yazaki .this research is supported by the aihara project , the first program from jsps , initiated by cstp , and by crest , jst .this research is the result of the joint research with center for spatial information science , the university of tokyo ( no . 315 ) .is supported by jsps kakenhi grant number 15k16061 .all authors designed the research .n.f . and a.r.s .carried out the numerical simulations .n.f . , a.r.s , and k.i .analysed the results ., k.i . , and k.a .contributed to the analytical calculations .n.f . and a.r.swrote the bulk of the manuscript .k.i . and k.a .edited the manuscript .the authors declare no competing financial interests .the data size of the people flow data of is much smaller than the population of the corresponding tokyo metropolitan area , .therefore , it is natural to ask whether it makes sense to consider a model with a small number of agents . a clue to answering this question is to use scaling .we found that this system exhibits a percolation - like transition : the order parameter of the giant cluster component size ratio has a second - order phase transition ( fig .[ fig : gcc ] left ) .although it is unclear whether the percolation - like transition takes place for large ( because of the spatial resolution of the data for privacy reasons ) , we found that obeys the scaling law ( fig .[ fig : gcc ] right ) .far from the percolation transition point , holds , where is a scaling function .if we assume that this scaling holds for the size of the real population , the parameter values , m used in the simulations in the main text correspond to m in the real population .scaling of the giant cluster component size ratio for different .the right - hand figure shows that this is a function of the scaled parameter ., scaledwidth=95.0% ] scaling of the giant cluster component size ratio for different .the right - hand figure shows that this is a function of the scaled parameter ., scaledwidth=95.0% ]we discuss an approximation to replace the time - dependent adjacency matrix with the time - averaged one , and determine the condition under which this approximation is valid .let us consider a general example where a dynamical variable evolves via contact with other nodes as where the parameter represents the coupling strength and is the instantaneous adjacency matrix of a temporal network at time .equation ( [ eq : matrix_ode ] ) can be solved as \bm x(0 ) , \label{eq : tprod}\end{aligned}\ ] ] where denotes the time - ordered exponential .if changes with the time interval , equation ( [ eq : tprod ] ) can be rewritten as where is the time - averaged adjacency matrix .the above equation can be interpreted as follows .the terms of in the power series expansion in equation ( [ eq : tprod ] ) consist of time - ordered products of adjacency matrices of the temporal network .these terms collect 
paths of length , namely all possible indirect interactions occurring times between two nodes at different times taking into account the time order of the interactions .if we consider the case of a small value of , the time evolution can be approximated up to the first order .this corresponds to assuming that the interaction only takes place by direct contact between agents , and all multi - step interactions during the time interval are negligible . in such a case ,the time - averaged network is sufficient to describe the dynamical processes in the temporal network .note that the time - order information from one day does not appear in the first - order terms .one can derive the mode coupling equation from the following differential equations ( 1 ) in the main text : let us substitute equation ( 2 ) in the main text , into this equation .taking into account the eigenvalue equation for the eigenvector , it is easy to verify that & = - \beta \bigg ( \sum_{b=1}^n \hat s_b \phi^{(b)}_j \bigg ) \bigg [ \sum_{k=1}^n \overline{a}_{jk}(t ) \bigg ( \sum_{c=1}^n \hat i_c \phi^{(c)}_k \bigg ) \bigg]\\ & = - \beta \bigg ( \sum_{b=1}^n \sum_{c=1}^n \lambda_c \phi^{(b)}_j \phi^{(c)}_j \hat s_b \hat i_c \bigg),\\ % \frac { d } { dt } \bigg[\sum_{a'=1}^n \hat i_{a ' } \phi^{(a')}_j \bigg ] & = \beta \bigg ( \sum_{b=1}^n \hat s_b \phi^{(b)}_j \bigg ) \bigg [ \sum_{k=1}^n \overline{a}_{jk}(t ) \bigg ( \sum_{c=1}^n \hat i_c \phi^{(c)}_k \bigg ) \bigg ] -\mu \sum_{a'=1}^n \hat i_{a ' } \phi_j^{(a ' ) } \\ & = \beta \bigg ( \sum_{b=1}^n \sum_{c=1}^n \lambda_c \phi^{(b)}_j \phi^{(c)}_j \hat s_b \hat i_c \bigg ) -\mu \sum_{a'=1}^n \hat i_{a ' } \phi_j^{(a ' ) } , \\% \frac{d } { dt } \bigg[\sum_{a'=1}^n \hat r_{a ' } \phi^{(a')}_j \bigg ] & = \mu \sum_{a'=1}^n \hat i_{a ' } \phi_j^{(a ' ) } .\label{eq : mf2 } \end{split}\end{aligned}\ ] ] multiplying on both sides of the above equations and taking the sum over to use the orthonormal condition where is kronecker s delta , we obtain where . note that each element of consists of the product of the eigenvector components and is time - independent .it is easy to extend this equation to the case where the static adjacency matrix is asymmetric .let us define the left and right eigenvectors of for the eigenvalue as and , and set the normalisation condition .then , equation ( [ eq : mf3 ] ) holds if we define . by considering ( ) modes and neglecting the other modes in equation ( [ eq : mf3 ] ), we obtain the transient dynamics of the epidemic from a small number of modes .equations ( [ eq : mf3 ] ) contain mode coupling terms , but the modes are decoupled in the linearised equation .let us take and focus on the initial stage of the epidemic spreading .assume that and are small , and approximate by neglecting the product .since appears only as the product , the linearised approximation allows us to put . under this assumption, we can take substituting this into the evolution equation for in equation ( [ eq : mf3 ] ) , the linearised equation becomes by taking the summation over using equation ( [ eq : approx_mode ] ) , we obtain in this way , we derive equation ( 3 ) in the main text .we now discuss how to estimate the infection probability of a test agent .let be the position of agent at time .since the infection probability depends on the distance between agents , we introduce the kernel function . 
in this study, we used the heaviside step function , where for and otherwise .then the element of the adjacency matrix is given as .we want to estimate the risk of infection of a test agent who travels inside the urban area with an arbitrary trajectory and returns to the original location after time .equation ( 1 ) in the main text can be solved as where . \end{aligned}\ ] ] for , should vanish , and the conservation of probability condition is given as . therefore , the infection probability of this agent is \right\}. \label{eq : final}\end{aligned}\ ] ] this equation is transcendental and can not be solved analytically . however ,if is sufficiently large and the contribution of the test agent to is negligible , we can approximate the infection probability of the test agent using .since the time - averaged adjacency matrix is given in terms of the risk factor as we obtain , \label{eq : final_add}\end{aligned}\ ] ] by taking and .therefore , the final infection probability of a susceptible agent traveling along an arbitrary trajectory is given by the time integral of the risk factor along its trajectory .if the mode truncation with ( ) modes approximates , the risk factor is given by the superposition of independent variables as the infection probability is approximated with the mode truncation , \end{aligned}\ ] ] by replacing with in equation ( [ eq : final_add ] ) .
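To make the agent-based part of the procedure concrete, the following is a minimal sketch of a stochastic SIR simulation driven by a time-dependent adjacency matrix, in the spirit of the Methods described above. All sizes, parameter values, and the synthetic contact matrices are illustrative assumptions; they do not reproduce the people-flow data or the exact simulation protocol of the paper.

```python
import numpy as np

def simulate_sir(adjacency, beta, mu, dt, seed_node, rng):
    """One stochastic SIR run on a temporal contact network.

    adjacency: array of shape (T, N, N); adjacency[t] is the contact
    matrix of time slot t (1 if two agents are in proximity).
    States: 0 = susceptible, 1 = infected, 2 = recovered.
    """
    T, N, _ = adjacency.shape
    state = np.zeros(N, dtype=int)
    state[seed_node] = 1
    history = np.zeros((T, 3))
    for t in range(T):
        infected = (state == 1)
        # number of currently infected neighbours of each agent
        n_inf_neighbours = adjacency[t] @ infected
        # probability that a susceptible agent is infected during this slot
        p_infection = 1.0 - (1.0 - beta * dt) ** n_inf_neighbours
        new_inf = (state == 0) & (rng.random(N) < p_infection)
        new_rec = infected & (rng.random(N) < mu * dt)
        state[new_inf] = 1
        state[new_rec] = 2
        history[t] = [(state == s).sum() / N for s in (0, 1, 2)]
    return history

rng = np.random.default_rng(0)
N, T = 200, 96                                   # toy sizes, not the real data
adjacency = (rng.random((T, N, N)) < 0.01).astype(float)
adjacency = np.maximum(adjacency, adjacency.transpose(0, 2, 1))  # symmetrise contacts
runs = [simulate_sir(adjacency, beta=0.2, mu=0.1, dt=1.0, seed_node=0, rng=rng)
        for _ in range(50)]
mean_sir = np.mean(runs, axis=0)                 # averaged S, I, R fractions per slot
```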
recently developed techniques to acquire high - quality human mobility data allow large - scale simulations of the spread of infectious diseases with high spatial and temporal resolution . analysis of such data has revealed the oversimplification of existing theoretical frameworks to infer the final epidemic size or influential nodes from the network topology . here we propose a spectral decomposition - based framework for the quantitative analysis of epidemic processes on realistic networks of human proximity derived from urban mobility data . common wisdom suggests that modes with larger eigenvalues contribute more to the epidemic dynamics . however , we show that hidden dominant structures , namely modes with smaller eigenvalues but a greater contribution to the epidemic dynamics , exist in the proximity network . this framework provides a basic understanding of the relationship between urban human motion and epidemic dynamics , and will contribute to strategic mitigation policy decisions . epidemics of infectious diseases in the human population , e.g. the sars outbreak in 20022003 , 2009 h1n1 influenza pandemic , and ebola outbreak , can be a serious factor in human mortality , and have a significant socio - economic impact in terms of reducing a population s healthy years because of morbidity . in recent years , the mathematical modelling of the outbreak and spread of infectious diseases has mainly been performed using two approaches : agent - based models and structured metapopulation models . both techniques incorporate real or synthetic data on the long - range mobility and migration of populations , but vary in their granularity of the population , from individuals to sub - groups of society . various spatial scales have been studied , ranging from specific geographic locales such as cities to nations and intra-/inter - continental regions . in particular , it is imperative to study the mechanisms of an epidemic s spread in urban scenarios , considering restrictions on available space and transportation methods ( e.g. traffic fluxes , travel routes , timescales of modes of transport ) , which result in high density and heterogeneous contact patterns among residents . these factors make the populace vulnerable to epidemics of diseases that spread through exhaled aerosols , e.g. . to understand the details of epidemic dynamics within cities , it is crucial to exploit the heterogeneous contact patterns derived from human mobility data . fortunately , recent technical advances in data acquisition methods enable us to incorporate human mobility data obtained from various sources into mathematical models . network epidemiology is a useful means of uncovering the dependence of the contagion dynamics on the heterogeneous contact pattern . the approach of tracking connections between people provides a comprehensive framework with which to study the effects of the underlying network of people in the region of interest ( e.g. cities ) . the present understanding of epidemic or social contagion processes has been bolstered by many network - based studies , which treat cities as social petri - dishes . analytical approaches such as heterogeneous mean field ( hmf ) theory and moment closure approximation provide an important bridge between the network topology and the spread of epidemics in complex networks . for example , the epidemic threshold is absent in scale - free networks with no degree correlation . however , certain simplifications made to obtain analytical results , e.g. 
neglecting the degree correlation in the hmf , could be superfluous , and these approximations may not necessarily hold in realistic networks . therefore , a novel analytical approach is required to better understand the spread of epidemics in heterogeneous populations .
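As a rough illustration of the spectral viewpoint summarised above, the snippet below decomposes a toy time-averaged adjacency matrix into eigenmodes and ranks them by a simple contribution score. The scoring rule used here (overlap with the initial condition weighted by the eigenvalue) is only a plausible proxy for the contribution criterion of the paper, and the network is synthetic.

```python
import numpy as np

def mode_contributions(a_bar, i0):
    """Eigenmodes of the time-averaged adjacency matrix and a simple
    proxy for their contribution to the early epidemic dynamics.

    a_bar : (N, N) symmetric time-averaged adjacency matrix
    i0    : (N,) initial infection probabilities
    """
    eigvals, eigvecs = np.linalg.eigh(a_bar)      # ascending eigenvalues
    proj = eigvecs.T @ i0                         # expansion coefficients of i0
    score = np.abs(proj) * np.abs(eigvals)        # illustrative contribution proxy
    order = np.argsort(score)[::-1]               # dominant modes first
    return eigvals[order], eigvecs[:, order], score[order]

rng = np.random.default_rng(1)
N = 100
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T                    # simple symmetric toy network
i0 = np.zeros(N); i0[0] = 1.0                     # a single initially infected agent
lam, phi, score = mode_contributions(A, i0)
print("top-5 eigenvalues ranked by contribution:", np.round(lam[:5], 3))
```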
considering the state of the art , we propose a new method for multivariate approximation which allows to interpolate large scattered data sets stably , accurately and with a relatively low computational cost .the interpolant we consider is expressed as a linear combination of some basis or kernel functions . focusing on radial basis functions ( rbfs ) ,the partition of unity ( pu ) is performed by blending rbfs as local approximants and using locally supported weight functions . with this approacha large problem is decomposed into many small problems , , and therefore in the approximation process we could work with a large number of nodes .however , in some cases , local approximants and consequently also the global one suffer from instability due to ill - conditioning of the interpolation matrices .this is directly connected to the order of smoothness of the basis function and to the node distribution .it is well - known that the stability depends on the flatness of the rbf .more specifically , if one keeps the number of nodes fixed and considers smooth basis functions , then the problem of instability becomes evident for small values of the shape parameter .of course , a basis function with a finite order of smoothness can be used to improve the conditioning but the accuracy of the fit gets worse .for this reason , the recent research is moved to the study of more stable bases . for particular rbfs , techniques allowing to stably and accurately compute the interpolant , also in the _ flat limit _ , have been designed in the recent years .these algorithms , named rbf - qr methods , are all rooted in a particular decomposition of the kernel , and they have been developed so far to treat the gaussian and the inverse multiquadric kernel .we refer to for further details on these methods .a different and more general approach , consisting in computing , via a truncated singular value decomposition ( svd ) stable bases , namely weighted svd ( wsvd ) bases , has been presented in .we remark that in the cases where the rbf - qr algorithms can be applied , they produce a far more stable solution of the interpolation problem .nevertheless , the present technique applies to _ any _ rbf kernel , and to any domain . in this paper , a stable approach via the pu method , named wsvd - pu , which makes use of local wsvd bases and uses compactly supported weight functions ,is presented .thus , following , for each pu subdomain a stable rbf basis is computed in order to solve the local interpolation problem .consequently , since the local approximation order is preserved for the global fit , the interpolant results more stable and accurate .concerning the stability , we surely expect a more significant improvement in the stabilization process with infinitely smooth functions than with functions characterized by a finite order of regularity . 
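The instability in the flat limit mentioned here can be checked directly by monitoring the condition number of the Gaussian interpolation matrix as the shape parameter decreases. The short experiment below uses an arbitrary set of random nodes, not the configurations considered later in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel_matrix(nodes, eps):
    """Interpolation matrix A_ij = exp(-(eps * ||x_i - x_j||)^2)."""
    r = cdist(nodes, nodes)
    return np.exp(-(eps * r) ** 2)

rng = np.random.default_rng(0)
nodes = rng.random((80, 2))            # 80 scattered nodes in the unit square
for eps in (8.0, 2.0, 0.5, 0.1):       # flatter and flatter kernels
    A = gaussian_kernel_matrix(nodes, eps)
    print(f"eps = {eps:4.1f}   cond(A) = {np.linalg.cond(A):.3e}")
```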
moreover , in terms of accuracy , the benefits coming from the use of such stable bases are more significant in a local approach than in a global one .in fact , generally , while in the global case a large number of truncated terms of the svd must be dropped to preserve stability , a local technique requires only few terms are eliminated , thus enabling the method to be much more accurate .concerning the computational complexity of the algorithm , the use of the so - called block - based space partitioning data structure enables us to efficiently organize points among the different subdomains , .then , for each subdomain a local rbf problem is solved with the use of a stable basis .the main and truly high cost , involved in this step , is the computation of the svd . to avoid this drawback , techniques based on krylov space methodsare employed , since they turn out to be really effective , .a complexity analysis supports our findings .the guidelines of the paper are as follows . in section [ wsvd ], we present the wsvd bases , computed by means of the lanczos algorithm , in the general context of global approximation .such method is used coupled with the pu approach which makes use of an optimized searching procedure , as shown in section [ pum_wsvd ] .the proposed approach turns out to be stable and efficient , as stressed in section [ compl_anal_wsvd ] . in sections[ ne_lan ] and [ applicazione ] extensive numerical experiments and applications , carried out with both globally and compactly supported rbfs of different orders of smoothness , support our results .moreover , all the matlab codes are made available to the scientific community in a downloadable free software package : http://hdl.handle.net/2318/1527447 .in subsection [ rbf_prelim ] we briefly review the main theoretical aspects concerning rbf interpolation , , while the remaining subsections are devoted to the efficient computation of the wsvd basis via krylov space methods .our goal is to recover a function , being a bounded set in , using a set of samples of on pairwise distinct points , namely ^t ] ) and strictly positive definite in for ( see ) .ll ' '' '' rbf & + ' '' '' gaussian ( ga ) & + ' '' '' inverse multiquadric ( imq ) & + ' '' '' mat ( m6 ) & + ' '' '' mat ( m4 ) & + ' '' '' wendland ( w6 ) & + ' '' '' wendland ( w4 ) & + the real coefficients ^t ] such that or , equivalently , an invertible value matrix {i , j=1}^n ] . at first ,a partition of unity structure , composed by circular patches of radius : and whose centres , are a grid of points on , is generated .as in , the number of pu subdomains is chosen so that .this choice and lead to a reliable partition of unity structure since , in this way , patches form a covering of the domain . in order to find the points belonging to the different subdomains andconsequently solve , with the use of stable bases , small interpolation problems , we propose a new partitioning structure .it leads to a natural searching procedure that turns out to be really cheap in terms of computational complexity . 
to this aimwe first cover with square blocks , where the number of blocks along one side of the unit square is : in this way the width of blocks is equal to the subdomain radius .this choice can appear trivial , but on the contrary it enables us to consider in the searching process an optimized number of blocks .blocks are numbered from to ( bottom to top , left to right ) .thus , with a repeated use of a quicksort routine the set is partitioned by the block - based partitioning structure into subsets , , where are the points stored in the -th _ neighbourhood _ , i.e. in the -th block and in its eight neighbouring blocks . in such framework , we are able to get an optimal procedure to find the nearest points .in fact , given a subdomain , whose centre belongs to the -th block , we search for all data lying in the -th subdomain only among those lying in the -th neighbourhood . the same partitioning structure , in case of compactly supported rbfs ( csrbfs ) , must be considered locally for each subdomain .in fact , in order to build the -th stable approximation matrix , among all points lying in the -th subdomain , only those belonging to the support of the csrbf must be considered . among several routines which can be employed to determine the neighboring points , we choose the block - based data structureanyway , we stress that the algorithm , here proposed , works in any dimension , while the block - based data structure is only implemented for , . thus in higher dimensions such structure must be replaced by standard routines , such as kd - trees , . for easiness of the reader ,the procedure is here described in the unit square , however , following , any extension to irregular domains is possible .since the stable wsvd - pu algorithm is characterized by the construction of local rbf stable approximants , we consider the local data sets , composed by points , .thus , the complexity of this algorithm is influenced by the following computational issues : 1 .organize by means of a partitioning structure the nodes among the subdomains , 2 .compute the stable basis on each subdomain concerning the efficient organization of points , an extensive complexity analysis , briefly shacked in subsection [ comp_ps ] , can be found in .the cost associated to the computation of a local stable basis is investigated in subsection [ compl_lan ] . performing the lanczos procedure on a matrix , where is the number of vectors computed by the algorithm , i.e. is the _ good _ low rank approximation , ( a priori unknown in our case ) , .given the interpolation matrix defined on the , the lanczos method forms the matrix for after iterations .usually we have , but in some cases the maximum number of iterations can be reached and so , in a more general setting , .this routine requires : time complexity .thus for each subdomain the upper bound for the computational time of the lanczos procedure is given by the right - hand side of . 
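As a sketch of the local low-rank step, the snippet below builds a Gaussian kernel block on a toy set of local nodes and extracts a truncated SVD with scipy's Lanczos-based svds routine. This is only a stand-in for the Lanczos procedure and the WSVD basis construction discussed above; the node set, shape parameter, and the number of retained terms k are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import svds
from scipy.spatial.distance import cdist

def truncated_local_basis(local_nodes, eps, k):
    """Truncated SVD of a local Gaussian kernel block (k largest triplets)."""
    A = np.exp(-(eps * cdist(local_nodes, local_nodes)) ** 2)
    u, s, vt = svds(A, k=k)            # Lanczos-based partial SVD
    order = np.argsort(s)[::-1]        # sort singular values in decreasing order
    return u[:, order], s[order], vt[order]

rng = np.random.default_rng(2)
pts = rng.random((60, 2))              # toy local node set of one subdomain
U, S, Vt = truncated_local_basis(pts, eps=1.0, k=10)
print("retained singular values:", np.round(S, 6))
```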
in case of sparse matrices , such as the ones arising from the use of csrbfs, the lanczos procedure can be performed in : time complexity , where is the number of non - zero entries .then a singular value decomposition is applied to the matrix .we remark that performing a singular value decomposition on a matrix requires time complexity .the singular value decomposition for each subdomain is applied to the matrix ; once more we stress that .thus for each subdomain the singular value decomposition can be performed in : time complexity .let us now focus on the block - based partitioning structure used to organize the data sites in blocks .we remark that such efficient organization of points is specifically implemented for 2d data sets .anyway , the proposed wsvd - pu algorithm is robust enough to work in any dimension , provided that a different partitioning structure is performed .let be the number of data sites belonging to a strip .the procedure used to store the points among the different subdomains is based on recursive calls to a _ quicksort _ routine which requires , where is the number of elements to be sorted .thus , letting the average number of points lying in a strip , the computational cost needed to organize the points among the different subdomains is : concerning the searching procedure , for each subdomain a quicksort procedure is used to order distances .thus observing that the data sites in a neighbourhood are about and taking into account the definitions of and , the complexity can be estimated by : the estimate follows from the fact that we built a partitioning structure strictly related to the size of the subdomains and ad hoc for the pu method .the same computational cost , in case of csrbfs , must be considered locally for each subdomain , to build the sparse interpolation and evaluation matrices . in such stepswe usually have a relatively small number of nodes , with , where the index identifies the -th subdomain .this section is devoted to point out , by means of extensive numerical simulations , stability and accuracy of the wsvd - pu interpolant . to thisaim comparisons with the standard pu interpolant will be carried out .experiments are performed considering , uniformly random halton nodes , a grid of subdomain centres and a grid of evaluation points , which are contained in the unit square \times [ 0 , 1] ] . moreover , in order to point out the versatility of the proposed method , different kernels with different order of smoothness are considered , see table [ tab_1 ] .the error is computed using as test function the well - known franke s function : +\frac{3}{4}\exp\left[-\frac{(9x_1 + 1)^2}{49}-\frac{9x_2 + 1}{10}\right]\\ & + \frac{1}{2 } \exp\left[-\frac{(9x_1 - 7)^2+(9x_2 - 3)^2}{4}\right]-\frac{1}{5 } \exp\left[-(9x_1 - 4)^2-(9x_2 - 7)^2\right].\end{aligned}\ ] ] in figure [ fig_1 ] we compare the rmses obtained by means of the wsvd - pu interpolant ( solid line ) with the ones obtained performing the classical pu method ( dashed line ) . 
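For reference, Franke's function and a basic error check can be reproduced in a few lines. The snippet below fits a plain global Gaussian RBF interpolant (not the WSVD-PU method) on Halton nodes and reports the RMSE on an evaluation grid; the node count, grid size, and shape parameter are illustrative choices rather than the exact experimental setting.

```python
import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import cdist

def franke(x, y):
    """Franke's test function on the unit square."""
    return (0.75 * np.exp(-((9*x - 2)**2 + (9*y - 2)**2) / 4)
            + 0.75 * np.exp(-((9*x + 1)**2) / 49 - (9*y + 1) / 10)
            + 0.5  * np.exp(-((9*x - 7)**2 + (9*y - 3)**2) / 4)
            - 0.2  * np.exp(-(9*x - 4)**2 - (9*y - 7)**2))

# Halton interpolation nodes and a grid of evaluation points (illustrative sizes)
nodes = qmc.Halton(d=2, seed=0).random(289)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                            np.linspace(0, 1, 40)), axis=-1).reshape(-1, 2)

eps = 3.0
A = np.exp(-(eps * cdist(nodes, nodes))**2)              # Gaussian kernel matrix
coeffs = np.linalg.solve(A, franke(nodes[:, 0], nodes[:, 1]))
pred = np.exp(-(eps * cdist(grid, nodes))**2) @ coeffs
rmse = np.sqrt(np.mean((pred - franke(grid[:, 0], grid[:, 1]))**2))
print(f"RMSE of the plain global interpolant: {rmse:.2e}")
```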
as tolerance value in we set .these graphs point out that the use of the wsvd - pu local approach reveals a larger stability than the standard pu interpolant .moreover , the use of a local method enables us to improve the rmse for the optimal shape parameter in case of flat kernels , see figure [ fig_1 ] and table [ tab_1 ] .this is consistent with the fact that in a local stable method , differently from , we have to solve small linear systems and therefore few terms are neglected in .furthermore , from figure [ fig_1 ] we can note that the wsvd - pu method turns out to be more effective with flat kernels , while for more picked bases the improvement of using stable bases becomes negligible as the order of bases function decreases .thus , from our numerical experiments , we can observe three kinds of behavior depending on different rbf regularity classes .specifically , the features of such classes , which differ both in terms of stability and accuracy from the standard basis , can be summarized as : * for kernels : improvement of stability and of the optimal accuracy ; * for kernels , with : improvement of stability and same optimal accuracy ; * for kernels : same stability and same optimal accuracy . for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] -1.6 cm for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] + for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] -1.6 cm for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] + for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] -1.6 cm for , and kernels .the classical pu interpolant is plotted with dashed line and the wsvd - pu approximant with solid line . from left to right , top to bottom we consider the ga , imq , m6 , w6 , m4 and w4 kernels , respectively.,title="fig:",width=332 ] ' '' '' & method & rmse & & rmse & & rmse & & rmse & + ' '' '' & pu & & & & & & & & + & wsvd - pu & & & & & & & & + ' '' '' & pu & & & & & & & & + & wsvd - pu & & & & & & & & + ' '' '' & pu & & & & & & & & + & wsvd - pu & & & & & & & & + moreover , since we are interested in pointing out the efficiency of the proposed wsvd - pu algorithm , in table [ tab_3 ] we also report the cpu times obtained by using our stable interpolation method with the gaussian rbf as local approximant , for each of the three different data sets .tests have been carried out on a intel(r ) core(tm ) i3 cpu m330 2.13 ghz processor . 
' '' '' & & & + ' '' '' cpu [ s ] & & & +in this section we focus on an application to earth s topography , which consists in approximating with our algorithm a set of real scattered data . in particular, we consider the so - called _ glacier _ data set .it is composed by points representing digitized height contours of a glacier , .the difference between the highest and the lowest point is m. such points , differently from the halton data , are not quasi - uniform .furthermore , they are distributed on an irregular domain ^ 2 ] , we reduce such points taking only those lying in by means of the technique described in . to obtain reliable and numerically significant results of the error , in this applicationit is more appropriate to use the relative rmse ( rrmse ) : in figure [ fig_2 ] , we show how the rrmses vary with respect to the shape parameter $ ] . in doing so, we consider the following kernels : ga , w6 and m4 .as already shown , the results point out once more that the proposed approach is stable and moreover turns to be effective also in applications .the errors for the optimal shape parameter are shown in table [ tab_2 ] .since we refer to points with highly varying densities and thus truly ill - conditioned matrices , the classical pu method does not give acceptable approximations .consequently , we do not report the errors obtained with this standard algorithm . with ga , w6 and m4 kernels for the wsvd - pu approximant.,title="fig:",width=332 ] -1.6 cm with ga , w6 and m4 kernels for the wsvd - pu approximant.,title="fig:",width=332 ] ' '' '' rrmse & & rrmse & & rrmse & + ' '' '' & & & & & +the first , third and fourth authors are partially supported by the university of torino through research project metodi numerici nelle scienze applicate . the second and fifth authors are partially supported by the funds of the university of padova , project cpda124755 multivariate approximation with application to image reconstruction .w. pogorzelski , integral equations and their applications .i , translated from the polish by jacques j. schorr - con , a. kacner and z. olesiak .international series of monographs in pure and applied mathematics , vol .pergamon press , oxford , 1966 .a. safdari - vaighani , a. heryudono , e. larsson , a radial basis function partition of unity collocation method for convection - diffusion equations arising in financial applications , j. sci .* 64 * ( 2015 ) , 341367 .h. wendland , fast evaluation of radial basis functions : methods based on partition of unity , in approximation theory x : wavelets , splines , and applications , c. k. chui , l. l. schumaker , j. stckler ( eds . ) , vanderbilt univ . press ,nashville , tn , 2002 , pp .
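To summarise the structure of the method in executable form, the following sketch blends plain local Gaussian RBF interpolants with compactly supported Shepard weights on circular patches. It illustrates only the partition-of-unity step; the stabilised WSVD bases, the block-based searching structure, and the parameter choices of the actual algorithm are not reproduced, and the Wendland weight, radius, and test data below are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def wendland_c2(r):
    """Compactly supported Wendland C2 function on [0, 1]."""
    return np.where(r < 1, (1 - r)**4 * (4*r + 1), 0.0)

def pu_interpolate(nodes, values, centres, radius, eval_pts, eps=2.0):
    """Partition-of-unity RBF interpolant with Shepard weights (sketch)."""
    num = np.zeros(len(eval_pts))
    den = np.zeros(len(eval_pts))
    for c in centres:
        mask = np.linalg.norm(nodes - c, axis=1) <= radius
        if mask.sum() < 3:                       # skip nearly empty patches
            continue
        loc_nodes, loc_vals = nodes[mask], values[mask]
        A = np.exp(-(eps * cdist(loc_nodes, loc_nodes))**2)
        coeffs = np.linalg.solve(A, loc_vals)    # plain local Gaussian fit
        w = wendland_c2(np.linalg.norm(eval_pts - c, axis=1) / radius)
        local = np.exp(-(eps * cdist(eval_pts, loc_nodes))**2) @ coeffs
        num += w * local
        den += w
    return np.where(den > 0, num / np.maximum(den, 1e-300), 0.0)

rng = np.random.default_rng(3)
pts = rng.random((400, 2)); vals = np.sin(4 * pts[:, 0]) * pts[:, 1]
centres = np.stack(np.meshgrid(np.linspace(0, 1, 5),
                               np.linspace(0, 1, 5)), axis=-1).reshape(-1, 2)
approx = pu_interpolate(pts, vals, centres, radius=0.35, eval_pts=pts)
```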
in this paper we propose a new stable and accurate approximation technique which is extremely effective for interpolating large scattered data sets . the partition of unity ( pu ) method is performed using radial basis functions ( rbfs ) as local approximants and locally supported weights . in particular , the approach consists in computing a stable basis for each pu subdomain . such a technique , by taking advantage of the local scheme , leads to a significant benefit in terms of stability , especially for flat kernels . furthermore , an optimized searching procedure is applied when building the local stable bases , thus rendering the method more efficient . meshfree approximation , radial basis functions , partition of unity , scattered data interpolation , numerical stability , krylov space methods . 65d05 , 65d15 , 65y20 .
in 1981 , dicke proposed a concept of interaction - free measurement ( ifm ) .however , current discussion of ifm appears from the following problem stated by elitzur and vaidman : `` let us assume there is an object that absorbs a photon with strong interaction if the photon approaches the object closely enough .can we examine whether or not the object exists without its absorption ? ''the reason that we do not want to let the object absorb the photon is that it might lead to an explosion , for example .elitzur and vaidman themselves present a method of the ifm that is inspired by the mach - zehnder interferometer .then a more refined one is proposed by kwiat _et al_. .an experiment of their ifm is reported in ref .the ifm finds wide application in quantum information processing ( the bell - basis measurement , quantum computation , and so on ) . according to the ifm proposed by kwiat _et al_. , the absorbing object is put in the interferometer that consists of beam splitters , and we inject a photon into it to examine whether or not the object exists .the probability that we can find the object in the interferometer arrives at unity under the limit of in the case where the interaction between the object and the photon is strong enough and perfect . in this paper, we consider the ifm of kwiat _ et al_. with imperfect interaction . in ordinary ifm ,the absorbing object is expected to absorb a photon with probability unity when the photon approaches the object closely enough . however , in this paper , we assume that the photon is absorbed with probability ( ) and it passes by the object without being absorbed with probability when it approaches close to the object .we estimate the success probability of the ifm , namely the probability that we can find the object without the photon absorbed , under this assumption .this problem has been investigated in ref . already . in ref . , although a correct approximating equation of the success probability of the ifm with the imperfect interaction is derived , its derivation is wrong .hence , we give a right treatment of this problem in this paper .interferometer of kwiat _et al_. for the ifm . ] in the rest of this section , we give a short review of the ifm proposed by kwiat _et al_. they consider an interferometer that consists of beam splitters as shown in fig .[ kwinterferometer ] .we describe the upper paths as and lower paths as , so that the beam splitters form the boundary line between the paths and the paths in the interferometer .we write a state with one photon on the paths as and a state with no photon on the paths as .this notation applies to the paths as well .the beam splitter in fig .[ kwinterferometer ] works as follows : [ the transmissivity of is given by , and the reflectivity of is given by in eq .( [ definition - beam - splitter - b ] ) . 
]let us throw a photon into the lower left port of in fig .[ kwinterferometer ] .if there is no object on the paths , the wave function of the photon that comes from the beam splitter is given by if we assume , the photon that comes from the beam splitter goes to the upper right port of with probability unity .next , we consider the case where there is an object that absorbs the photon on the paths .we assume that the object is put on every path that comes from each beam splitter , and all of these objects are the same one .the photon thrown into the lower left port of can not go to the upper right port of because the object absorbs it .if the incident photon goes to the lower right port of , it has not passed through paths in the interferometer . therefore , the probability that the photon goes to the lower right port of is equal to the product of the reflectivities of the beam splitters .it is given by . in the limit of , approaches as follows : \nonumber \\ & = & 1.\end{aligned}\ ] ] from the above discussion , we can conclude that the interferometer of kwiat et al . directs an incident photon from the lower left port of with probability at least as follows : ( 1 )if there is no absorbing object in the interferometer , the photon goes to the upper right port of , and ( 2 ) if there is the absorbing object in the interferometer , the photon goes to the lower right port of . furthermore , if we take large , we can set arbitrarily close to .therefore , we can examine whether or not the object exists in the interferometer .the ifm introduced in the former section is realized by beam splitters and interaction between the absorbing object and the photon . in this section ,we consider the case where the interaction is not perfect .( we regard the beam splitters as accurate enough . )we assume that the photon is absorbed with probability and it passes by the object without being absorbed with probability when it approaches close to the object .we estimate the success probability of the ifm under these assumptions .we assume the following transformation in fig .[ kwinterferometer ] .the photon that comes from each beam splitter to the upper path suffers where and . is the state where the object absorbs the photon .we assume that it is normalized and orthogonal to , where and . from now on , for simplicity, we describe the transformations that are applied to the photon as matrices in the basis .writing we can describe the beam splitter defined in eq .( [ definition - beam - splitter - b ] ) as where and the absorption process defined in eq .( [ imperfect - absorption - process ] ) as where .the matrix is not unitary because the process defined by eq .( [ imperfect - absorption - process ] ) causes absorption of the photon ( dissipation or decoherence ) .results of numerical calculation of the success probability from eqs .( [ definition - basis - vectors ] ) , ( [ definition - matrix - b ] ) , ( [ definition - matrix - a ] ) , and ( [ definition - fidelity - ifmgate ] ) . with fixing , we plot as a function of and link them together by solid lines . is the rate at which the object fails to absorb the photon . and are dimensionless quantities . is the number of the beam splitters .four cases of , , , and are shown in order from top to bottom as solid curves . 
]the probability that an incident photon from the lower left port of passes through the beam splitters and is detected in the lower right port of in fig .[ kwinterferometer ] is given by we plot results of numerical calculations of the success probability defined by eqs .( [ definition - basis - vectors ] ) , ( [ definition - matrix - b ] ) , ( [ definition - matrix - a ] ) , and ( [ definition - fidelity - ifmgate ] ) in fig .[ imperfectifm - n - exactp ] . with fixing , we plot as a function of and link them together by solid lines . in fig .[ imperfectifm - n - exactp ] , the four cases of , , , and are shown in order from top to bottom . seeing fig .[ imperfectifm - n - exactp ] , we have the following question .if , at what value does converge in the limit of ? at first glance , seems to depend on .however , fig .[ imperfectifm - n - exactp ] lets us expect . in the next section , we show that this expectation is true .in this section , we examine defined in eq .( [ definition - fidelity - ifmgate ] ) for large . for this purpose , first we derive an exact formula of , and second we expand in powers of with fixing .first we derive the exact formula of .we note the following .eigenvalues of the matrices and are given by and .thus , never diverges to infinity .we let to be an upper triangular matrix by unitary transformation as follows : where ^{2 } , \nonumber \\ t & = & 4\sin^{2}\theta+[(1-\sqrt{\eta})\cos\theta - r]^{2},\end{aligned}\ ] ] and , \nonumber \\ y & = & -\frac{1}{\sqrt{st}}(1-\sqrt{\eta})\sin^{2}\theta [ 2(1+\sqrt{\eta})\cos\theta \nonumber \\ & & \quad + \sqrt{2 } \sqrt{1 - 6\sqrt{\eta}+\eta+(1+\sqrt{\eta})^{2}\cos 2\theta } ] , \nonumber \\ z & = & \frac{1}{2}[(1+\sqrt{\eta})\cos\theta+r].\end{aligned}\ ] ] we obtain by induction as follows : where from the above calculations , we obtain the exact formula of as second we expand in powers of with fixing .( we note . )we can expand components of the matrices , , and in powers of as follows : \nonumber \\ & & \quad\quad + o(\frac{1}{n^{5}}),\nonumber \\ u_{10 } & = & \frac{\pi}{2n}\frac{\sqrt{\eta}}{1-\sqrt{\eta } } [ 1+\frac{\pi^{2}}{24n^{2}}\frac{2 - 3\eta+\eta^{3/2}}{(1-\sqrt{\eta})^{3 } } ] \nonumber \\ & & \quad\quad + o(\frac{1}{n^{5}}),\nonumber \\ u_{11 } & = & -1+\frac{\pi^{2}}{8n^{2}}\frac{\eta}{(1-\sqrt{\eta})^{2}}+o(\frac{1}{n^{4 } } ) , \label{components - u - expansion}\end{aligned}\ ] ] and +o(\frac{1}{n^{4 } } ) , \nonumber \\ y & = & -\frac{\pi}{2n } [ 1-\frac{\pi^{2}}{24n^{2}}]+o(\frac{1}{n^{5 } } ) , \nonumber \\ z & = & 1-\frac{\pi^{2}}{8n^{2}}\frac{1-\eta}{(1-\sqrt{\eta})^{2}}+o(\frac{1}{n^{4 } } ) .\label{expansion - components - d}\end{aligned}\ ] ] next , from eqs .( [ exact - components - d - n-1 ] ) and ( [ expansion - components - d ] ) , we expand components of in powers of with fixing . and can be written as follows : , \label{expansion - component - x } \\ z & = & 1-\frac{\pi^{2}}{8n}\frac{1-\eta}{(1-\sqrt{\eta})^{2 } } + o(\frac{1}{n^{2 } } ) .\label{expansion - component - z}\end{aligned}\ ] ] in eq . ([ expansion - component - x ] ) , a factor appears .we never expand in powers of and regard it as a constant . because is a large number, we can assume .thus , we can write in the form , + o(\frac{1}{n^{3 } } ) . 
\label{expansion - component - y}\ ] ] substituting eqs .( [ definition - matrix - b ] ) , ( [ definition - matrix - u ] ) , ( [ definition - matrix - d - n-1 ] ) , ( [ components - b - expansion ] ) , ( [ components - u - expansion ] ) , ( [ expansion - component - x ] ) , ( [ expansion - component - z ] ) , and ( [ expansion - component - y ] ) into eq . ( [ exact - formula - fidelity - ifmgate ] ) , we obtain an expansion of the success probability in powers of as follows : hence , we can let get close to unity arbitrarily by increasing . [if we make larger , the reflectivity of the beam splitter becomes larger and gets close to unity . ]results of numerical calculation of the success probability as a function of with . is the rate at which the object fails to absorb the photon . and are dimensionless quantities . is the number of the beam splitters .a thick solid curve represents an exact result from eqs .( [ definition - basis - vectors ] ) , ( [ definition - matrix - b ] ) , ( [ definition - matrix - a ] ) , and ( [ definition - fidelity - ifmgate ] ) .a thin solid curve represents an approximate result from eq .( [ approximating - eq - p ] ) . ] from eq .( [ expansion - success - probability ] ) , we obtain the following approximating equation of for large , in fig .[ imperfectifm - n - approxp ] , we plot results of numerical calculation of the success probability as a function of with .a thick solid curve represents an exact result from eqs .( [ definition - basis - vectors ] ) , ( [ definition - matrix - b ] ) , ( [ definition - matrix - a ] ) , and ( [ definition - fidelity - ifmgate ] ) .a thin solid curve represents an approximate result from eq .( [ approximating - eq - p ] ) . seeing fig .[ imperfectifm - n - approxp ] , we find that eq .( [ approximating - eq - p ] ) is a good approximation to for large .we show that even if the interaction between the object and the photon is imperfect , we can let the success probability of the ifm get close to unity arbitrarily by making the reflectivity of the beam splitter larger and increasing the number of the beam splitters .we obtain an approximating equation of with the imperfect interaction . to overcome the imperfection of the interaction , we need to prepare a large number of beam splitters and let their transmission rate get smaller .99 r. h. dicke , ` interaction - free quantum measurements : a paradox ? ' , am .j. phys . * 49 * , 925930 ( 1981 ) .a. c. elitzur and l. vaidman , ` quantum mechanical interaction - free measurements ' , found* 23 * , 987997 ( 1993 ) .p. kwiat , h. weinfurter , t. herzog , a. zeilinger , and m. a. kasevich , ` interaction - free measurement ' , phys .lett . * 74 * , 47634766 ( 1995 ) .p. g. kwiat , a. g. white , j. r. mitchell , o. nairz , g. weihs , h. weinfurter , and a. zeilinger , ` high - efficiency quantum interrogation measurements via the quantum zeno effect ' , phys .. lett . * 83 * , 47254728 ( 1999 ) .h. azuma , ` interaction - free generation of entanglement ' , phys .a * 68 * , 022320 ( 2003 ) .h. azuma , ` interaction - free quantum computation ' , phys .a * 70 * , 012318 ( 2004 ) .
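A compact numerical check of the success probability discussed above can be written as follows. The chain below applies an imperfect absorber on the upper path after each of the first N-1 beam splitters and reads off the amplitude in the lower output port; this ordering and the matrix conventions are plausible assumptions about the setup rather than a transcription of the paper's figure.

```python
import numpy as np

def success_probability(N, eta):
    """Probability that the photon exits the lower-right port of the last
    beam splitter, for N beam splitters and an object that lets the photon
    pass unabsorbed with probability eta (eta = 0 is a perfect absorber)."""
    theta = np.pi / (2 * N)
    B = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # beam splitter (upper, lower basis)
    A = np.diag([np.sqrt(eta), 1.0])                  # imperfect absorber on the upper path
    psi = np.array([0.0, 1.0])                        # photon enters the lower port
    for _ in range(N - 1):
        psi = A @ (B @ psi)
    psi = B @ psi                                     # last beam splitter
    return abs(psi[1]) ** 2

for N in (10, 50, 200, 1000):
    print(N, round(success_probability(N, eta=0.0), 4),   # perfect interaction
             round(success_probability(N, eta=0.1), 4))   # imperfect interaction
```

For eta = 0 this reproduces the product of the reflectivities, cos^(2N)(pi/2N), which tends to unity as N grows, consistent with the limit computed in the text.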
in this paper , we consider interaction - free measurement ( ifm ) with imperfect interaction . in the ifm proposed by kwiat _ et al_. , we assume that the interaction between an absorbing object and a probe photon is imperfect , so that the photon is absorbed with probability ( ) and passes by the object without being absorbed with probability when it approaches close to the object . we derive the success probability of finding the object without the photon being absorbed under this imperfect interaction as a power series in , and show the following result : even if the interaction between the object and the photon is imperfect , the success probability of the ifm can be made arbitrarily close to unity by making the reflectivity of the beam splitter larger and increasing the number of the beam splitters . moreover , we obtain an approximating equation of for large from the derived power series in .
electricity demand in the residential sector can be decomposed into a combination of individual appliances aggregated by individual households .these appliances are tied together through different activities performed by users throughout a day and each of these activities may involve one or more of these power consuming devices .these appliances are conventionally managed by each user according to his / her preferences , e.g. one may decide to wash clothes early in the morning before he leaves for work , and washing clothes is an activity or task which involves the use of washing machine , dryer etc .different users can perform this task at different hours of the day according to their convenience . and many of such acvitives / task are flexible and can be performed at any time during a day . on the other hand , there may be certain activities which can be regarded as essential and which needs to be performed daily at exactly specified time slots e.g. after sunset from 7 pm till mid night one has to turn on the lights . such activities and the devices involved in these activities then contribute towards electricity load which is essential and which has strict scheduling requirements . in a traditional grid , the dominant setup has been to serve the preferences of the users as the priority need and match electricity supply to the instantaneous demand .this however requires constant manipulation of electricity production levels . as a consequence power generating plantssuffer large deviations from their steady operating points which impose additional costs to the overall system .all this is changing as the grid is becoming smart - .a smart grid can help the operator in shaping the demand ( e.g. schedule the washing machine at a later time slot when there is less demand ) so as to reduce the overall societal cost for them , this can be done through the flattening of the demand curve . to achieve a flatter demand curve, it can propose incentives ( e.g. discount ) to users to change their preference levels for different activities .users can then allow the grid to manage and schedule certain appliances to enjoy these benefits at the expense of suffering some level of inconvenience . in this paper , we attempt to quantify the inconvenience levels , by varying the number of appliances that participate through deviation from their preferred scheduling time slots and also by varying the number of time slots each activity deviates .we can thus identify a compromise between the grid operator objectives and user convenience levels .we believe such understanding is beneficial for the grid to design effective incentive to achieve load balancing in a smart grid .there are some recent studies on this problem . in authors design incentives and propose scheduling algorithms considering strictly convex functions of costs .users are given incentives to move to off peak hours and these incentives are proposed using game theoretic analysis .however they do not consider or quantify the inconvenience levels of the users . 
in authors propose pricing scheme for users in order to achieve a perfectly flat demand curve .they show that finding an optimum schedule is np - hard problem .they propose centralized and distributed algorithms depending on the degree of knowledge of the state of the network .the authors in propose a strategy to achieve a uniform power consumption over time .their algorithm schedules the devices in such a way that a target power level is not exceeded in each time slot .however again the authors do not take into account the inconvenience level of users while designing these algorithms .in the authors use convex optimization tools and solve a cooperative scheduling problem in a smart grid .the authors in use a water - filling based scheduling algorithm to obtain a flat demand curve .the proposed algorithm does not require any communication between scheduling nodes .the authors also study the possible errors in demand forecast and incentives for customer participations .it should be noted that the objective of all these studies is to achieve flat demand curve for the grid .however in this paper we study the compromise between the grid objective of flat demand vis - a - vis the user inconvenience levels , as the acceptance from the users is the key to have smart grid to be succeed .the rest of the paper is organized as follows . in sectionii we describe the load model , our approach and problem formulation .proposed solution , algorithms and metric for comparing various schedules are described in section iii .simulation results are presented in section iv while the paper is concluded in section v.in this paper we consider two types of loads in the grid i.e. essential and flexible .essential load is due to essential activities and the devices involved in these activities have fixed scheduling needs .flexible load is due to flexible activities and the devices involved in these activities can have flexible scheduling requirements .there is a preferred scheduling time slot for these flexible activities and user feels most convenient if these activities are performed according to their preferences .however we assume a generalized framework that if some activity or task is declared as flexible then it can be scheduled in time slots either before or after the preferred time slot for this activity .for example , pre - cooling a room is an activity that can be scheduled before the preferred time slot , while cloth washing is an activity that can be scheduled after the preferred time slot .we understand that there is no activity that can be scheduled both before and after the preferred time slot , but in this study , we just assume a generic load with such flexibility to facilitate the problem formulation .the level of inconvenience is measured by the deviation of an activity from its specified time slot .the more an activity is scheduled beyond its specified preferred time slot ( either to the left or to the right of it ) the more inconvenience a user faces . in the rest of the paper ,the terms , devices , activities and tasks are used interchangeably .similarly the terms , flexible and shiftable are also used interchangeably . 
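Before turning to the benchmark schedules defined next, a minimal data model for the essential and shiftable loads might look as follows; the slot count, the per-slot consumptions, and the task parameters are purely illustrative, and the two benchmark schedules introduced below can be computed directly on top of this representation.

```python
from dataclasses import dataclass
import numpy as np

T = 24                                    # number of time slots in a day (illustrative)
essential_load = np.full(T, 1.0)          # essential consumption per slot (toy values)

@dataclass
class FlexibleTask:
    per_slot_load: list[float]    # consumption in each slot while the task runs
    preferred_start: int          # most convenient starting slot for the user
    max_shift: int                # tolerated deviation (in slots) from the preference

def add_task(load, task, start):
    """Return a copy of the load profile with one task scheduled at `start`."""
    load = load.copy()
    for k, p in enumerate(task.per_slot_load):
        load[start + k] += p
    return load

tasks = [FlexibleTask([2.0, 1.0], preferred_start=19, max_shift=3),   # e.g. washing
         FlexibleTask([1.5],      preferred_start=7,  max_shift=2)]   # e.g. pre-heating
```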
given a set of tasks and their energy consumptions , we propose two extreme schedules to serve as bounds .the first schedule is optimal for the grid in terms of load balancing and the second schedule is best for the user in terms of its preference for non - essential tasks : * * grid convenient ( gc ) schedule * : for the given set of essential and shiftable loads , this represents the best schedule from the perspective of the grid .this schedule does not care about the user preferences in scheduling essential as well as non - essential tasks .instead the objective of this schedule is to achieve maximum load balancing across various time slots .we can obtain this schedule by equally dividing all the load in each time slot . * * user convenient ( uc ) schedule * : this schedule is the best schedule from the customer s perspective .this is another extreme schedule which does not take into account the load balancing preferences of the grid .instead it schedules all the non - essential tasks at the most preferred time slots specified by the users .this schedule is most convenient for the users . any other schedule for the given set of loads will lie between these two extremes .for a given set of essential and shiftable loads , the gc schedule is practically impossible to achieve because there is in reality , not much flexibility in shifting the essential loads . since we assume that we can only shift the non - essential loads , we study the region between these two extreme schedules through the following parameters : * we change the allowable time slot deviation of non - essential devices from their preferred time slots , serving as a proxy to changing the convenience levels of users .it allows for us to schedule a device within a flexible number of time slots either to the left or to the right of its preferred time slot .* we vary the number of non - essential devices willing to be flexible . all the devices which declare themselves as non - flexiblewill then be treated as essential loads and will start exactly at their preferred time slots . through this study , results can influence the stakeholders involved in this system .the grid can define incentives by measuring the deviation of a given schedule from the perfectly flat demand profile while also keeping in view the gc schedule for given load conditions .similarly a customer can through feedback from its deviation of a given schedule from the uc schedule , readjust its preference conditions .let denote the set of all essential tasks .we assume that the electricity consumption data of these essential tasks on an hourly basis are known .let denote the consumption of electricity by all the essential tasks to be performed during the time slot ( maybe hour or half hour etc ) .let denote the set of all non - essential tasks .the electricity consumption of these non - essential tasks is also assumed to be known .for a non - essential task , let denote its total energy consumption .let denote the total time required to complete non - essential task .we allow for non - essential tasks to require several time slots to complete , and once the we decide to carry out this task at time then we can not stop it until it is completed .let denotes the best operating time for task . 
since we have to finish all the non - essential tasks within time slots, therefore we assume that ( to allow task to finish by time ) .let denote the portion of non - essential load scheduled at time .similarly , let contain the per time slot load of non - essential device .it should be noted that if device is schedule in time slot then , the objective of this schedule is to achieve perfect load balancing for the grid .this schedule re - distributes the essential as well as flexible load equally in all time slots .let us denote the perfectly flat schedule by .it can be obtained as follows : once again note that this schedule is not a practical schedule for the given set of essential and shiftable loads .however this schedule represents the ideal situation for the grid , and merely serve as benchmark purposes .the objective of this schedule is to carry out all the essential and non - essential tasks at their specified best time slots .this schedule can be determined by treating the non - essential tasks like essential load at the specified time slots .e.g. if for task the best time slot is and then , let us denote this schedule by .we determine and then the total scheduled load during time slot is given as , this is a practical schedule , representing the current status quo and the most convenience for the users .we can obtain a range of schedules between the above two extreme schedules by changing the number of devices declaring themself as flexible and also by defining the number of time slot deviations they are willing to tolerate .if all the devices declare them as non - flexible then we will obtain schedule ( uc schedule ) . on the other hand if all the all non essential devices declare them as flexible and are willing to tolerate maximum possible time slot deviation then such a schedule , though not perfectly flat ( due to the presence of essential loads in each time slot ) will be the best schedule for the grid for a given set of loads .let denote the set of devices which declare themselves as flexible .similarly let denote the time slot deviation that device is willing to tolerate .it means that we aim to schedule non - essential task within time slots of its preferred start time .this deviation can either be to the left or to the right of the preferred time slot .we assume here that in terms of inconvenience , the scheduling of a device time slots before its preferred time slot is equivalent to the inconvenience caused by scheduling the same device time slots after its preferred time slot .since ] . similarly if then we can only perform task before andschedule it in interval ] where and .all the non - essential devices have and they have to be scheduled exactly at time slot and completed after time slots .we then treat all such devices as essential load , determine ( as explained in the description of the uc schedule ) and then update the essential load accordingly i.e. we can now formulate the scheduling problem as follows ( we refer this problem as ) , subject to , \label{sch1_const2}\ ] ] eq ( [ sch1_const1 ] ) indicates that the total energy consumed by all the non - essential tasks should be equal to their total required energy . 
eq ( [ sch1_const2 ] ) says that if non - essential task starts at time then it should be finished at time without interruption . the start time of flexible devices can lie in the interval ] . in this section we discuss the solution of the above scheduling problems and design practical scheduling algorithms . the optimal solution of the above problem ( including all the special cases ) in general depends on the sequence or order in which we consider the non - essential loads . we illustrate this fact by the following simple example . + * example : * consider time slots . the essential load is given as . there are two 100% shiftable loads with demands per time slot given as and . there are two possible permutations : load 1 followed by load 2 , or load 2 followed by load 1 . in the first case , the final load per time slot is \{4,3,5 } with a peak load of 5 in the third time slot . in the second case , when load 2 is scheduled before load 1 , we obtain two possible schedules , \{7,3,2 } or \{2,3,7 } , both of which are optimal for this order and give a peak load of 7 . thus , in order to reduce the peak , we should schedule load 1 before scheduling load 2 . therefore the sequence in which we consider the non - essential loads can not be ignored . we now prove that the above problem is np - hard . [ np ] the defined problem is np - hard . we consider the special case of the defined problem where we restrict , and . we then prove that this special case is np - hard by a reduction from the multi - processor scheduling problem , which is a well - known np - hard problem in the strong sense . multi - processor scheduling problem : we are given identical machines in and jobs in . job has a processing time . the objective of the multi - processor scheduling problem is to assign jobs to the machines so as to minimize the maximum load over the machines . given an instance of the multi - processor scheduling problem , we can construct an instance of the decision version of the above special case of the defined problem in polynomial time as follows . let there be time slots that can be scheduled for tasks , let there be shiftable tasks , and let be the power consumed by the -th task . then the load of the tasks scheduled at time is equal to the load of the jobs assigned to the -th machine . in other words , minimizing the maximum load over the time slots is equivalent to minimizing the maximum working load assigned to a machine . thus , the instance of the multi - processor scheduling problem is equivalent to an instance of the special case of the defined load balancing problem , and , by this reduction , the defined load balancing problem is np - hard . despite the fact that the problem is np - hard , we can still design an algorithm to find the optimal schedules . however , the complexity of the optimal algorithm is exponential , which makes it infeasible when the number of flexible devices is large . we give the optimal algorithm for problem below . let hold the total power consumption , required completion time and lower and upper limits of the scheduling interval ( calculated based on the specified -time slot deviations ) for the non - essential tasks willing to be flexible . note that and . let denote all possible permutations of this set , i.e. all possible ways of arranging the shiftable tasks in this set . the total number of permutations is . let denote the total load scheduled in time slot . + 1 . update the essential load according to eq ( [ ess_update ] ) for all . + 2 . for all + 3 . initialize : . + 4 . for all do , + 5 . for all ]
if task starts at time then it will be completed at . we obtain all the schedules with all possible start times for task in lines 5 - 9 . from all these possible schedules we select , in line 10 , the one which gives the minimum peak . in line 11 we update the total load and repeat for the next task . finally , in line 14 we select the best order in which we should consider the shiftable tasks . the final schedule is given in line 15 . this is the optimal algorithm . however , the complexity of this algorithm is exponential , which may not be feasible when . we now discuss a special case of the above problem in which all non - essential tasks have the same power consumption , i.e. . the required number of time slots to complete each task may however be different , i.e. . in this case the sequence in which we pick the tasks for scheduling becomes irrelevant . based on this observation we now develop a low complexity sub - optimal algorithm . + + 1 . initialize : and . + 2 . for all do , + 3 . for all ] . once we obtain the schedule , then in lines 11 - 15 we restore the loads to their actual power consumption levels . we can measure the difference between any two schedules and , where denotes the load at time slot , by their mean square error , i.e. let denote any arbitrary schedule . as defined before , let denote the gc schedule while denotes the uc schedule . then we define , where measures the deviation of an arbitrary schedule , for the given set of load conditions , from the gc schedule , while measures the deviation of an arbitrary schedule from the uc schedule . the smaller the value of , the flatter the schedule , while a small value of means that the schedule is closer to the uc schedule . we consider a generalised simulation setup of residential household appliances where the electricity consumption is assumed to be constant over the consumption duration and is expressed in kwh . we generate the essential load in each time slot as a discrete uniform integer random variable taking values between 1kwh and 5kwh . each time slot represents one hour . in addition , we assume that there are 100 generalized devices which can be shifted . the total power consumption of each shiftable device is generated as a discrete uniformly distributed random variable taking values between 1kwh and 5kwh . the total duration of each shiftable task is generated as a discrete uniform random integer variable taking values between 1 and 5 time slots . we also assume that each shiftable device has a preferred time slot ; again , this preferred time slot is generated as a discrete uniform random integer variable . in fig . [ fig0:fig ] we compare the optimal algorithm with the sub - optimal algorithm . we assume that all the non - essential devices are 100% flexible , i.e. they can be scheduled at any time in the interval ] . for comparison we measure the mean square difference of both schedules from a perfectly flat schedule ( gc schedule ) . since the complexity of the optimal algorithm is exponential , we restrict ourselves to only 7 non - essential devices . we can see that although there is a small difference between the performance of the two schedules , the complexity reduction between the two algorithms is significant , and we will use only the sub - optimal algorithm in the following simulations .
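the following sketch ( an illustrative reading of the procedure described above , not the paper 's implementation ; all function names are ours ) collects the pieces in code : the uc schedule , the perfectly flat gc benchmark , the greedy minimum - peak placement of flexible tasks within their allowed deviation window , the exhaustive variant that additionally searches over task orders , and the mean - square - error metrics used to compare schedules . it reuses the flexibletask sketch given in section ii :

```python
# an illustrative reading (not the paper's implementation) of the schedules and
# metrics discussed above; it reuses the FlexibleTask sketch from section ii.
from itertools import permutations
import numpy as np

def uc_schedule(essential, tasks):
    """User-convenient schedule: every flexible task starts at its preferred slot."""
    load = np.array(essential, dtype=float)
    for t in tasks:
        load[t.preferred_start:t.preferred_start + t.duration] += t.energy_per_slot
    return load

def gc_schedule(essential, tasks):
    """Grid-convenient benchmark: total energy spread perfectly evenly over all slots."""
    total = sum(essential) + sum(t.energy_per_slot * t.duration for t in tasks)
    return np.full(len(essential), total / len(essential))

def greedy_min_peak(essential, tasks):
    """Place each task, in the given order, at the feasible start minimising the peak."""
    load = np.array(essential, dtype=float)
    horizon = len(essential)
    for t in tasks:
        lo = max(0, t.preferred_start - t.max_deviation)
        hi = min(horizon - t.duration, t.preferred_start + t.max_deviation)
        best_start, best_peak = lo, np.inf   # assumes every task fits in the horizon
        for s in range(lo, hi + 1):
            trial = load.copy()
            trial[s:s + t.duration] += t.energy_per_slot
            if trial.max() < best_peak:
                best_start, best_peak = s, trial.max()
        load[best_start:best_start + t.duration] += t.energy_per_slot
    return load

def exhaustive_min_peak(essential, tasks):
    """Exponential variant: additionally search over every ordering of the tasks."""
    return min((greedy_min_peak(essential, list(order)) for order in permutations(tasks)),
               key=lambda sched: sched.max())

def mse(a, b):
    """Mean square error between two schedules of equal length."""
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

# deviation of a schedule s from the two extremes: a small mse(s, gc_schedule(...))
# means a flatter schedule, a small mse(s, uc_schedule(...)) means a schedule
# closer to the users' preferred time slots.
```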
[ figure : ( ) vs number of 100% shiftable devices ] in fig . [ fig1:fig ] and fig . [ fig2:fig ] we vary the number of devices which are willing to be 100% flexible . all other devices which are not willing to be flexible are then treated as essential load and their power consumption is added to the essential load at their preferred time slots . we plot , the deviation of our proposed sub - optimal schedule from the gc schedule , in fig . [ fig1:fig ] . it is obvious that as more devices become flexible this deviation decreases . however , we can observe that after 40 devices the value of does not decrease much , which means that there is not much gain for the grid if more devices become flexible . the flatness level achieved by 40 devices is comparable to the flatness level achieved by 100 devices . in fig . [ fig2:fig ] we plot , the deviation of our proposed sub - optimal algorithm from the uc schedule . as more devices become flexible their scheduling is not performed at their most convenient time slots and thus users suffer more inconvenience . the level of inconvenience keeps increasing as more devices become flexible . when 40 devices are 100% flexible the value of is 517 and the corresponding value of is 206 . similarly , when all the devices are 100% flexible the value of is 1375 and that of is 154 . if we define the relative inconvenience level as , and the relative flatness level as , then for 40 devices % and % , while for 100 devices we have % and % . thus if a user only allows 40 devices to become 100% flexible he can reduce the relative inconvenience level by % while the non - flatness will only increase by % . * this result shows that there is a minimum level of customer participation in the smart grid that the grid should aim for , which would maximize the gain to the operator while at the same time imposing minimal inconvenience . based on this observation , the inconvenience to the customer will not be too significant . although these results may not be representative of the system , they do indicate a great research opportunity to reduce system - wide costs at relatively small individual inconvenience . * [ figure : ( ) vs number of 100% shiftable devices ] [ figure : ( ) vs number of 100% shiftable devices ] in fig . [ fig5:fig ] and fig . [ fig6:fig ] , we obtain various curves by varying the number of flexible devices . in these simulations we assume that the total number of available flexible devices can be up to 50 . when the devices declare themselves as flexible we can schedule them according to their specified x - time slot deviation levels . the load of devices declaring themselves as non - flexible is then added to the essential load , e.g. if 10 devices declare themselves as flexible then the load of the remaining 40 devices is added to the essential load . therefore the total load in all these curves is the same . as more devices become flexible , we can achieve more flatness , as evident in fig . [ fig5:fig ] . however , the gains in flatness diminish and are not very significant as the number of flexible devices is increased from 30 to 50 . * again , this could represent large system savings at minimal individual costs . * similarly , the gains in flatness do not increase much as the x - time slot deviation increases beyond 10 time slots . on the other hand , in fig . [ fig6:fig ] we can see that increasing the number of flexible devices significantly increases the inconvenience levels of users . when 50 devices are flexible users experience much more inconvenience compared to 30 flexible devices .
increasing the x - time slot deviation also increases user inconvenience . * hence , much further research is required to quantify the trade - off between the system benefit and the users ' participation and the inconvenience caused . * from these observations , in our test case , we can conclude that significant gains in flatness can be achieved by declaring a small number of devices as flexible and keeping the x - time slot deviation up to 10 time slots . how this translates to larger , more representative systems needs further examination . [ figure : ( ) vs x - time slot deviation ] [ figure : ( ) vs x - time slot deviation ] in this paper we studied the problem of load balancing in smart grids . we proposed algorithms to obtain schedules by varying the number of flexible devices and the inconvenience levels , and then identified the level of compromise between the grid objective of load balancing and the user convenience levels . we showed that by allowing only a small portion of the activities to become flexible , users can contribute significantly towards load balancing due to the aggregation effect . similarly , letting the scheduling of activities deviate just a few hours from their preferred time slots can also significantly improve load balancing for the grid . more practical system and load models will be used in future work to quantify these results . it is also interesting to investigate what kind of incentives can be provided by the grid to encourage users to make their loads flexible . this research is partly supported by sutd - zju / res/02/2011 , international design center , eirp , sse - lums via faculty research startup grant , and fundamental research funds ( no . 2012hgbz0640 ) . mohsenian - rad , v. w. s. wong , j. jatskevich , r. schober , `` optimal and autonomous incentive - based energy consumption scheduling algorithm for smart grid , '' in proc . _ ieee pes conference on innovative smart grid technologies _ , jan. 2010 . s. caron and g. kesidis , `` incentive - based energy consumption scheduling algorithms for the smart grid , '' in proc . _ 1st ieee international conference on smart grid communications ( smartgridcomm ) _ , pp . 391 - 396 , 2010 . s. huang , j. xiao , j. f. pekny , g. v. reklaitis , and a. l. liu , `` quantifying system - level benefits from distributed solar and energy storage , '' _ journal of energy engineering _ , vol . 138 , issue 2 , jun . 2012 .
the purpose of this paper is to study the conflicting objectives between the grid operator and consumers in a future smart grid . traditionally , customers in electricity grids have different demand profiles and it is generally assumed that the grid has to match and satisfy the demand profiles of all its users . however , for system operators and electricity producers it is usually most desirable , convenient and cost effective to keep electricity production at a constant rate . the temporal variability of electricity demand forces power generators , especially load following and peaking plants , to constantly move electricity production away from a steady operating point . these deviations from the steady operating point usually impose additional costs on the system . in this work , we assume that the grid may propose certain incentives to customers who are willing to be flexible with their demand profiles , which can allow the generating plants to operate at a steady state . in this paper we aim to compare the tradeoffs that may occur between these two stakeholders . from the customers ' perspective , adhering to the proposed scheduling scheme might lead to some inconvenience . we thus quantify the customers ' inconvenience versus the deviation from an optimal schedule set by the grid . finally , we investigate the trade - off between the grid 's load balancing objective and the customers ' preferences .
some quotes in the many worlds literature suggest a belief that one can derive the canonical structure from the hamilton operator taken alone , given as an abstract linear operator in some hilbert space , without any additional structure . for example , tegmark describes the construction of a `` preferred basis '' in many worlds : `` this elegant mechanism is now well - understood and rather uncontroversial [ ] . essentially , the position basis gets singled out by the dynamics because the field equations of physics are local in this basis , not in any other basis . '' this is ( as indicated by the `` essentially '' ) an oversimplification : the decoherence - based construction considered there depends not only on the dynamics ( the hamilton operator ) , but also on some `` subdivision into systems '' , a tensor product structure , as can easily be seen in the quoted papers by zurek : `` one more axiom should [ be ] added to postulates ( i ) - ( v ) : ( o ) the universe consists of systems . '' but some comments made by zurek suggest that he shares the belief that physics is completely defined by the hamilton operator as well : `` both the formulation of the measurement problem and its resolution through the appeal to decoherence require a universe split into systems . yet , it is far from clear how one can define systems given an overall hilbert space of everything and the total hamiltonian . '' `` [ a ] compelling explanation of what are the systems how to define them given , say , the overall hamiltonian in some suitably large hilbert space would be undoubtedly most useful .
'' indeed , the problem of `` how '' to define these systems seems to assume , at least implicitly , that these systems can be defined given the hamiltonian . then , based on the decoherence technique , the preferred basis can be defined as well . the following quote suggests that vaidman shares this belief too : `` i believe that the decomposition of the universe into sensible worlds is , essentially , unique . the decomposition , clearly , might differ due to coarse or fine graining , but to have essentially different decompositions would mean having a multi - meaning escher - type picture of the whole universe continuously evolving in time . ''
if we interpret the `` decomposition into sensible worlds '' as something based on the decoherence - constructed `` preferred basis '' , the uniqueness of this decomposition implies the uniqueness of the `` preferred basis '' as well . schlosshauer talks about `` the physical definition of the preferred basis derived from the structure of the unmodified hamiltonian as suggested by environment - induced selection '' , which also suggests that he shares the belief . last but not least , let us quote brown and wallace , who discuss the possible non - uniqueness of the `` preferred basis '' picked out by decoherence : `` granted that decoherence picks out a quasi - classical basis as preferred , what is to say that it does not also pick out a multitude of other bases very alien with respect to the bases with which we ordinarily work , perhaps , but just as preferred from the decoherence viewpoint . such a discovery would seem to undermine the objectivity of everettian branching , leaving room for the bohmian corpuscle to restore that objectivity .
'' and present the following response as preferable : `` granted that we can not rule out the possibility that there might be alternative decompositions , and that this would radically affect the viability of the everett interpretation : well , right now we have no reason at all to suppose that there actually are such decompositions . analogously , logically we can't absolutely rule out the possibility that there's a completely different way of construing the meaning of all english words , such that they mostly mean completely different things but such that speakers of english
still ( mostly ) make true and relevant utterances . such a discovery would radically transform linguistics and philosophy , but we don't have any reason to think it will actually happen , and we have much reason to suppose that it will not . to discover one sort of higher - level structure in microphysics ( be it the microphysics of sound - waves or the micro - physics of the wave - function ) is pretty remarkable ; to discover several incompatible structures in the same bit of microphysics would verge on the miraculous . '' the aim of this paper is to show that this miracle happens : the theory of the korteweg - de vries ( kdv ) equation gives nice counterexamples to this thesis . if a potential is a solution of the kdv equation , then the operators for different appear to be unitarily equivalent , despite defining different physics . thus , the physics of canonical quantum theories is not completely defined by the hamilton operator alone . this fact seems fatal for the idea of deriving a preferred basis , using decoherence techniques , from the hamilton operator taken alone . one needs an additional structure , be it the tensor product structure related to the `` decomposition into systems '' or something else , which has to be postulated . we consider the question whether this construction could , nonetheless , be used as a foundation of quantum theory , as a replacement for simply postulating the configuration space . we argue that this construction of the
preferred basis combines the disadvantages of postulated structures ( lack of explanatory power ) and emergent structures ( uncertainty , dependence from other structures , especially dynamics ) , and , therefore , should be rejected in favour of the canonical way to postulate the configuration space as a non - dynamical structure , as done in canonical quantum theories as well as pilot wave theories .thus , the `` derivation '' of the `` preferred basis '' based on decoherence techniques seems useless in the domain of fundamental physics .it has it s useful applications in situations where we already have ( as in the copenhagen interpretation or in pilot wave theories ) a classical part , which defines the decomposition into systems which one needs to derive a decoherence - preferred basis .some clarification of how we interpret this belief seems useful .canonical quantum theories are defined in a more or less standard way : first , one defines the kinematics by defining some hilbert space with some set of canonical operators , with commutation relation = -i\hbar\delta^j_i$ ] on , or , equivalently , to postulate some configuration space with coordinates so that and . then , in a second step , one defines the dynamics by postulating the schrdinger equation for some hamilton operator , usually of the form implicit part of the definition of the canonical operators , is their identification with classical observables .it is hard to press this part of the definition into some formal property it depends on the particular theory .the point is that the definition is only complete if we know what it means to measure the configuration it is some procedure , usually known from the corresponding classical theory .the details of this procedure do not matter in a general discussion .but in order to apply the theory to make concrete predictions , one has to know how to measure the and . without this additional informationthe physical definition of the theory is not complete .the quantum theory predicts the result of this measurement for a state as . butthis information would be useless if we do nt know how to measure . given this physical meaning of , it seems , at a first look , completely unreasonable to think that the physics is completely defined by the hamilton operator alone .nobody would apply , for example , some unitary transformation to and , but not to , and , but nonetheless claim that the physics remain unchanged .everybody knows that one has to apply the same unitary transformation to and as well to preserve physics .now , with our interpretation of the quotes , we are in no way suggesting that the authors do not recognize this .the idea that the hamilton operator alone is sufficient is a different one . 
given the very special form of the hamilton operator , one can imagine that it may be possible to _ reconstruct _ the configuration space more or less uniquely given as an abstract operator on some abstract hilbert space .last but not least , the straightforward ideas to construct counterexamples fail : one can apply coordinate transformations , but these do not change the physics , and leave the abstract configuration space unchanged .one can consider a canonical rotation , , but this gives a very strange nonlocal operator of type , which forbids a physical interpretation of the new in terms of a configuration .therefore , one can conclude that there are only few sufficiently nice choices of which allow a meaningful interpretation of the as configuration space coordinates , and one could hope or expect that these few choices lead , at least approximately , to physically equivalent theories . in this case, it would be indeed the hamilton operator taken alone , as an abstract operator on a hilbert space , which would be sufficient to define the physics completely .but this expectation is false , as we will show below .a really beautiful one - dimensional example of different representations follows from the theory of solutions of the korteweg - de vries ( kdv ) equation : if the function is a solution of the korteweg - de vries equation then the operators for different are unitarily equivalent . indeed , as has been found by lax ( ) , the kdv equation is equivalent to the operator equation .\ ] ] for the self - adjoint operator as one can easily check . but this type of operator equations defines , for a self - adjoint operator , a unitary evolution of .indeed , this is simply the analogon of the heisenberg equation for the `` hamilton operator '' , applied to .the unitary transformation we need is defined by the equation for results about the existence of solutions of the kdv equation see , for example , .this representation leaves and invariant but changes and by resp .this is not yet exactly our point .but , given the unitary operators , we can also consider another equivalent representation : [ hfixed ] for a given hamilton operator , with as defined by , there exist canonical operators , , so that the representation of in terms of is given by .indeed , it is sufficient to define the operators , by and . 
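the unitary - equivalence claim can be checked numerically . the sketch below ( ours , not from the paper ) uses the standard convention u_t - 6 u u_x + u_xxx = 0 , whose one - soliton solution u(x , t) = -2 sech^2(x - 4 t) gives a schrodinger operator -d^2/dx^2 + u with a single bound state at energy -1 ( these constants are assumptions , since the paper 's exact convention is elided here ) . it verifies that the finite - difference operators built from the soliton at two different times have the same lowest eigenvalue , even though the potentials sit in different regions :

```python
# a numerical check (ours, not from the paper) of the unitary-equivalence claim:
# Schrodinger operators -d^2/dx^2 + u(x,t) built from a KdV soliton at different
# times share their spectrum, although the potential sits in different places.
# assumed convention: u_t - 6 u u_x + u_xxx = 0 with one-soliton solution
# u(x,t) = -2 sech^2(x - 4 t), whose single bound state has energy -1.
import numpy as np

def soliton(x, t):
    """One-soliton KdV potential (assumed convention), centred at x = 4 t."""
    return -2.0 / np.cosh(x - 4.0 * t) ** 2

def hamiltonian(x, u):
    """Dense finite-difference matrix for -d^2/dx^2 + u with Dirichlet boundaries."""
    dx = x[1] - x[0]
    n = len(x)
    h = np.zeros((n, n))
    np.fill_diagonal(h, 2.0 / dx**2 + u)
    idx = np.arange(n - 1)
    h[idx, idx + 1] = -1.0 / dx**2
    h[idx + 1, idx] = -1.0 / dx**2
    return h

x = np.linspace(-40.0, 40.0, 1200)
for t in (0.0, 2.0):
    evals = np.linalg.eigvalsh(hamiltonian(x, soliton(x, t)))
    print(f"t = {t}: lowest eigenvalue = {evals[0]:.4f} (potential centred at x = {4 * t})")
# both runs report approximately -1: the spectrum, and hence the operator up to
# unitary equivalence, is the same, while the potential, i.e. the physics
# expressed in a fixed position basis, is plainly different.
```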
in this case , the unitary transformation , applied to the operators , gives , thus , the representation of in terms of is .it is worth to note that the hamiltonian in this theorem does not depend on .only the operators depend on .this has the consequence that the representation in terms of these -depending operators depends on too .but is , nonetheless , the same for all .thus , given the operator alone , we can not reconstruct the operators , uniquely .the operators , give equally nice candidates for canonical variables for : the representation of in these operators has the same canonical form , and the potential functions in this standard form are as nice and well - behaved as the original .two - soliton - solution of the kdv equation for different values of the evolution parameter .picture taken from , scaledwidth=80.0% ] let s clarify if the different potentials really define different physics .this seems obvious , if one looks at particular examples , like in fig .[ fig : solitons ] .the two sharply localized part of the solutions in the first and last picture of figure [ fig : solitons ] are so - called solitons , special solutions , which , taken alone , have the exact form and move with velocity .their spectrum is defined by a single eigenvalue , with an eigenfunction localized in the same domain .some superpositions of the two eigenstates would be , in one case ( first and last picture ) , clearly delocalized , in another one ( upper right picture ) they are all localized in the same region . an experiment which would allow us to measure a phase difference between the reflection coefficients of the horizontal and vertical mirrors . for reflection angles close enough to , only the one - dimensional reflection coefficient in orthogonal direction matters .thus , we can put the different one - dimensional potentials for different into the mirrors , extended trivially in the other direction , say , a localized one at the bottom , and one with two solitons on the right.,scaledwidth=60.0% ] we can also consider the scattering matrix .the inverse scattering method ( developed by , see ) for solving the kdv equation gives the following explicit result for the one - dimensional scattering matrix : one of the two coefficients of the scattering matrix , namely , appears to be an integral of motion of the kdv equation .instead , the reflection coefficient depends explicitly on . to construct an experiment which allowsthe measurement of such differences is not difficult ( see fig . [fig : reflection ] ) . if not a different scattering matrix , what else defines different physics ?the example we have given is one - dimensional . one can consider a straightforward generalization to higher dimension by considering hamilton operators of the form with different one - dimensional potentials . but all we obtain in this way are only non - interacting degrees of freedomthis seems to leave some hope for the case of non - trivially interacting hamiltonians in higher dimension .the interactions between different degrees of freedom could , possibly , allow the choice of a preferred basis .this is , essentially , the way used in the decoherence - based approach for the construction of a preferred basis .one starts with a `` decomposition into systems '' : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(o ) the universe consists of systems. 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ that means , a tensor product structure on the hilbert space of everything .in general , each factor interacts with it s environment . in particular , if has the standard form the interaction hamiltonian is .the preferred basis is , in a simplified version,[multiblock footnote omitted ] the one which is measured by the interaction hamiltonian . in case of as the interaction hamiltonian between and its environment , it is which is measured by the environment .thus , we can recover the operator on by taking into account the interaction with the environment .the one - dimensional counterexample , as well as the straightforward non - interacting examples , could be interpreted as irrelevant exceptions , which play no role in the multi - dimensional interacting case .unfortunately , this construction depends on the predefined tensor product structure .and this tensor product structure is not unique : [ th : tensorproduct ] there exists a hamilton operator in a hilbert space such that there exist different tensor product structures with canonical variables and on the factor spaces resp . so that has in all of them the standard canonical form with a non - trivial interaction potential , which depends non - trivially on .this can be easily seen in a minor variant of the straightforward multidimensional extension of the kdv example .let s start with a simple degenerated two - dimensional hamilton operator similar to theorem [ hfixed ] , we choose as well as as fixed , but as depending on .now , we define the tensor product structure we need by in these variables , the interaction potential is already nontrivial .but , it has yet the same nice standard canonical form , and as in theorem [ hfixed ] , the resulting potential is of comparable nice quality for different .the tensor product structure depends on . a nice implicit way to seethis is to use the fact that the simplified version of decoherence already allows the unique derivation of the positions as the decoherence - preferred observables , for the given tensor product structure .but , on the other hand , the result is obviously not unique .this contradiction disappears once we recognize that the tensor product structure is not unique .but the -dependence of the tensor product structure can be seen directly as well : if the tensor product structure would be the same for different , we would be able to express as a function .but an attempt to express in this way fails for a general there is no chance to get rid of the dependence on : let s emphasize again that the operator does not depend on . only the operators depend on .therefore , also the representation of in terms of these operators depends on .but the operator itself has not only the same spectrum , but is simply the same for different . as a consequence, one should give up the hope expressed by zurek : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` [ a ] compelling explanation of what are the systems how to define them given , say , the overall hamiltonian in some suitably large hilbert space would be undoubtedly most useful . 
'' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ already in the two - dimensional case there is no unique way to reconstruct a physically reasonable , nice tensor product structure , or `` decomposition into systems '' , from a given hamilton operator taken alone .but , maybe the decoherence - based reconstruction of the configuration space basis is , nonetheless , worth something ?last but not least , even if it depends on some other structure does nt it , nonetheless , explain something important about the fundamental nature of the configuration space ?we do nt think so , and the aim of this section is to explain why .note that we do not want to question at all that there are lot s of useful non - fundamental applications of this construction applications where a subdivision into systems is defined by the application .the systems in these applications will be various measurement instruments and state preparation devices , various parts of the environment , and the quantum system which is interesting in the particular application .the preferred basis , constructed in this way , is also application - dependent . while it may be very important to find such a basis for a particular application , these constructions seem irrelevant in considerations of the foundations of physics .a variant is to start from a tensor product decomposition given by the fundamental , postulated configuration space .the resulting decoherence - preferred basis may be different from the position basis and better suited for the consideration of the classical limit .but this would be a non - fundamental application as well , with no relevance for the foundations of quantum theory , which is , in this variant , postulated in the usual way .the only variant which seems relevant for fundamental physics is to replace the a - priori definition of the configuration space in canonical quantum theories as well as pilot wave theories by the decoherence - based construction of the preferred basis , based on some fundamental tensor product structure . to evaluate this replacement, we have to compare it with the standard alternative to postulate the configuration space as a predefined , non - dynamical structure .first , the above competitors depend on predefined , non - dynamical structures , which are introduced into the theory in an axiomatic way .the standard approach postulates the configuration space itself , and the decoherence - based construction postulates a tensor product structure . in this sense , they are on equal footing .but the resulting structure the configuration space is , in the standard approach , a predefined , non - dynamical object . instead , in the decoherence - based construction, it depends on dynamics .this dependence of on dynamics has at least one obvious disadvantage : we can no longer define the dynamics in the canonical way as which uses simple and natural structures on the laplace operator on , as well as the special subclass of multiplication operators in , and , in addition , some special function like .indeed , such a definition of in terms of would become circular .thus , we loose a nice and simple way to define the dynamics of the theory. 
we would have to define the dynamics in some other way .we can expect that this other way is more complex , and less beautiful .thus , for the emergent , derived character of we have to pay . in itself , this is not untypical for emergent objects : we have to pay some costs for the possibility to explain them . in a previous theory , they have been postulated as fundamental , simple , independent objects , with some nice , well - defined properties . in the new theory , which derives them as emergent objects , they become more complex , often with uncertain boundaries , and they depend on other objects and structures .nonetheless , the special character of the loss in our case seems untypical even for emergent objects .usually , the structures which depend on the objects which now become emergent , are or of some higher , emergent level already in the previous theory , or they become emergent together with these objects in the new theory .i do nt know of an example of an object which depends on another object in the old theory , and then the other object becomes emergent , but the object itself remains fundamental , as it would be the case for .( of course , this could happen , but it would indicate that the dependencies were wrong already in the old theory . )thus , i would characterize the costs related with the dependence of on the dynamics as higher than usual for emergent objects .let s consider now what we gain .usually , if an object becomes emergent which was previously fundamental , our gain is explanatory power .do we have such a gain in our case ? here , we have to take into account that we need another , postulated structure the tensor product structure to construct .thus , we can `` explain '' the configuration space only in a relative sense , in comparison with the unexplained , postulated tensor product structure .now , explaning one structure in terms of another may also give large explanatory power .last but not least , in some sense all our more fundamental theories are of this type they explain the previously postulated objects and structures in terms of some more fundamental , but also postulated , objects and structures .but we would talk about explanatory power only if the new , more fundamental structure has some advantages in simplicity , beauty , generality , or whatever else .what is the situation in our case ?at least i can not see any advantage .instead , i see a lot of disadvantages : first , we have simple examples of quantum theories which have natural configuration spaces but do not have a ( similarly natural ) tensor product structure : for example , finite - dimensional quantum theories with prime dimension do not have nontrivial tensor product structures .indeed , from follows , thus the only possible tensor product structure for a space with prime dimensnion is the trivial one , which is worthless because it does not allow the start of the decoherence procedure .then we have spaces of identical particles , which are factor spaces of a tensor product , but do not have their own natural to an arbitrary basis of . ]tensor product structure .last but not least , for topologically sufficiently non - trivial manifolds , starting with , there exists no decomposition into factor - manifolds , and therefore no natural tensor product structure .but for all these hilbert spaces we have natural configuration spaces . 
then , the tensor product structure is often less symmetric in comparison with the configuration space .the simplest example is the tensor product structure of of one - particle theory , which destroys rotational symmetry .in addition , the tensor product structure based on points in field theory requires an identification of points in different time slices , if considered as fixed over time , destroying galilean or relativistic symmetry .these arguments seem sufficient to argue that a tensor product structure , as a fundamental object , is worse than a configuration space .thus , an explanation of in terms of a more fundamental tensor product structure has no explanatory power at all . the gain which we usually want to reach by deriving structures previously considered to be fundamental , namely explanatory power ,can not be reached in this approach .moreover , this can not be hailed in some way , say , by deriving the tensor product structure from something else .the hamilton operator alone is not sufficient , as shown by our counterexample .thus , one needs some additional structure anyway .therefore , the main line of our argumentation remains intact .the construction combines only disadvantages : the lack of explanatory power of postulated objects , with the uncertainty and dependency of emergent objects .what remains intact is also the dependence of on the dynamics , and therefore the very special loss of the possibility to define the dynamics as on .what also remains is the simplicity and the general and very natural character of postulating a configuration space .therefore , whatever the new additional structure , we can not expect a large gain in explanatory power .given this situation , there seem to be no gains but only losses , in a construction which constructs the configuration space postulated in canonical quantum theories and pilot wave theories using decoherence techniques , starting from a fundamental tensor product structure ( or some replacement ) .we have shown that it is not the hamilton operator alone which defines the physics of quantum theories .in addition , one needs the canonical configuration space , or some similar structure , which connects this hamilton operator with observable configurations . 
as a consequence , hopes to derive the configuration space basis using decoherence techniques from a hamilton operator taken alone have to be given up . such constructions necessarily depend on some additional structure , like a `` subdivision into systems '' , which has to be postulated . the derivation of the configuration space basis from such additional structures combines the disadvantages of predefined structures ( lack of explanatory power ) and emergent structures ( dependence on dynamics ) without giving any advantages , and thus loses in comparison with a simply postulated configuration space . whether these arguments against a decoherence - based construction of the preferred basis are of any relevance for everettians is a completely different question . they may also completely ignore the non - uniqueness and follow some reasoning like this : `` suppose that there were several such decompositions , each supporting information - processing systems . then the fact that we observe one rather than another is a fact of purely local significance : we happen to be information - processing systems in one set of decoherent histories rather than another . '' indeed , once one introduces many worlds anyway , some more of them do not matter anymore . all i can say about this is to recommend a further `` improvement '' in this direction : to assign reality not only to the state vector of our multiverse , but to all other states as well , and , to be consistent , to all hamilton operators as well . another possibility would be to throw away the `` solution of the preferred basis problem '' and instead to use the same predefined configuration space as used in canonical quantization and pilot wave theories . given the arguments in section [ sec : comparison ] , this would be an improvement .
for pilot wave interpretations , the clarification that the configuration space is necessary to fix the physics of a canonical quantum theory is clearly helpful . it weakens a quite common argument that the choice of the configuration space in pilot wave interpretations is artificial , like `` the artificial asymmetry introduced in the treatment of the two variables of a canonically conjugated pair characterizes this form of theory as artificial metaphysics '' ( , as quoted by ) , or `` the bohmian corpuscle picks out by fiat a preferred basis ( position ) '' . instead , recognizing that the configuration space is part of the definition of the physics gives more power to an old argument in favour of the pilot wave approach , made already by de broglie at the solvay conference 1927 : `` it seems a little paradoxical to construct a configuration space with the coordinates of points which do not exist . '' thanks to ch . roth for correcting my poor english . de broglie , l. : la nouvelle dynamique des quanta , in `` electrons et photons : rapports et discussions du cinquième conseil de physique '' , ed . j. bordet , gauthier - villars , paris , 105 - 132 ( 1928 ) , english translation in : bacciagaluppi , g. , valentini , a. : `` quantum theory at the crossroads : reconsidering the 1927 solvay conference '' , cambridge university press , and arxiv:quant-ph/0609184 ( 2006 ) freire jr . , o. : science and exile : david bohm , the hot times of the cold war , and his struggle for a new interpretation of quantum mechanics , historical studies on the physical and biological sciences 36(1 ) , 1 - 34 , arxiv:quant-ph/0508184 ( 2005 ) pauli , w. : remarques sur le problème des paramètres cachés dans la mécanique quantique et sur la théorie de l'onde pilote , in andré george , ed . , louis de broglie physicien et penseur ( paris , 1953 ) , 33 - 42 schlosshauer , m. : decoherence , the measurement problem , and interpretations of quantum mechanics , rev . mod . phys . 76 , 1267 - 1305 ( 2004 ) , arxiv:quant-ph/0312059 vaidman , l .
: on schizophrenic experiences of the neutron or why we should believe in the many - worlds interpretation of quantum theory , international studies in philosophy of science 12 , 245 - 261 , arxiv:quant-ph/9609006 ( 1998 ) brown , h.r . , wallace , d. : solving the measurement problem : de broglie - bohm loses out to everett , foundations of physics , vol . 35 , no . 4 , 517 ( 2005 ) , arxiv:quant-ph/0403094 zurek , w.h . : decoherence , einselection , and the existential interpretation , philos . trans . r . soc . london , ser . a 356 , 1793 - 1821 , arxiv:quant-ph/9805065 ( 1998 ) zurek , w.h . : relative states and the environment : einselection , envariance , quantum darwinism , and the existential interpretation , arxiv:0707.2832 and los alamos preprint laur 07 - 4568 ( 2007 )
in the many worlds community there seems to exist a belief that the physics of a quantum theory is completely defined by its hamilton operator given in an abstract hilbert space , and especially that the position basis may be derived from it as the preferred basis using decoherence techniques . we show , by an explicit example of non - uniqueness taken from the theory of the kdv equation , that the hamilton operator alone is not sufficient to fix the physics . we need the canonical operators as well . as a consequence , it is not possible to derive a `` preferred basis '' from the hamilton operator alone , without postulating some additional structure like a `` decomposition into systems '' . we argue that this makes such a derivation useless for fundamental physics .
one of the most important tasks for a quantum computer would be to efficiently obtain eigenvalues and eigenvectors of high - dimensional matrices . it has been suggested that the quantum phase estimation algorithm ( pea ) can be used to obtain eigenvalues of a hermitian matrix or hamiltonian . for a quantum system with a hamiltonian , a phase factor , which encodes the information of eigenvalues of ,is generated via unitary evolution . by evaluating the phase , we can obtain the eigenvalues of .the conventional pea consists of four steps : preparing an initial approximated eigenstate of the hamiltonian , implementing unitary evolution operation , performing the inverse quantum fourier transform ( qft ) , and measuring binary digits of the index qubits .the pea is at the heart of a variety of quantum algorithms , including shor s factoring algorithm .a number of applications of pea have been developed , including generating eigenstates associated with an operator tm , evaluating eigenvalues of differential operators , and it has been generalized using adaptive measurement theory to achieve a quantum - enhanced measurement precision at the heisenberg limit .the pea with delays considering the effects of dynamical phases has also been discussed . the implementation of an iterative quantum phase estimation algorithm with a single ancillary qubit is suggested as a benchmark for multi - qubit implementations .the pea has also been applied in quantum chemistry to obtain eigenenergies of molecular systems .this application has been demonstrated in a recent experiment .moreover , several proposals have been made to estimate the phase of a quantum circuit , and the use of phase estimations for various algorithms , including factoring and searching .the conventional pea is only designed for finding eigenvalues of either a hermitian or a unitary matrix . in this paper, we propose a measurement - based phase estimation algorithm ( mpea ) to evaluate eigenvalues of _non_-hermitian matrices .this provides a potentially useful generalization of the conventional pea .our proposal uses ideas from conventional pea , frequent measurement , and techniques in one - qubit state tomography .this proposal can be used to design quantum algorithms apart from those based on the standard unitary circuit model .the proposed quantum algorithm is designed for systems with large dimension , when the corresponding classical algorithms for obtaining the eigenvalues of the non - unitary matrices become so expensive that it is impossible to implement on a classical computer .the structure of this work is as follows : in sec .[ const ] we introduce how to construct a non - hermitian evolution matrix for a quantum system . in sec .[ mpea ] , we present the measurement - based phase estimation algorithm , introducing two methods for obtaining the complex eigenvalues of the non - hermitian evolution matrix .we give two examples for the application of mpea and discuss how to construct hamiltonian for performing the controlled unitary operation in sec .[ example ] . 
in sec .[ diss ] , we discuss the success probability of the algorithm and the efficiency of constructing the non - hermitian matrix .we close with a conclusion section .now we describe how to construct non - unitary matrices on a quantum system .a bipartite system , composed of subsystems _ a _ and _ b _ , evolves under hamiltonian is the hamiltonian of subsystem _ a_(_b _ ) and is their interaction .we prepare the initial state of subsystem _ a _ in its pure state and the initial state of subsystem _ b _ in an arbitrary state .then at time , the state of the system is .let the system evolve under the hamiltonian for a time interval , if subsystem _ a _ is subject to a projective measurement applied at time interval , this is equivalent to driving subsystem _b _ with an evolution matrix evolution matrix is in general neither unitary nor hermitian . the hamiltonian of the whole quantum system can be spanned as : eigenenergies , and the corresponding eigenvectors } and \{ } of hilbert spaces of subsystems _ a _ and _ b _ , which are of dimensions and , respectively , and . using the bases for _a _ and _ b _ , we have the evolution matrix on subsystem _ b _ , after the measurement performed on subsystem _ a _ at time interval , becomes more generally , we can construct different evolution matrices by performing measurements on subsystem _ a _ with different time intervals and/or different measurement bases .for example , by sequentially performing projective measurements with time intervals , , , an evolution matrix constructed .we can also combine unitary evolution matrices with the non - unitary transformations on subsystem _ b _ to construct some desired evolution matrices .for the bipartite system , set the initial state of the system in a separable state let the system evolve under the hamiltonian in eq .( ) .then after performing successful projective measurements on subsystem _ a _ with time intervals , the evolution on the hilbert space of subsystem _ b _ is driven by ^{m} ] is dominated by a single term , provided the largest eigenvalue is unique , discrete , and non - degenerate . in the limit of large and finite , tends to a pure state , independent of the initial ( mixed ) state of subsystem _b_. the final state of dominated by , and this outcome is found with probability state of subsystem _ b _ evolves to , after performing a number of operations of . then we can evaluate by resolving the phase of the state .if we prepare the initial state of subsystem _ b _ in a pure initial state that is close to an eigenstate of the matrix , the state of the subsystem _ b _ can evolve to other eigenstates of . then we can also obtain the corresponding eigenvalues of .based on the above analysis , we suggest a measurement - based phase estimation algorithm for evaluation of the eigenvalues of the matrix . as in the circuit shown in fig .( ) , three quantum registers are prepared . 
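Before the circuit is laid out, the following small numerical sketch illustrates the construction just described: a hypothetical two-level subsystem a coupled to a three-level subsystem b under a randomly drawn Hamiltonian (the dimensions, the time step and all numbers are assumptions made for illustration, not values from the paper). It builds the measurement-induced evolution operator on b and checks that, when the largest-magnitude eigenvalue is unique, repeated successful measurements drive b towards the corresponding eigenvector:

```python
import numpy as np

rng = np.random.default_rng(1)
d_a, d_b, tau = 2, 3, 0.7

# Random hermitian hamiltonian on the joint space, ordering A (x) B.
m = rng.standard_normal((d_a * d_b, d_a * d_b)) + 1j * rng.standard_normal((d_a * d_b, d_a * d_b))
h = (m + m.conj().T) / 2

# Joint unitary for one time step tau, via the spectral decomposition of h.
evals, evecs = np.linalg.eigh(h)
u = evecs @ np.diag(np.exp(-1j * evals * tau)) @ evecs.conj().T

# Effective (non-unitary) evolution on B: prepare A in |a>, evolve, project A back onto |a>.
a_ket = np.zeros((d_a, 1)); a_ket[0, 0] = 1.0
p = np.kron(a_ket, np.eye(d_b))          # shape (d_a*d_b, d_b)
v_b = p.conj().T @ u @ p                 # V_B(tau) = <a| U(tau) |a>, acting on B alone

# Dominant eigenvector of V_B (largest-magnitude eigenvalue).
lam, w = np.linalg.eig(v_b)
k = np.argmax(np.abs(lam))
v_dom = w[:, k] / np.linalg.norm(w[:, k])

# Iterate rho -> V rho V^dag / tr(.), tracking success probability and fidelity.
rho = np.eye(d_b) / d_b                  # start from the maximally mixed state
cum_prob = 1.0
for m_step in range(1, 16):
    unnorm = v_b @ rho @ v_b.conj().T
    p_step = np.real(np.trace(unnorm))   # probability that the m-th measurement succeeds
    cum_prob *= p_step
    rho = unnorm / p_step
    fid = np.real(v_dom.conj() @ rho @ v_dom)
    if m_step in (1, 5, 10, 15):
        print(f"m={m_step:2d}  cumulative success prob={cum_prob:.3e}  "
              f"fidelity with dominant eigenvector={fid:.4f}")
```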
from top to bottom : an index register , a target register and an interacting register .the index register is a single qubit , which is used as control qubit and to readout the final results for the eigenvalues ; the target register is used to represent the state of subsystem _ b _ ; and the interacting register represents the state subsystem _the initial state of the circuit is prepared in the state subsystem _ a _ in a pure state , and subsystem _ b _ in state .the construction of the controlled evolution matrix on the target register is achieved by implementing the controlled unitary ( - ) transformation for the whole quantum system and successfully performing the projective measurement on the interacting register with time interval .note here for the unitary transformation , we set such that projective measurements are performed successfully on subsystem at the time interval , while the unitary transformation of the whole system evolves for time period . after performing successful periodic measurements on the interacting register with time intervals , as shown in fig .( ) , the state of the system is transformed to ^{m}\rho _ { b}\bigl[v_{b}^{\dagger } ( \tau ) % \bigr]^{m}\biggr\ } \nonumber \\ & & \otimes |\varphi _{ a}\rangle \langle \varphi _ { a}|.\end{aligned}\]]the dominant term of this is % qubit is dominated by .\]]in general , is a complex number and can be written as .\]]we can obtain by resolving the phase factor two approaches can be used to resolve : using single - qubit quantum state tomography ( qst ) , and using the measured quantum fourier transform ( mqft ) combined with projective measurements on a single qubit .the details of these two approaches are given below .quantum state tomography can fully characterize the quantum state of a particle or particles through a series of measurements in different bases . in the approach using qst to resolve the eigenvalue of the matrix , we prepare a large number of identical copies of the state on the index qubit , as shown in eq .( ) , by running the mpea circuit a number of times .then the value of can be obtained by determining the index qubit state . the state of the index qubit in eq .( ) can be written as |1\rangle \bigr ) .\]]in the qst approach , we perform a projective measurement on the index qubit in the basis to obtain the probability of finding the index qubit in state , thus obtaining the value of . with the knowledge of , then perform a rotation around the -axis and a measurement in the basis of the pauli matrix on the index qubit , we can obtain the observable thus obtain the value of .the measurement errors of qst , from counting statistics , obey the central limit theorem . to obtain more accurate results, we have to prepare a larger ensemble of the single qubit states . )shows the circuit for the mpea using qst . from top to bottom , an index register , a target register and an interacting register are prepared in the states , , and , respectively .the index register is a single qubit used as control qubit ; the target register is used to represent the state of subsystem _ b _ ; and the interacting register is used to represent the state of subsystem _ a , _ which interacts with _ b_. part ( ) shows the circuit for performing projective measurements with period . in the circuit , the unitary transformation , we set such that projective measurements are performed successfully on subsystem _ a _ separated by a time interval , while the unitary transformation of the whole system evolves for time . 
] in the second approach , we use the techniques of measured quantum fourier transform and projective measurements to resolve the eigenvalue of the matrix .the phases which encode the eigenvalues of are in general complex numbers ; the inverse qft can be used to resolve the real part of the phase .the imaginary part of the phase factor , , can be obtained by performing single - qubit projective measurements .the details of this method are discussed below . in order to resolve up to binary digits using the inverse qft, one has to construct a series of controlled evolution matrices , - , in successive binary powers , from to . in the mpea , this is done by implementing the - operation on the whole system and performing a series of periodic measurements separated by time intervals for , respectively .the - operation evolves for a time , during which all the measurements are performed successfully on the interacting register .then we can obtain a series of controlled transformation matrices in binary powers , -^{2^{k}}, ] .then we apply the mqft technique to resolve the real part of the phase factor .we therefore obtain the eigenvalue % successful measurements , and the fidelity , , for the target register to be in state , are shown in fig . . the fidelity is defined as is close to one , the success probability is determined by .( ) and fidelity ( ) for versus , the number of successful measurement , for the jaynes - cummings model . ]if we prepare the target register in a pure initial state that is close to an eigenstate of the matrix , by applying mpea , the state of the target register can evolve to other eigenstates of . then we can also obtain the corresponding eigenvalues of .for example , applying mpea to the above system and preparing the target register in state , by performing projective measurements with on the interacting register , the state of the target register would remain in the state .we can retrieve the real part of the phase factor of the corresponding eigenvalue up to an accuracy of , and binary digits , respectively , and obtain the eigenvalues of the matrices as ] , and $ ] , assuming we have already obtained the imaginary part of the eigenvalue of through projective measurements .the true eigenvalue is , which is quite close . to implement a controlled unitary evolution on the mpea circuit , we set the control qubit as a single spin and label it as subsystem _c_. thus , the controlled hamiltonian of the whole system becomes \biggr\}.\end{aligned}\]]this hamiltonian contains three - body interactions and can not be implemented directly .one could decompose the three - body interaction into two - body interactions and then implement the two - body interaction . in general , an arbitrary unitary matrix can be decomposed into tensor products of unitary matrices of and , which correspond to two- and single - qubit operations respectively , and can be implemented on a universal quantum computer . for another example , we consider the axial symmetry model .this is relevant for quantum information processing in solid state and atomic systems .the quantum system is composed of two subsystems _ a _ and _ b _ , where _ b _ contains two non - interacting spins , and subsystem _ a _ contains __ _ _ a single spin interacting with subsystem _ b_. the hamiltonian for the whole system is ,\]]where and are the pauli operators . by performing projective measurements on subsystem_ a _ in the basis of the -eigenvector , then in the basis , we obtain on subsystem _b_. 
if we prepare the initial state of the target register in state , then the fidelity of the target register to be in state is after performing a number of successful measurements on the interacting register .for the case and , the corresponding eigenvalue is .the success probability of the successful measurement on the interacting register versus the number of measurements on the interacting register is shown in fig . . from that figure, we can see that even for successful measurements , we can still have a success probability of . for versus , the number of successful measurements , by using the -eigenstate as the measurement basis for the axial symmetry model . ]on a quantum computer , a unitary matrix can be efficiently represented , i.e. , for a unitary matrix of dimension , only qubits are needed to represent it on a quantum computer . in this paper , we have tried to represent a non - unitary matrix on a quantum system by performing periodic projective measurements . whether an arbitrary matrix can be constructedusing this technique still remains an open problem , and this would be a subject for future study . in the conventional pea ,the phase factor is resolved through a quantum fourier transform . to resolve the binary expansion of the phase , up to binary digits, one has to implement controlled unitary transformations in successive binary powers : - , -- . in the mpea approach of using mqft combined with projective measurements to obtain the eigenvalues of , we need to implement the controlled transformations in successive binary powers : - , -- , and followed by the corresponding mqft circuit as shown in fig . . the controlled transformations - ,- , , - , are achieved by implementing the controlled hamiltonian on the whole system only once and during a time until the successful measurements on the interacting register finish , and performing measurements , i.e. , a series of periodic measurements ( each one separated by the time interval ) for times on the interacting register , respectively .the success probability of the mpea is , where is the fidelity of the state on the target register to be in the eigenstate of after performing successful measurements , and is the probability of performing successful measurements on the interacting register . note that depends on , and also on the initial guess of the state on the target register as shown in eq .( ) .since is close to one as the number of successful measurements increases , the success probability is determined by .it must be emphasized that the present quantum algorithm is designed for systems with large ( the dimension of subsystem ) , when the corresponding classical algorithms for obtaining the eigenvalues of become so expensive that it is impossible to implement on a classical computer .the efficiency of our algorithm _ does _ depend on .note that the success probability of projective measurements on the interacting register decreases exponentially in terms of when .this is not an essential obstacle because this exponential decrease can be overcome by running the algorithm for a number of times to prepare a large but _ fixed _ number of copies of the index qubit state as shown in eq .( ) . 
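To make the tomography step concrete, the classical post-processing amounts to reconstructing a complex amplitude ratio from single-qubit measurement statistics over the prepared copies. The sketch below is an illustrative variant that measures in the three Pauli bases (the z-measurement-plus-rotation protocol described earlier is only paraphrased here, and the target amplitude and sample sizes are invented); the error behaviour it exhibits is the central-limit scaling discussed in the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical index-qubit state (|0> + c|1>)/norm; c plays the role of the
# complex factor carrying the eigenvalue information.
c_true = 0.6 * np.exp(1j * 2.1)
psi = np.array([1.0, c_true]) / np.sqrt(1.0 + abs(c_true) ** 2)
alpha, beta = psi[0], psi[1]

# Exact single-qubit expectation values.
exp_z = abs(alpha) ** 2 - abs(beta) ** 2
exp_x = 2.0 * np.real(np.conj(alpha) * beta)
exp_y = 2.0 * np.imag(np.conj(alpha) * beta)

def estimate_c(n_copies):
    """Estimate c from n_copies measurements in each of the Z, X and Y bases."""
    est = []
    for mean in (exp_z, exp_x, exp_y):
        p_plus = (1.0 + mean) / 2.0                # probability of the +1 outcome
        k = rng.binomial(n_copies, p_plus)
        est.append(2.0 * k / n_copies - 1.0)       # empirical expectation value
    z_hat, x_hat, y_hat = est
    return (x_hat + 1j * y_hat) / (1.0 + z_hat)    # beta/alpha, with alpha real > 0

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    errs = [abs(estimate_c(n) - c_true) for _ in range(200)]
    print(f"copies per basis = {n:>7d}   mean abs error = {np.mean(errs):.4f}")
```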
in the qst approachto obtain , the measurement errors of qst obey the central limit theorem .accurate results can be obtained by preparing a larger ensemble of the single qubit states .the tomographic estimation converges with statistical error that decreases as , where is the number of copies prepared in the qst and is not relevant to .also , in the approach of using single - qubit qst to obtain the eigenvalues of , we prepare a number of copies of the index qubit state as shown in eq .if we have a good initial guess of the eigenstate of , then , as shown in the second example , we can still obtain a high success probability ( ) for the algorithm and this does not require a large .the other eigenvalues of can be obtained by setting the initial state of the target register in a pure state .if the overlap of the initial guess of the eigenstate with the real eigenstate is not exponentially small and is a fixed number , the success probability , , for preparing a index qubit state as shown in eq .( ) is not exponentially small .then each copy of the index qubit state can be prepared in a polynomial number of trials .another issue that needs to be addressed is the efficiency for implementing the projective measurement , which is linked to the efficiency of constructing the non - unitary matrix , therefore connected to the efficiency of the algorithm .since the measurement is a non - unitary process , it can not be implemented deterministically .also , a number of projective measurements are required in mpea , and thus the overall efficiency of the algorithm might be affected . to deal with this problem , we can design a scheme such that subsystem can have a simple structure , containing either a single qubit or a few qubits , by controlling the interaction between the subsystems .then the implementation of the measurement on subsystem will be simple .the measurement performed on does not depend on the qubit number of the subsystem , on which the matrix is constructed .therefore , the measurement on can avoid the exponential scaling with respect to the size of subsystem .note that the corresponding classical algorithms scale as .we have presented a measurement - based quantum phase estimation algorithm to obtain the eigenvalues and the corresponding eigenvectors of non - unitary matrices . in mpea, we implement the unitary transformation of the whole system only once ; the non - unitary matrix is constructed as the evolution matrix on the target register . by performing periodic projective measurements on the interacting register ,the state of the target register is driven automatically to a pure state of the transformation matrix . using single - qubit state tomography and mqft combined with single - qubit projective measurements ,we can obtain the complex eigenvalues of the non - unitary matrix .the success probability of the algorithm and the efficiency of constructing the matrix have been discussed .this algorithm can be used to study open quantum system and in developing other new quantum algorithms .fn acknowledges partial support from darpa , air force office for scientific research , the laboratory of physical sciences , national security agency , army research office , national science foundation grant no .0726909 , jsps - rfbr contract no .09 - 02 - 92114 , grant - in - aid for scientific research ( s ) , mext kakenhi on quantum cybernetics , and funding program for innovative r&d on s&t ( first ) .law thanks the support of the ikerbasque foundation .
we propose a quantum algorithm for finding eigenvalues of non - unitary matrices . we show how to construct , through interactions in a quantum system and projective measurements , a non - hermitian or non - unitary matrix and obtain its eigenvalues and eigenvectors . this proposal combines ideas of frequent measurement , measured quantum fourier transform , and quantum state tomography . it provides a generalization of the conventional phase estimation algorithm , which is limited to hermitian or unitary matrices .
the purpose of this paper is to propose and analyze a method based on the riccati transformation for solving a time dependent hamilton - jacobi - bellman equation arising from a stochastic dynamic optimal allocation problem on a finite time horizon , in which our aim is to maximize the expected value of the terminal utility subject to constraints on portfolio composition .investment problems with state constraints were considered and analyzed by zariphopoulou , where the purpose was to maximize the total expected discounted utility of consumption for the optimal portfolio investment consisting of a risky and a risk - free asset , over an infinite and finite time horizon .it was shown that the value function of the underlying stochastic control problem is the unique smooth solution to the corresponding hjb equation and the optimal consumption and portfolio are presented in a feedback form .she furthermore showed that the value function is a constrained viscosity solution of the associated hjb equation .classical methods for solving hjb equations are discussed by benton in . in , musiela andzariphopoulou applied the power - like transformation in order to linearize the non - linear pde for the value function in the case of an exponential utility function . in the seminal paper karatzas _investigated a similar problem of consumption - investment optimization where the problem is to maximize total expected discounted utility of consumption over time horizon ] , is a given terminal utility function and a given initial state condition of at .the function mapping represents an unknown control function governing the underlying stochastic process . here for denotes the restriction of the control function to the time interval .we assume that is driven by the stochastic differential equation where denotes the standard brownian motion and the functions and are the drift and volatility functions depending on the control function .the parameter represents a constant inflow rate of property to the system whereas is the interest rate .many european pension systems use , representing regular contribution rate to the saver s pension account as a prescribed percentage of their salary .for example , in slovakia , in bulgaria , in sweden ( c.f . ) . throughout the paperwe shall assume that the control parameter belongs to the compact simplex where .it should be noted that the process is a logarithmic transformation of a stochastic process driven by the sde : where with .it is known from the theory of stochastic dynamic programming that the so - called value function \ ] ] subject to the terminal condition can be used for solving the stochastic dynamic optimization problem ( [ maxproblem ] ) ( cf .bertsekas , fleming and soner or bardi and dolcetta ) .if the process is driven by ( [ processx ] ) , then the value function satisfies the hamilton - jacobi - bellman ( hjb ) equation for all subject to the terminal condition ( see e.g. macov and evovi or ishimura and evovi ) . 
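The displayed equations in this part of the excerpt were lost in extraction. Purely for orientation, and written as an assumed generic form rather than a reconstruction of the source, an HJB equation for a controlled one-dimensional diffusion with terminal utility reads:

```latex
% assumed generic form (not a verbatim reconstruction of the source)
\partial_t V(x,t)
  + \max_{\theta \in \triangle}
    \left\{ \mu(x,\theta)\,\partial_x V(x,t)
            + \tfrac{1}{2}\,\sigma^{2}(x,\theta)\,\partial^{2}_{x} V(x,t) \right\} = 0 ,
\qquad (x,t) \in \mathbb{R} \times [0,T) ,
\qquad V(x,T) = u(x) .
```

Here mu(x, theta) and sigma(x, theta) denote the drift and volatility of the controlled process, the maximum is taken over the compact simplex of admissible controls, and u is the terminal utility function.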
as a typical example leading to the stochastic dynamic optimization problem ( [ maxproblem ] ) in which the underlying stochastic process satisfies sde ( [ processx ] ) one can consider a problem of dynamic portfolio optimization in which the assets are labeled as and associated with price processes , each of them following a geometric brownian motion ( cf .merton , browne , bielecki and pliska or songzhe ) .the value of a portfolio with weights is denoted by .it can be shown that satisfies ( [ processyeps ] ) .the assumption corresponds to the situation in which borrowing of assets is not allowed ( ) and .we have and with and where .the terminal function represents the predetermined terminal utility function of the investor .[ rem : merton ] in the case of zero inflow , assumption ( [ processyeps ] ) made on the stochastic process is related to the well - known merton s model for optimal consumption and portfolio selection ( cf .merton ) .however , for merton s model , one has to consider a larger set of constraints for control function .namely , the simplex has to be replaced by a larger set .it is worth to note that all results concerning smoothness of the value function ( see theorem [ smootheness ] ) as well as those regarding existence and uniqueness of classical solutions ( see theorem [ existence ] ) and numerical discretization scheme remain true when is replaced by .following the methodology of the riccati transformation first proposed by abe and ishimura in and later studied by ishimura _ , xia , or macov and evovi for problems without inequality constraints , and further analyzed by ishimura and evovi , we introduce the following transformation : [ rem : ara ] the function can be viewed as the coefficient of absolute risk aversion for the value function , representing the intermediate utility function of an investor at a time ] .this assumption is clearly satisfied for if we consider a function which is an increasing and concave function in the variable .we discuss more on this assumption in section [ sec : existence ] .now , problem ( [ eq_hjb ] ) can be rewritten as follows : where is the value function of the following parametric optimization problem : if the variance function is strictly convex and linear ( as discussed in section [ sec : motivation ] ) , problem ( [ eq_alpha_def ] ) belongs to a class of parametric convex optimization problems ( cf . ) .suppose that the value function satisfies ( [ eq_hjbtransf ] ) and the function is defined as in ( [ eq_varphi ] ) .then is a solution to the cauchy problem for the quasi - linear parabolic equation : = 0 , \quad x\in{\mathbb{r } } , t\in[0,t ) , \label{eq_pdephi_1 } \\ & & \varphi(x , t ) = 1 - u^{\prime\prime}(x)/u^\prime(x ) , \quad x\in{\mathbb{r}}. 
\nonumber\end{aligned}\ ] ] the statement can be easily shown by differentiating ( [ eq_varphi ] ) with respect to and calculating derivatives , , from ( [ eq_hjbtransf ] ) .indeed , as , we have let us denote then and therefore \partial_x v , \\ \partial^2_x\partial_t v & = & [ \partial^2_x g + \partial_x(g ( 1-\varphi ) ) + ( \partial_x g + g ( 1-\varphi ) ) ( 1-\varphi ) ] \partial_x v .\end{aligned}\ ] ] hence and ] .but it means that fulfills the fully nonlinear equation : \partial_x v = 0\ , , \quad v(x , t)=u(x ) .\label{hjb - nonlin}\ ] ] in other words , satisfies hjb equation ( [ eq_hjbtransf ] ) .consequently , it is a solution to hjb equation ( [ eq_hjb ] ) .moreover , equation ( [ hjb - nonlin ] ) is a fully nonlinear parabolic equation which is monototone in its principal part .this way one can deduce that the solution to ( [ hjb - nonlin ] ) is unique . in summary , we have shown that we can replace solving hjb equation ( [ eq_hjb ] ) by solving the auxiliary quasi - linear equation ( [ eq_pdephi_1 ] ) .[ equivalence ] let be a solution to the cauchy problem ( [ eq_pdephi_1 ] ) .then the function given by ( [ transform ] ) is a solution to hjb equation ( [ eq_hjb ] ) .moreover , .the advantage of transforming ( [ eq_hjb ] ) to ( [ eq_hjbtransf])([eq_alpha_def ] ) is that we can define and compute the function in advance as a result of the underlying parametric optimization problem ( either analytically or numerically ) .this can be then plugged into the quasi - linear equation ( [ eq_pdephi_1 ] ) which can be solved for , instead of solving the original fully nonlinear hjb equation ( [ eq_hjbtransf ] ) as well as ( [ eq_hjb ] ) . in this way we do not calculate the value function itself . on the other hand , it is only the optimal feedback strategy which is of investor s interest and therefore is not important , in fact .the optimal strategy can be computed as the unique optimal solution to the quadratic optimization problem ( [ eq_alpha_def ] ) for the parameter values .in the case of the example of a portfolio consisting of assets , we denote the vector of expected asset returns and the covariance matrix of returns which we assume to be symmetric and positive definite . for the portfolio return and variancewe have and . for , ( [ eq_alpha_def ] )becomes a problem of parametric quadratic convex programming over the compact convex simplex . in this section, we shall discuss qualitative properties of the value function for this case . by denote the space of all functions defined on whose -th derivative is lipschitz continuous . by denote the derivative of w.r . to .[ smootheness ] let be positive definite and .then the optimal value function defined as in ( [ eq_alpha_def_quad ] ) is a continuous function .moreover , is a strictly increasing function and where is the unique minimizer of ( [ eq_alpha_def_quad ] ) for .the function is locally lipschitz continuous .first , we notice that the mapping is continuous , which can be deduced directly from basic properties of strictly convex functions minimized over the compact convex set .let us denote the objective function in problem ( [ eq_alpha_def_quad ] ) . since is a continuous function on the compact set , we have in implies the existence of a unique minimizer to ( [ eq_alpha_def_quad ] ) .moreover , is continuous in due to continuity of . applying the general envelope theorem due to milgrom and segal ( * ? ? 
?* theorem 2 ) the function is differentiable on the set .next , we prove that .the function is linear in for any .therefore it is absolutely continuous in for any .again , applying ( * ? ? ?* theorem 2 ) , we obtain therefore , which is strictly positive on .hence is a continuous and increasing function for .local lipschitz continuity of now follows from the general result proved by klatte in ( see also aubin ) . indeed , according to ( * ? ? ?* theorem 2 ) the minimizer function is locally lipschitz continuous in .hence the derivative is locally lipschitz , as well .equation ( [ eq_pdephi_1 ] ) is a strictly parabolic pde , i.e. there exist positive real numbers , such that for the diffusion coefficient of equation ( [ eq_pdephi_1 ] ) the following inequalities hold : these inequalities follow directly from ( [ eq_alphader_vzorec ] ) , which is a quadratic positive definite form on a compact set . with regard to ( [ eq_alphader_vzorec ] ) the function attains its maximum and minimum . [ example - dax ] an illustrative example of the value function having discontinuous second derivative is based on real market data and it is depicted in fig . [fig : alphader_dax ] . in this examplewe consider the german dax index consisting of 30 stocks . based on historical data from august 2010 to april 2012we have computed the covariance matrix and the vector of mean returns .one can observe that there are at least two points of discontinuity of the second derivative . and its second derivative for the portfolio of the german dax 30 index , computed from historical data , august 2010april 2012 .source : finance.yahoo.com , title="fig:",scaledwidth=45.0% ] and its second derivative for the portfolio of the german dax 30 index , computed from historical data , august 2010april 2012 .source : finance.yahoo.com , title="fig:",scaledwidth=45.0% ] in this section we discuss further smoothness properties of the value function in the variable , for the case specified at the beginning of section [ sec : multiportf ] .we furthermore show that the function is locally a rational function which is concave on an open set .let us denote the set then and varies over all subsets of active indices , . here denotes the number of elements of the set .since is continuous , the set is open .first , let us consider the case .if we introduce the lagrange function then the optimal solution and the lagrange multiplier are given by : hence where can be expressed as follows : after straightforward calculations we conclude the inequality follows from the cauchy - schwartz inequality .notice that unless the vectors and are linearly dependent .now , if for some subset of active indices , then the quadratic minimization problem ( [ eq_alpha_def ] ) can be reduced to a lower dimensional simplex . hence the function is smooth on and therefore and are given by : for any where and and are constants calculated using the same formulas as in ( [ a1b ] ) and ( [ eq : abc ] ) , where data ( columns and rows ) from and corresponding to the active indices in the particular set are removed .[ pocastiachder ] the function defined in ( [ eq_alpha_def ] ) is a smooth function on the open set .it is given by ( [ a1 ] ) for and by ( [ a2 ] ) for where , respectively .there is a useful information that can be extracted from the shape of . 
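The paragraph that follows illustrates these features on market data. As a computational companion, the sketch below evaluates the value function of the parametric problem and records the active index sets along a grid of the parameter. The mean-variance objective -mu'theta + (phi/2) theta'Sigma theta is an assumption consistent with the description of portfolio return and variance given earlier (the exact formula is not reproduced in this excerpt), and the market data here are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Synthetic market data standing in for (mu, Sigma) of a small basket.
n_assets = 5
mu = rng.uniform(0.02, 0.12, size=n_assets)
a = rng.standard_normal((n_assets, n_assets))
sigma = a @ a.T / n_assets + 0.05 * np.eye(n_assets)   # symmetric positive definite

def alpha_and_minimizer(phi):
    """Solve min_{theta in simplex} -mu'theta + (phi/2) theta'Sigma theta."""
    obj = lambda t: -mu @ t + 0.5 * phi * t @ sigma @ t
    res = minimize(obj, np.full(n_assets, 1.0 / n_assets),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_assets,
                   constraints=[{"type": "eq", "fun": lambda t: np.sum(t) - 1.0}])
    return res.fun, res.x

phis = np.linspace(0.5, 40.0, 200)
alphas, supports = [], []
for phi in phis:
    val, theta = alpha_and_minimizer(phi)
    alphas.append(val)
    supports.append(tuple(np.nonzero(theta > 1e-6)[0].tolist()))  # active assets

alphas = np.array(alphas)
d_alpha = np.gradient(alphas, phis)        # numerical derivative of alpha

print("alpha is nondecreasing:", bool(np.all(np.diff(alphas) >= -1e-10)))
print("range of alpha'(phi):", float(d_alpha.min()), float(d_alpha.max()))
print("distinct active sets along the grid:", len(set(supports)))
```

Each distinct active set corresponds to one continuity interval of the derivative, which is the structure examined on the DAX data below.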
for illustration ,let us observe points of discontinuity of depicted in fig .[ fig : alphader_dax ] for the example of the german dax 30 index .the intervals between the points of discontinuities correspond to the sets . for the portfolio of the german dax 30 indexwe obtain the sets of active indices , corresponding to the continuity intervals as summarized in tab .[ tab : activeindices ] .high values of represent high risk - aversion of the investor .there is only one single asset present with a nonzero weight ( equal to one ) in the first interval .this asset is the most risky one and with highest expected return .indeed , for lowest values of , investor s risk aversion is low and therefore they do not hesitate to undergo high risk for the sake of gaining high return ..sets of active indices for the german dax 30 index .the assets are labeled by 1 - adidas , 15 - fresenius , 16 - fres medical , 21 - linde , 23 - merck , 27 - sap , 30 - volkswagen . [ cols="<,<",options="header " , ] in section [ ex : alpha_discontin ] we showed that the sets of active indices can be identified directly from the function .moreover , based on proposition [ th : comppsi ] , there is an upper bound on investor s coefficient of absolute risk aversion given by . when the utility function is given as in ( [ eq : ara2 ] ) , we have and so for all and .hence , only the interval ] , the investor knows the set } \{i\ |\ \hat{\theta}_i(\varphi ) > 0\}$ ] , i.e. the set of assets which will be entering the optimal portfolio with a nonzero weight . to identify the set on a particular interval ,it is enough to calculate the optimal in one single point from the given interval .we proposed and analyzed a method of the riccati transformation for solving a class of hamilton - jacobi - bellman equations arising from a problem of optimal portfolio construction .we derived a quasi - linear backward parabolic equation for the coefficient of relative risk aversion corresponding to the value function - a solution to the original hjb equation . using schauder s theory we showed existence and uniqueness of classical hlder smooth solutions .we also derived useful qualitative properties of the value function of the auxiliary parametric quadratic programming problem after the transformation .a fully implicit iterative numerical scheme based on finite volume approximation has been proposed and numerically tested .we also provided a practical example of the german dax 30 index portfolio optimization .we are thankful to professor milan hamala for stimulating discussions on parametric quadratic programming .this research was supported by the vega project 1/2429/12 ( s.k . ) and eu grant program fp7-people-2012-itn strike - novel methods in computational finance , no .304617 ( d. . ) .bielecki , t.r . ,pliska , s.r . and sheu , s.j .: risk sensitive portfolio management with cox ingersoll ross interest rates : the hjb equation ._ siam j. control and optimization _ , * 44*(5 ) ( 2005 ) , 18111843 .ishimura , n. , koleva , m. n. and vulkov , l. g. : numerical solution of a nonlinear evolution equation for the risk preference , _ lecture notes in computer science _6046 , springer - verlag new york , heidelberg , 2011 , 445 - 452 .ishimura , n. and evovi , d. : on traveling wave solutions to a hamilton - jacobi - bellman equation with inequality constraints , _ japan journal of industrial and applied mathematics _ * 30*(1 ) ( 2013 ) , 5167 .ishimura , n. and nakamura , m. : risk preference under stochastic environment . 
in : bmei2011 - proceedings 2011 international conference on business management and electronic information , vol . 1 , 2011 , article number 5917024 , 668670 .klatte , d. : on the lipschitz behavior of optimal solutions in parametric problems of quadratic optimization and linear complementarity , _ optimization : a journal of mathematical programming and operations research _ * 16*(6 ) ( 1985 ) , 819831 .koleva , m. and vulkov , l. : quasilinearization numerical scheme for fully nonlinear parabolic problems with applications in models of mathematical finance , _ mathematical and computer modelling _ * 57 * 2013 , 2564 - 2575 .ktik , p. and mikula , k. : finite volume schemes for solving nonlinear partial differential equations in financial mathematics . in : finite volumes for complex applications vi problems & perspectives , springer proceedings in mathematics , 2011 , vol .4(1 ) , 643651 .ladyzhenskaya , o.a . ,solonnikov , v.a . anduraltseva , n.n . :linear and quasi - linear equations of parabolic type .translations of mathematical monographs 23 , providence , ri : american mathematical society ( ams ) xi , 1968 , 648 pp .macov , z. and evovi , d. : weakly nonlinear analysis of the hamilton - jacobi - bellman equation arising from pension saving management , _ international journal of numerical analysis and modeling _ * 7*(4 ) ( 2010 ) , 619638 .melicherk , i. and ungvarsk , c. : _ pension reform in slovakia : perspectives of the fiscal debt and pension level . _ finance a vr - czech journal of economics and finance , * 54 * ( 910 ) ( 2004 ) , 391404 .peyrl , h. , herzog , f. and geering , h.p . :numerical solution of the hamilton - jacobi - bellman equation for stochastic optimal control problems . in : _wseas int .conf . on dynamical systems and control _ , venice , italy , november 2 - 4 , 2005 , 489497 .
in this paper we propose and analyze a method based on the riccati transformation for solving the evolutionary hamilton - jacobi - bellman equation arising from the stochastic dynamic optimal allocation problem . we show how the fully nonlinear hamilton - jacobi - bellman equation can be transformed into a quasi - linear parabolic equation whose diffusion function is obtained as the value function of a certain parametric convex optimization problem . although the diffusion function need not be sufficiently smooth , we are able to prove existence , uniqueness and derive useful bounds of classical hölder smooth solutions . we furthermore construct a fully implicit iterative numerical scheme based on finite volume approximation of the governing equation . a numerical solution is compared to a semi - explicit traveling wave solution by means of the convergence ratio of the method . we compute optimal strategies for a portfolio investment problem motivated by the german dax 30 index as an example of application of the method .
a basic ( or _ vanilla _ ) option is a financial product which provides the holder of the option with the right to buy or sell a specified quantity of an underlying asset at a fixed price on or before the expiration date of the option .there are many more complex options ( called _ exotic _ options ) in use today ; these are often of more practical interest and are harder to deal with . in many scenarios of practical interest andas we shall assume in this article , the value of the underlying asset can be described by a diffusion process ; in a complete market , the value of the option can be expressed as the expectation under the risk neutral probability of a functional of the paths of the underlying diffusion process . in generalthese expectations can not be calculated analytically .the monte carlo ( mc ) method is a standard approach used to approximate these quantities and it has been extensively used in option pricing since .subsequently , a wide variety of monte carlo approaches have been applied ( provides a thorough introduction ) .+ the importance of monte carlo for option pricing against other numerical approaches is its ability to deal with high - dimensional integrals .this is either in the time parameter of the option ( path - dependent options ) or in the dimension of the underlying ( basket of options ) , and more generally in both .however , it has been noted in the option pricing literature that standard monte carlo estimates can suffer from high variability .it has been seen in that , in many situations of practical interest , sequential monte carlo ( smc ) approaches can vastly improve over more standard monte carlo techniques . + sequential monte carlo methods are a general class of methods to sample from a sequence of distributions of increasing dimensions which have extensively been used in engineering , statistics , physics and other domains . provides an introduction and shows how essentially all methods for particle filtering can be interpreted as some special instances of a generic smc algorithm .smc methods make use of a sequence of proposal densities to sequentially approximate the targets via a collection of samples , termed particles . in most scenariosit is not possible to use the distribution of interest as a proposal .therefore , one must correct for the discrepancy between proposal and target via importance weights . in the majority of cases of practical interest , the variance of these importance weights increases with algorithmic time .this can , to some extent , be dealt with a resampling procedure consisted of sampling with replacement from the current weighted samples and resetting them to ( adaptive resampling ) .the variability of the weights is often measured by the effective sample size ( ess ) .several convergence results , as grows , have been proved .smc methods have also recently been proven to be stable in certain high - dimensional contexts .+ the main contributions of this paper are as follows .we develop the formal framework of _ weighting functions _ ; this technique has already been used , implicitly or explicitly , in . exploiting this framework ,we develop tailored methods for pricing of barrier options in high dimensional settings .it is also applied to the pricing of target accrual redemption note ( tarn ) which are another widely traded kind of path dependent options that are notoriously difficult to accurately value . 
on the theoretical side , we provide with a proof of the unbiasedness of the smcestimates when an adaptive resampling scheme is used .+ this paper is structured as follows . in section[ sec : options ] we provide background details on option pricing . in section [ sec : smc ] we give a basic summary of smc methods . in section [ sec : weighting ] we give the weighting functions framework and its application in the context of our option pricing problems . in section [ sec : numerics ] our methods are illustrated numerically .the appendix gives the proof of unbiasedness of our smc estimate in the adaptive resampling case .+ in the remainder of this article , we use the notation to denote the -dimensional euclidean space and . a normal distribution with mean and variance is denoted by and its density at is denoted by by . denotes the -dimensional identity matrix . denotes expectation .options come in two basic kinds - call and put .call options give the right to buy and put options give the right to sell . in this context , there are two main kinds of options - american and european .american options can be exercised at any time prior to expiration whereas european options can be exercised only at expiration .we focus on european options in this paper .european call / put options are known as vanilla options since they are relatively simple in structure .an exotic option is an option which has features making it more complex than commonly traded vanilla options .path - dependent options are an example , in which case the payoff depends on the value of the underlying at some ( or all ) time points prior to the expiration date .we consider two kinds of path - dependent options in this paper , namely barrier options and tarn s , which we shall describe shortly .consider a collection of underlying assets ; this is also known as a basket .we denote by the value of the assets in the basket . is typically modelled by a diffusion process .one such process is a black - scholes model with a drift and a volatility where is the drift function , is the volatility and denotes a brownian motion in with mean and covariance matrix .it is reasonable to assume that is normalized , that is , for all ; this assumption is valid because the scale factor can be included in the volatility term .there is an interest rate , which can depend on time as well .+ in general , it is hard to analytically work with except in simple scenarios .this has lead to several discretization methods being available in literature ( ) with varying levels of accuracy and complexity .one of the most widely used discretization methods is the euler - maruyama discretization and we work with it in this paper ; however other discretization schemes could also be used which could lead to a lower bias .consider discretized time points . by letting the logarithm and writing in place of , the euler - maruyama discretization of is for , where , denotes the origin in , and and to denote and respectively . ] .+ we do not focus on the level of discretization here .given a particular discretization , we apply our methods to it .the methods developed here could however be used in a multilevel setup ( as in ) ; we do not explore this further in this paper .+ we assume the drift in order to keep things simple . is a constant other than , then it is trivial to extend the methods we propose .if it is a function of the asset value , we could do things similar to what we do in the local volatility model considered later .] 
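A minimal simulator for the discretised dynamics just described could look as follows; the zero drift, the single constant volatility shared by all assets, and the log-Euler step x_{n+1} = x_n - (sigma^2/2) dt + sigma sqrt(dt) z are assumptions of this sketch rather than the paper's exact scheme:

```python
import numpy as np

def simulate_log_paths(n_paths, n_steps, dt, sigma, d, x0=0.0, rng=None):
    """Log-Euler paths of d independent assets; returns an array of shape
    (n_paths, n_steps + 1, d) holding log-prices."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty((n_paths, n_steps + 1, d))
    x[:, 0, :] = x0
    drift = -0.5 * sigma ** 2 * dt            # zero rate / zero drift assumed
    vol = sigma * np.sqrt(dt)
    for n in range(n_steps):
        z = rng.standard_normal((n_paths, d))
        x[:, n + 1, :] = x[:, n, :] + drift + vol * z
    return x

if __name__ == "__main__":
    paths = simulate_log_paths(n_paths=5, n_steps=250, dt=1.0 / 365.0,
                               sigma=0.3, d=2, rng=np.random.default_rng(4))
    print("terminal prices of the first path:", np.exp(paths[0, -1, :]))
```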
we work with two cases , one where the volatility is a constant , and another where it depends on the price of the underlying .we now describe two kinds of path - dependent options and we shall later demonstrate our methods on these .a barrier option is an exotic derivative , typically an option , on the underlying asset(s ) whose value(s ) on reaching pre - set barrier levels either springs the option into existence or extinguishes an already existing option .barrier options exist for baskets as well and the barrier conditions may in general be defined as a function of the underlying assets .for example , the function of the underlying assets could be the mean or could be the maximum of their values .there are two kinds of barrier options : * when the option springs into existence if a function of the underlying asset values breaches prespecified barriers , it is referred to as being ` knocked - in ' , and * when the option is extinguished if a function of the underlying asset values breaches prespecified barriers , it is referred to as being ` knocked - out ' .we consider knocked - out options . these are options that are ` alive ' as long as the values of the underlying satisfy barrier conditions at some ( or all ) time points prior to the expiration .if the barrier condition is breached , the option ` dies ' leading to a zero payoff .if the option is still ` alive ' at the expiration date , then it gives a payoff that is akin to either a call or a put option ( depending on the option type ) .+ we consider a basket of underlying assets and a barrier option based on these .barrier options are hard to price using standard mc in the sense that most ( if not all ) of the particles lead to a zero payoff and this contributes to a high variance of the final estimate . because of the sequential nature of the evolution of the asset values over time , this is a natural example of a setting where smc methods can be applied ; indeed talks about how smc methods can be used in this context .we extend their method and show that one can obtain significant gains by choosing the s in section 5 of their paper even by heuristic methods . a target accumulation redemption note ( tarn )provides a capped sum of payments over a period with the possibility of early termination ( knockout ) determined by target levels imposed on the accumulated amounts .a certain amount of payment is made on a series of cash flow dates ( referred to as fixing dates ) until the target level is breached .the payoff function of a tarn is path dependent in that the payment on a fixing date depends on the spot value of the asset as well as on the accumulated payment amount up to the fixing date .typically , commercial software solutions for pricing tarn s are based on the mc method .+ there are different versions of tarn products used in fx trading . for simplicity , we consider here a specific form of tarn s .consider a sequence of fixing dates and a function .the function is decomposed into its positive and negative parts as .gain and loss processes are defined as follows : these are the amounts of positive and negative cashflows respectively .there are two cashflow cutoffs and . 
stopping times and defined as these are the first times when the positive and negative cash flows cross their cut - offs respectively .the overall stopping time is defined as which is the first time either the positive or the negative cash flows cross a cut - off .the price of the tarn is the expected value of the overall cash flow , ] is estimated by if is of the form where is zero over most of the space and non - zero only on a small subset , most of the s would be zero .this subsequently leads to a high variance of the resulting estimate .+ more specifically , since is the sample space , we write where and if is much larger than , simulating independent realizations will lead to most of them lying in .our goal is to use smc to simulate more particles from . in order to do that, we consider a sequence of positive potential functions , , such that and write h(x_{1:n}).\ ] ] + the goal is to choose the s such that they guide the particles towards being in through the weighting and resampling steps of smc .+ when this is done on a path space , the resulting algorithm is sometimes known as tempering . considers this and shows how one can construct an artificial sequence of intermediate target densities on the path space which guide particles towards regions of interest .doing it on the path space however makes it computationally expensive and we do not work on the path space .we consider a discretely monitored barrier option monitored on a series of monitoring dates and a sequence of lower and upper barriers and respectively .we suppose that the barrier conditions are that all the underlying asset values lie inside their respective barriers at the monitoring times .this is a simplistic assumption and makes it easier for us to demonstrate our methods ; more complicated barrier conditions could also be used .+ for ease of notation , we remove the from the barriers and replace it simply by .let for and let denote the payoff function at time .the sequence of random variables then forms a markov chain .the price of the barrier option is } \footnote{we have assumed here that the interest rate is 0 .if the interest rate was , then there would be a factor of multiplied with . this is a constant and affects the variance of the estimate only upto a ( known ) scale factor.},\ ] ] where and denote the -th components of and respectively . in this case , the authors in introduce an smc algorithm to estimate the price .the proposal density is chosen to be density with respect to the underlying discretization . for each time interval they simulate forward till and then resample particles that breach the barrier condition at time from among the particles that are still inside the barrier .their algorithm is algorithm [ algo : smc_barrier ] .it is commented that they do not use an adaptive version of resampling and instead always resample. set initial weights and initial estimate of normalizing constant . sample .sample and set .compute unnormalized weights , update estimate of normalizing constant , compute normalized weights , resample to obtain equally - weighted particles and set .the estimated price is moreover , since essentially only the normalizing constant is being computed , there is no issue of path degeneracy .. this causes estimates based on the entire paths being unreliable . 
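A compact sketch of the resample-the-survivors estimator described above is given next; the commentary that follows discusses when this estimator still struggles. The arithmetic-average call payoff, the common barrier corridor for every asset and all numerical values are illustrative assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# --- illustrative problem data (not from the paper) --------------------------
d, sigma, dt = 3, 0.25, 1.0 / 365.0
k_days, n_monitor = 20, 10                 # monitor every k_days days, n_monitor times
log_l, log_u = np.log(0.8), np.log(1.25)   # lower / upper barriers in log-price
strike, n_particles = 1.0, 5000

def propagate(x, n_days):
    """Move log-prices forward n_days with the zero-drift log-Euler step."""
    for _ in range(n_days):
        x = x - 0.5 * sigma ** 2 * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# --- SMC with "resample the survivors" weights (indicator potentials) --------
x = np.zeros((n_particles, d))             # all assets start at price 1
z_hat = 1.0                                # running estimate of the survival probability
for _ in range(n_monitor):
    x = propagate(x, k_days)
    alive = np.all((x > log_l) & (x < log_u), axis=1)
    z_hat *= alive.mean()                  # product of per-date survival fractions
    if not alive.any():
        z_hat = 0.0
        break
    idx = rng.choice(np.flatnonzero(alive), size=n_particles, replace=True)
    x = x[idx]                             # resample uniformly among the survivors

payoff = np.maximum(np.exp(x).mean(axis=1) - strike, 0.0)
print("estimated knock-out option price:", z_hat * payoff.mean())
```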
]resampling paths outside the barriers at the monitoring times from paths which are inside the barriers improves the efficiency of the estimator with respect to the standard mc estimator .it is remarked that for constant volatility , in a black - scholes context , one can sample the price to ensure , in one step , that the process always survives ; see .+ if , however , very few particles satisfy the barrier condition at time , then this estimate will also have a high variance .this can happen for example if : the dimension is high ; the barrier condition is a narrow one , i.e , and are close to each other ; the volatility is high ; the time intervals are large .our goal is to introduce a sequence of positive weighting functions so that they resolve this issue . in order to do so ,we write \label{eq : h_n } \nonumber \\ & = & h_{0}({\mathbf s}_{0 } ) \times { { \ensuremath{\operatorname{e}}}\left [ \prod_{n=1}^{n } g_{n}(x_{n } ) \right ] } , \label{eq : price_potential_functions}\end{aligned}\ ] ] where the s , , are the sequence of positive weighting functions . in algorithm[ algo : smc_barrier ] , they were simply 1 .we attempt to choose them more carefully in order to approximate the optimal importance sampling densities .the algorithm is algorithm [ algo : smc_barrier_wt_fn ] .+ set initial weights and initial estimate of normalizing constant .sample , compute unnormalized weights update estimate of normalizing constant , compute normalized weights , resample to obtain equally - weighted particles and set , set .sample and set , compute unnormalized weights update estimate of normalizing constant , compute normalized weights , resample to obtain equally - weighted particles and set , set .the estimated price is path degeneracy is again not an issue because we are still essentially estimating the normalizing constant .this estimate is unbiased and a proof is provided in the appendix .+ since is the indicator of being inside the barriers at time , paths which are outside the barriers at the monitoring times are discarded with probability 1 in this case as well .however unlike in algorithm [ algo : smc_barrier ] , here we seek to give higher weights to particles which we think have a higher chance of being inside the barriers at the monitoring times .what is being sought while using the weighting functions is an approximation to the optimal importance sampling density , that is , the density of ( at time ) conditional on it surviving at times , . in the case of the barrier options being considered , this corresponds to .this is aachieved by giving higher weights to particles which have a higher chance of surviving at times .for example , particles which are far away from the barriers have a lower chance of survival than particles which are closer to the barriers .this is the intuitive idea behind our choice of weighting functions , and this is illustrated in section [ numerics : barrier ] .we consider a tarn based on a single underlying asset .since the main problem arises when the function is discontinuous , we consider a discontinuous to illustrate our methods .let + this function has two big jumps at and .the negative and positive cashflow cutoffs are and .we recall that we had earlier assumed the interest rate is .this is now justified .the main reason why standard mc can not be used efficiently in this scenario is as follows . using mc ,most of the particles stay inside for the first five fixing dates .this leads to the contribution of the particle in the mc estimate being . 
however , an occasional particle escapes within the first five fixing dates and contributes a value that is significantly different from because of the big jumps in .this causes the variance of the mc estimate to be high .even if the interest rate was positive , this difficulty would still remain .therefore for simplicity we assume the interest rate to be .+ in order to go back to the previous notation , let and define .then let be the new payoff function , where we have written in place of . by defining in this way , the problem has been transformed into the format that was being using before . in this case , a particle lies in if ( and only if ) it stays within for the first five fixing dates . if , for example , the volatility is low or the time intervals are small . it is noted that this is the opposite of the barrier option case ( in which we considered the time intervals and volatility to be large ) .we again consider a sequence of positive weighting functions and write \times \frac { g(s_{1:n } ) } { h_{t_{5 } } ( s_{t_{5 } } ) } , \ ] ] where ; we are basically doing the same thing as in the barrier option case .the goal is again to guide particles towards being in , and we show that this can be achieved through the usage of some simple weighting functions in algorithm [ algo : smc_barrier_wt_fn ] .in this section we demonstrate numerically the benefit of using weighting functions . in order to compare the standard deviations of algorithms [ algo : smc_barrier ] and [ algo : smc_barrier_wt_fn ] , we run them 100 times with particles in each run .we then look at the standard deviations of the 100 estimates and report the relative standard deviations ( the ratio of the standard deviations ) .we consider a barrier option whose asset values evolve independently of each other .this translates to .the option type is call .we assume that we can simulate forward one day at a time .a common monitoring strategy is to monitor the underlying assets after every days for a total of time periods . in that case , for and .we choose and .algorithm [ algo : smc_barrier ] resamples at the end of days and so resets the system of particles by choosing all resampled particles being inside the barriers .therefore the gain that we expect by using algorithm [ algo : smc_barrier_wt_fn ] can only be before the system is reset and this is why we choose . in what follows, we refer to algorithm [ algo : smc_barrier ] as ` mc ' and algorithm [ algo : smc_barrier_wt_fn ] as ` smc ' .+ the algorithms are run on different values of the dimension and the volatility . recalling , the targeted density at any time is proportional to , where denotes the density of .this implies that the targeted marginal density at time is proportional to , where denotes the marginal density of at time .we denote the targeted density at time by .since our basket consists of independent assets , we choose the marginal targeted densities ( at different time points ) to be the product of ( unnormalized ) densities .that is , is such that , where . 
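as a rough illustration of how such weighting functions can guide particles, the sketch below modifies the baseline pricer of the previous example so that, in addition to the barrier indicator, each particle receives a weight from a gaussian "potential" centred mid-way between the barriers. this is only one heuristic choice consistent with the idea of favouring particles that are more likely to survive; the potential, the bandwidth parameter and all other numbers are assumptions of ours, not a prescription from the paper. the incremental weight at each date is the indicator times the ratio of consecutive potentials, and the payoff is divided by the final potential, so the estimator still targets the original price.

```python
import numpy as np

def weighted_smc_barrier_price(m=10_000, n_dates=10, dt=5/252, s0=100.0,
                               sigma=0.2, lower=80.0, upper=120.0,
                               strike=100.0, bandwidth=15.0, rng=None):
    """SMC sketch with a heuristic Gaussian potential g_n centred between the barriers.
    Incremental weight at date n: indicator * g_n(s_n) / g_{n-1}(s_{n-1}); the payoff is
    divided by g_N at the end so that the same price is targeted."""
    rng = np.random.default_rng(rng)

    def g(s):
        centre = 0.5 * (lower + upper)
        return np.exp(-0.5 * ((s - centre) / bandwidth) ** 2)   # heuristic potential

    s = np.full(m, s0)
    g_prev = np.ones(m)                     # g_0 = 1 by convention
    z_hat = 1.0
    for _ in range(n_dates):
        z = rng.standard_normal(m)
        s = s * np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z)
        inside = (s > lower) & (s < upper)
        g_cur = g(s)
        w = inside * g_cur / g_prev         # incremental unnormalized weight
        if w.sum() == 0:
            return 0.0
        z_hat *= w.mean()
        idx = rng.choice(m, size=m, p=w / w.sum())
        s, g_cur = s[idx], g_cur[idx]
        g_prev = g_cur
    payoff = np.maximum(s - strike, 0.0) / g_prev   # divide the final potential back out
    return z_hat * payoff.mean()

if __name__ == "__main__":
    print(weighted_smc_barrier_price(rng=0))
```

particles near the middle of the corridor are replicated more often at the resampling steps, which is the guiding effect discussed above.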
in the constant volatility case ,the marginal density of at time is known and is denoted by ; this is simply a product of gaussians .therefore the weighting functions are ] by and we estimate the normalizng constant by .define : where + observe that define the likelihod ratio as and let \nonumber \\ & = & \frac { \overline{v}_{1 } \cdots \overline{v}_{\tau_{r } } \overline{v}_{\tau_{r+1 } } } { z_{n } } \sum_{{m}=1}^{{m } } \frac { v_{n } \left ( x_{1:n}^{({m})}\right ) } { { m}\overline{v}_{n } } } { \psi \left ( x_{1:n}^{({m } ) } \right ) \nonumber \\ & = & \frac { z_{n}^{{m } } } { z_{n } } \widehat{\psi}_{or}. \nonumber \end{aligned}\ ] ] the third equality is because .define iteratvely as and , where we recall that is the resampled index of the -th particle at the -th resampling step .let these are the -fields generated by the random variables associated with the particles just before and just after the -th resampling step respectively .let and define for , } } , \ ] ] where denotes expectation under the proposal density . then } } $ ]let h_{\tau_{s-1}}^{({m } ) } , \nonumber \\z_{2s}^{({m } ) } & = & \widetilde{f}_{\tau_{s } } \left ( \overline{x}_{1:\tau_{s}}^{({m } ) } \right ) h_{\tau_{s}}^{({m } ) } - \sum_{j=1}^{{m } } v_{\tau_{s}}^{(j ) } \widetilde{f}_{\tau_{s } } \left ( x_{1:\tau_{s}}^{(j ) } \right ) \widetilde{h}_{\tau_{s}}^{(j)}. \nonumber \end{aligned}\ ] ] it follows from the proposition that } & = & \psi_{n } \nonumber \\ \rightarrow { { \ensuremath{\operatorname{e}}}\left [ \frac { z_{n}^{{m } } } { z_{n } } \widehat{\psi}_{or } \right ] } & = & \psi_{n } \nonumber \\ \rightarrow { { \ensuremath{\operatorname{e}}}\left [ z_{n}^{{m } } \widehat{\psi}_{or } \right ] } & = & z_{n } \psi_{n}. \nonumber\end{aligned}\ ] ] taking yields the desired result .we observe that \nonumber \\ & & + \hspace{0.1 in } \sum_{s=1}^{r+1 } \left [ \widetilde{f}_{\tau_{s } } \left ( x_{1:\tau_{s}}^{({m } ) } \right ) - \widetilde{f}_{\tau_{s-1 } } \left ( \overline{x}_{1:\tau_{s-1}}^{({m } ) } \right ) \right ] h_{\tau_{s-1}}^{({m } ) } \nonumber \\ & = & - \widetilde{f}_{\tau_{0 } } \left ( \overline{x}_{\tau_{0}}^{({m } ) } \right ) h_{\tau_{0}}^{({m } ) } + \sum_{s=1}^{r+1 } \widetilde{f}_{\tau_{s } } \left ( x_{1:\tau_{s}}^{({m } ) } \right ) h_{\tau_{s-1}}^{({m } ) } - \sum_{s=1}^{r } \sum_{j=1}^{{m } } v_{\tau_{s}}^{(j ) } \widetilde{f}_{\tau_{s } } \left ( x_{1:\tau_{s}}^{(j ) } \right ) \widetilde{h}_{\tau_{s}}^{(j ) } \nonumber\end{aligned}\ ] ] thus , the second inequality is from [ eq:3.new ] , and the last equality is because and } } = \psi \left ( x_{1:\tau_{r+1 } } \right ) l_{n } \left ( x_{1:\tau_{r+1 } } \right ) \nonumber \\\rightarrow \widetilde{f}_{\tau_{r+1 } } \left ( x_{1:\tau_{r+1}}^{({m } ) } \right ) h_{\tau_{r}}^{({m } ) } & = & l_{n } \left ( x_{1:n}^{({m } ) } \right ) \psi \left ( x_{1:n}^{({m } ) } \right ) h^{({m})}_{\widetilde{\tau}(n ) } \hspace{0.2 in } \textrm{as } x_{1:\tau_{r+1}}^{({m } ) } = x_{1:n}^{({m } ) } \textrm { and } \widetilde{\tau}(n ) = \tau_{r}. 
\nonumber\end{aligned}\ ] ] } & = & { { \ensuremath{\operatorname{e}}}_{{m}}\left[\left .\widetilde{f}_{\tau_{1 } } \left ( \overline{x}_{1:\tau_{1}}^{({m } ) } \right ) h_{\tau_{1}}^{({m } ) } - \sum_{j=1}^{{m } } v_{\tau_{1}}^{(j ) } \widetilde{f}_{\tau_{1 } } \left ( x_{1:\tau_{1}}^{(j ) } \right ) \widetilde{h}_{\tau_{1}}^{(j ) } \ , \right| { \mathcal{f}}_{1 } \right ] } \nonumber \\ & = & { { \ensuremath{\operatorname{e}}}_{{m}}\left[\left .\widetilde{f}_{\tau_{1 } } \left ( \overline{x}_{1:\tau_{1}}^{({m } ) } \right ) h_{\tau_{1}}^{({m } ) } - \frac{1}{{m } } \sum_{j=1}^{{m } } \widetilde{f}_{\tau_{1 } } \left ( x_{\tau_{1}}^{(j ) } \right ) h_{\tau_{0}}^{(j ) } \ , \right| { \mathcal{f}}_{1 } \right ] } \hspace{0.3 in } \textrm{from } \eqref{eq:3.new } \nonumber \\ & = & { { \ensuremath{\operatorname{e } } } _ { { m}}\left[\left .\widetilde{f}_{\tau_{1 } } \left ( \overline{x}_{1:\tau_{1}}^{({m } ) } \right ) h_{\tau_{1}}^{({m } ) } - \frac{1}{{m } } \sum_{j=1}^{{m } } \widetilde{f}_{\tau_{1 } } \left ( x_{\tau_{1}}^{(j ) } \right ) \ , \right| { \mathcal{f}}_{1 } \right ] } \hspace{0.57 in } \textrm{as } h_{0}^{(j ) } = 1 \nonumber \\ & = & 0 . \nonumber\end{aligned}\ ] ] the last equality is because the conditional distribution of given is that of i.i.d .random vectors which take the value with probability .also , } & = & { \ensuremath{\operatorname{e}}}\left [ \left \ { \widetilde{f}_{\tau_{2 } } \left ( x_{1:\tau_{2}}^{({m } ) } \right ) - \widetilde{f}_{\tau_{1 } } \left ( \overline{x}_{1:\tau_{1}}^{({m } ) } \right ) \right \ } h_{\tau_{1}}^{({m } ) } \bigg| { { \mathcal{f}}_{2 } } \right ] \nonumber \\ & = & { \ensuremath{\operatorname{e}}}\left [ \left \ { \widetilde{f}_{\tau_{2 } } \left ( x_{1:\tau_{2}}^{({m } ) } \right ) - \widetilde{f}_{\tau_{1 } } \left ( \overline{x}_{1:\tau_{1}}^{({m } ) } \right ) \right \ } \bigg| { { \mathcal{f}}_{2 } } \right ] h_{\tau_{1}}^{({m } ) } \nonumber \\ & = & 0 . \nonumber\end{aligned}\ ] ] the last equality is because } & = & { \ensuremath{\operatorname{e}}}_{{m } } \left [ { \ensuremath{\operatorname{e}}}_{q } \left ( \psi ( x_{1:n } ) l_{n}(x_{1:n } ) \bigg| x_{1:\tau_{s } } = x_{1:\tau_{s}}^{({m } ) } \right ) \bigg| { \mathcal{f}}_{2(s-1 ) } \right ] \nonumber \\ & = & { \ensuremath{\operatorname{e}}}_{q } \left ( \psi \left ( x_{1:n } \right ) l_{n } \left ( x_{1:n } \right ) \bigg| x_{1:\tau_{s-1 } } = x_{1:\tau_{s-1}}^{({m } ) } \right ) \nonumber \\ & = & \widetilde{f}_{\tau_{s-1 } } \left ( \overline{x}_{1:\tau_{s-1}}^{({m } ) } \right ) .\nonumber \end{aligned}\ ] ] the last equality is by the tower property of conditional expectations .proceeding in this way , it is seen that is a martingale difference sequence .
pricing options is an important problem in financial engineering . in many scenarios of practical interest , the price of a financial option on an underlying asset reduces to an expectation w.r.t . a diffusion process . in general , these expectations can not be calculated analytically , and one way to approximate them is via the monte carlo method ; monte carlo methods have been used to price options since at least the 1970s . it has been seen in that sequential monte carlo ( smc ) methods are a natural tool to apply in this context and can vastly improve over standard monte carlo . in this article , in a similar spirit to , we show that one can achieve significant gains by using smc methods that construct a sequence of artificial target densities over time . in particular , we approximate the optimal importance sampling distribution in the smc algorithm by using a sequence of weighting functions . this is demonstrated on two examples , barrier options and target accrual redemption notes ( tarns ) . we also provide a proof of unbiasedness of our smc estimate . + * key words : * diffusions ; sequential monte carlo ; option pricing + * ams subject classification : * primary 91g60 ; secondary 65c05 . * some contributions to sequential monte carlo methods for option pricing * by deborshee sen , ajay jasra & yan zhou department of statistics & applied probability , national university of singapore , singapore , 117546 , sg . email : ` deborshee.sen.nus.edu ; staja.edu.sg ; stazhou.edu.sg ` + aj was supported by a singapore ministry of education academic research fund tier 1 grant ( r-155 - 000 - 156 - 112 ) and is affiliated with the rmi and cqf at nus . yz was supported by a singapore ministry of education academic research fund tier 2 grant ( r-155 - 000 - 143 - 112 ) .
the virmos ( visible and infrared multi - object spectrograph ) project consists of two spectrographs with enhanced survey capabilities to be installed on two unit telescopes of eso very large telescope ( chile ) : vimos ( 0.37 - 1 ) and nirmos ( 0.9 - 1.85 ) , each one having a large field of view ( 14x16 ) split into 4 quadrants and a high multiplexing factor ( up to approximately 800 spectra per exposure ) . to easily exploit such a potential a dedicated tool ,the virmos mask preparation software ( mps ) , has been implemented .it provides the astronomer with tools for the selection of the objects to be spectroscopically observed , and for automatic slit positioning .the output of mps is used to build the slit masks to be mounted in the instrument for the spectroscopic observations .at a limiting magnitude , the density of objects in the sky is such that more than 1000 galaxies are visible in a vimos quadrant . of course , not all these objects can be spectroscopically observed , as some requirements imposed by data quality have to be taken into account when placing slits : the minimum slit length will depend on the object size , since the slit must contain some area of `` pure sky '' to allow for a reliable sky subtraction ; spectra must not overlap either along the dispersion or the spatial direction ; as each first order spectrum is coupled with a second order spectrum which will contaminate the first order spectrum of the slit above , a good sky subtraction can be performed only if slits are aligned in columns ( same spatial coordinate ) and , within the same column , have the same length .all these factor lead to a theoretical maximum number of spectra per quadrant of approximately 200 .another requirement for mps is set by the very good vlt seeing which allows the use of slits widths of 0.3 - 0.4 arcsec .such narrow slits imply an extremely precise slit positioning , with maximum uncertainties of the order of 0.1 arcsec .thus the need for some ( 1 - 2 per quadrant ) manually selected reference objects ( possibly bright and point - like ) to be used for mask alignment .moreover , the user must have the possibility to manually choose some particularly interesting sources to be included ( compulsory objects ) and some others to be excluded ( forbidden objects ) from the spectroscopic sample .also a tool for manual definition of curved or tilted slits , to better follow the shape of particularly interesting objects , has to be provided . the mps works starting from a vimos image , to which a catalogue of objects is associated .the catalogue can be derived from the image itself or from some other astronomical data - set . 
in this second case ,a way to correlate the celestial coordinates of the objects in the catalogue with image coordinates is to be provided .some catalog handling capabilities , to allow for the selection of classes of sources among which to operate the choice of spectroscopic targets , some image display and catalogue overlay capabilities have to be provided by the package .as mps will be distributed to the astronomical community , it should be based on some already known package ( not yet another system ) .it was therefore decided to base the mps gui on the skycat tool distributed by eso ( see ` http://archive.eso.org:8080/skycat/ ` ) .this tool allows astronomers to couple vimos images and catalogues on which to operate selections of objects over which to place slits .a new panel for catalogue display , for object selection ( reference , compulsory , forbidden object ) has been implemented . for each type of catalogueobjects a different overlay symbol has been defined .a dedicated zoom panel allows the definition of curved / tilted slits .curved slits are defined by fitting a bezier curve to a set of points chosen by the user by clicking on the zoom display .the fitted curve is then automatically plotted .the slit width is chosen through a scale widget .tilted slits can be defined as curved slits and then straightened .if the astronomer wants to have slits of a width different from the one chosen for the automatic slit placements , he can define them as tilted slits and then align them to the other automatically placed slits .the core of mask preparation software is the slit positioning optimization code ( spoc ) . given a catalog of objects, spoc maximizes the number of observable objects in a single exposure and computes the corresponding slit positions .spoc places slits on the field of view taking into account : special objects ( reference , compulsory , forbidden ) , special slits ( curved , tilted or user s dimension defined ) , spectral first order superposition , spectral higher order superposition and sky region parameter ( the minimum amount of sky to be added to an object size when defining a slit ) .the issue to be solved is a combinatory computational problem . because of the constraint of slits aligned in the dispersion direction ,the problem can be slightly simplified : the quadrant area can be considered as a sum of strips which are not necessarily of the same width in the spatial direction .slits within the same strip have the same length and the alignment of orders is fully ensured .the problem is thus reduced to be mono - dimensional .it is easy to show that the number of combination is roughly given by : the slit length ( or strip width ) can vary from a minimum of 4 arcsec ( 20 pixels , i.e. twice the minimum sky region required for the sky subtraction ) to a maximum of 30 arcsec ( 150 pixels , limit imposed by the slit laser cutting machine ) .the average number of strips can be estimated as the spatial direction size of the fov divided by the most probable slit length : assuming the latter to be 50 pixels ( 10 arcsec ) , we would have strips .the number of combinations would then be : .computing these many combinations would correspond to years of cpu work !the problem is similar to the well known traveling salesman problem : in the standard approach , this is solved by randomly extracting a `` reasonable '' number of combinations and maximizing over this subsample . 
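the size of the search space quoted above can be reproduced with a few lines of arithmetic. the strip widths (20 to 150 pixels) and the typical slit length (50 pixels) are taken from the text, while the spatial extent of the quadrant used below is only an assumed, illustrative figure.

```python
import math

min_width, max_width = 20, 150           # allowed strip widths in pixels (4-30 arcsec)
typical_width = 50                       # most probable slit length, 10 arcsec
fov_spatial = 2000                       # assumed quadrant size in pixels (illustrative)

n_widths = max_width - min_width + 1     # width choices for a single strip
n_strips = fov_spatial // typical_width  # rough number of strips stacked in the quadrant
combos = n_widths ** n_strips            # brute-force combinations of strip widths

print(f"{n_strips} strips, about 10^{math.log10(combos):.0f} combinations")
```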
in our case ,due to computational time , the `` reasonable '' number of combinations can not be higher than , so small with respect to the total number of combination that the result is not guaranteed to be near the real maximum .our approach has been to consider only the most `` probable '' combinations , i.e. the ones that have the highest probability to maximize the solution .step 1 : for each spatial coordinate , we can vary the strip width from the given minimum to the given maximum , count how many objects we can place in the strip , and build the diagram of the number of slits in a strip divided by the strip width as a function of the strip width . for each spatial coordinate ,only the strip widths corresponding to peaks in this histogram are worth considering , as they correspond to local maxima of the number of slits per strip .the exact positioning of the peaks varies for each spatial coordinate , but the shape of the function remains the same .the position of the peaks can be easily found in no more than 6 - 7 trials ( using a partition exchange method ) .step 2 for each spatial coordinate we have k ( where k is the number of peaks ) possible strips , each with its own length and number of slits . although the number of combinations to be tested is decreased it is still too big in terms of computational time .step 3 a further reduction can be obtained if , instead of considering all the strips simultaneously , we consider sequentially m subsets of n consecutive strips , which together cover the whole fov . at this point , we should vary n ( and consequently m ) to find the best solution . in practice ,when n is higher than 8 - 10 , nothing changes in terms of number of observable objects . for n=10 , thus m=4 ( i.e. ) ,the number of combination is reduced to only , which means a few seconds of cpu work . unfortunately , as a consequence of the optimization process, small size objects are favored against the big ones . a second ,less optimized algorithm , has been implemented within spoc .this alternative algorithm does not optimize all strips simultaneously but builds the function strip by strip without considering object sizes , and takes only the maximum of the distribution .then it enlarges each strip width by taking into account object sizes . in this waythe number of placed slits decreases by a few percent but the object dimension bias disappears .a dedicated panel for spoc set up has been implemented within skycat .trough this panel , users can select the grism , the slit width , and the sky region parameter , the number of masks to be obtained for the given field , and the type of spoc maximization . the slit catalogue produced by spoccan be loaded as a normal skycat catalog with overlay symbols defined for all kinds of objects , and it is also possible to plot the slit and spectrum overlay for all spoc catalog objects .
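a much-simplified, one-dimensional sketch of the strip-based idea is given below. it only captures the "(number of slits)/(strip width)" criterion of the second, strip-by-strip variant described above; object sizes, sky regions, order overlaps and reference/compulsory/forbidden objects are all ignored, and the toy catalogue is randomly generated, so this is an illustration of the heuristic rather than a reimplementation of spoc.

```python
import numpy as np

def greedy_strips(y_obj, min_w=20, max_w=150, fov=2000):
    """Greedy 1-D sketch: starting from the bottom of the quadrant, repeatedly choose
    the strip width that maximizes (objects inside the strip)/(strip width), then move
    on to the next strip."""
    y_obj = np.sort(np.asarray(y_obj))
    strips, y0 = [], 0
    while y0 + min_w <= fov:
        widths = np.arange(min_w, min(max_w, fov - y0) + 1)
        counts = (np.searchsorted(y_obj, y0 + widths)
                  - np.searchsorted(y_obj, y0))       # objects per candidate strip
        i = int(np.argmax(counts / widths))           # peak of (slits / width)
        strips.append((y0, int(widths[i]), int(counts[i])))
        y0 += int(widths[i])
    return strips

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.uniform(0, 2000, size=300)                # toy catalogue of spatial positions
    placed = greedy_strips(y)
    print(len(placed), "strips,", sum(n for *_, n in placed), "objects covered")
```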
the main scientific characteristic of vimos ( visible multi object spectrograph , to be placed on eso very large telescope ) is its high multiplexing capability , allowing astronomers to obtain up to 800 spectra per exposure . to fully exploit such a potential , a dedicated tool , the vimos mask preparation software ( mps ) , has been designed . the mps provides the astronomer with tools for the selection of the objects to be spectroscopically observed , including interactive object selection , handling of curved slits , and an algorithm for automatic slit positioning that derives the most effective solution in terms of the number of objects selected per field . the slit positioning algorithm must take into account both the initial list of the user 's preferred objects , and the constraints imposed either by the instrument characteristics or by the requirement of having easily reducible data . the number of possible slit combinations in a field is in any case very high ( ) , and the task of slit maximization can not be solved through a purely combinatorial approach . we have introduced an innovative approach , based on the analysis of the function ( number of slits)/(slit length ) vs. ( slit length ) . the algorithm has been fully tested with good results , and it will be distributed to the astronomical community for the observation preparation phase .
the method of _ stable random projections_ , as an efficient tool for computing pairwise distances in massive high - dimensional data , provides a promising mechanism to tackle some of the challenges in modern machine learning . in this paper, we provide an easy - to - implement algorithm for _ stable random projections _ which is both statistically accurate and computationally efficient .we denote a data matrix by , i.e. , data points in dimensions .data sets in modern applications exhibit important characteristics which impose tremendous challenges in machine learning : * modern data sets with or even points are not uncommon in supervised learning , e.g. , in image / text classification , ranking algorithms for search engines , etc . in the unsupervised domain ( e.g. , web clustering , ads clickthroughs ,word / term associations ) , can be even much larger .* modern data sets are often of ultra high - dimensions ( ) , sometimes in the order of millions ( or even higher ) , e.g. , image , text , genome ( e.g. , snp ) , etc .for example , in image analysis , may be if using pixels as features , or million if using color histograms as features .* modern data sets are sometimes collected in a dynamic streaming fashion . *large - scale data are often heavy - tailed , e.g. , image and text data .some large - scale data are dense , such as image and genome data . even for data sets which are sparse , such as text, the absolute number of non - zeros may be still large .for example , if one queries machine learning " ( a not - too - common term ) in google.com , the total number of pagehits is about 3 million . in other words ,if one builds a term - doc matrix at web scale , although the matrix is sparse , most rows will contain large numbers ( e.g. , millions ) of non - zero entries .many learning algorithms require a similarity matrix computed from pairwise distances of the data matrix .examples include clustering , nearest neighbors , multidimensional scaling , and kernel svm ( support vector machines ) .the similarity matrix requires storage space and computing time .this study focuses on the distance ( ) .consider two vectors , ( e.g. , the leading two rows in ) , the distance between and is note that , strictly speaking , the distance should be defined as .because the power operation is the same for all pairs , it often makes no difference whether we use or just ; and hence we focus on . the radial basis kernel ( e.g. , for svm )is constructed from : when , this is the gaussian radial basis kernel . here can be viewed as a _ tuning _ parameter .for example , in their histogram - based image classification project using svm , reported that and achieved good performance .for heavy - tailed data , tuning has the similar effect as term - weighting the original data , often a critical step in a lot of applications . for popular kernel svm solvers including the _ sequential minimal optimization ( smo _ ) algorithm ,storing and computing kernels is the major bottleneck .three computational challenges were summarized in : * * _ computing kernels is expensive _ * * * _ computing full kernel matrix is wasteful _ * efficient svm solvers often do not need to evaluate all pairwise kernels . 
* * _ kernel matrix does not fit in memory _ * storing the kernel matrix at the memory cost is challenging when , and is not realistic for , because consumes at least gbs memory .a popular strategy in large - scale learning is to evaluate distances * * on the fly** .that is , instead of loading the similarity matrix in memory at the cost of , one can load the original data matrix at the cost of and recompute pairwise distances on - demand .this strategy is apparently problematic when is not too small . for high - dimensional data , either loading the data matrix in memory is unrealistic or computing distances on - demand becomes too expensive .those challenges are not unique to kernel svm ; they are general issues in distanced - based learning algorithms .the method of _ stable random projections _ provides a promising scheme by reducing the dimension to a small ( e.g. , ) , to facilitate compact data storage and efficient distance computations .the basic procedure of _ stable random projections _ is to multiply by a random matrix ( ) , which is generated by sampling each entry i.i.d . from a symmetric stable distribution .the resultant matrix is much smaller than and hence it may fit in memory .suppose a stable random variable , where is the scale parameter .then its characteristic function ( fourier transform of the density function ) is which does not have a closed - form inverse except for ( normal ) or ( cauchy ) .note that when , corresponds to `` '' ( not `` '' ) in a normal .corresponding to the leading two rows in , , , the leading two rows in are , .the entries of the difference , for to , are i.i.d .samples from a stable distribution with the scale parameter being the distance , due to properties of fourier transforms .for example , when , a weighted sum of i.i.d .standard normals is also normal with the scale parameter ( i.e. , variance ) being the sum of squares of all weights .once we obtain the stable samples , one can discard the original matrix and the remaining task is to estimate the scale parameter for each pair . + some applications of _ stable random projections _are summarized as follows : * * _ computing all pairwise distances _ * the cost of computing all pairwise distances of , , is significantly reduced to . * * _ estimating distances online _* for , it is challenging or unrealistic to materialize all pairwise distances in .thus , in applications such as online learning , databases , search engines , and online recommendation systems , it is often more efficient if we store in the memory and estimate any distance _ on the fly _ if needed . estimating distances online is the standard strategy in large - scale kernel learning . with _ stable random projections _ , this simple strategy becomes effective in high - dimensional data .* * _ learning with dynamic streaming data _ * in reality , the data matrix may be updated overtime .in fact , with streaming data arriving at high - rate , the `` data matrix '' may be never stored and hence all operations ( such as clustering and classification ) must be conducted on the fly .the method of _ stable random projections _ provides a scheme to compute and update distances on the fly in one - pass of the data ; see relevant papers ( e.g. , ) for more details on this important and fast - developing subject . 
* * _ estimating entropy _ * the entropy distance is a useful statistic .a workshop in nips03 ( www.menem.com/~ilya/pages/nips03 ) focused on entropy estimation .a recent practical algorithm is simply using the difference between the and distances , where , , and the distances were estimated by _stable random projections_. if one tunes the distances for many different ( e.g. , ) , then _stable random projections _ will be even more desirable as a cost - saving device .recall that the method of _ stable random projections _ boils down to a statistical estimation problem . that is , estimating the scale parameter from i.i.d .samples , to .we consider that a good estimator should have the following desirable properties : * ( asymptotically ) unbiased and small variance . * computationally efficient . *exponential decrease of error ( tail ) probabilities .the _ arithmetic mean _estimator is good for . when , the task is less straightforward because ( 1 ) no explicit density of exists unless or ; and ( 2 ) only when . initially reported in arxiv in 2006 , proposed the _ geometric mean _ estimator where is the gamma function , and the _ harmonic mean _estimator more recently , proposed the _ fractional power _estimator where all three estimators are unbiased or asymptotically ( as ) unbiased .figure [ fig_efficiency ] compares their asymptotic variances in terms of the cramr - rao efficiency , which is the ratio of the smallest possible asymptotic variance over the asymptotic variance of the estimator , as .the _ geometric mean _ estimator , exhibits tail bounds in exponential forms , i.e. , the errors decrease exponentially fast : the _ harmonic mean _ estimator , , works well for small , and has exponential tail bounds for . the _ fractional power _estimator , , has smaller asymptotic variance than both the _ geometric mean _ and _ harmonic mean _ estimators .however , it does not have exponential tail bounds , due to the restriction in its definition . as shown in , it only has finite moments slightly higher than the order , when approaches 2 ( because ) , meaning that large errors may have a good chance to occur .we will demonstrate this by simulations . in the definitions of , and , all three estimators require evaluating fractional powers , e.g. , .this operation is relatively expensive , especially if we need to conduct this tens of billions of times ( e.g. , ) .for example , reported that , although the radial basis kernel ( [ eqn_rbs_kernel ] ) with achieved good performance , it was not preferred because evaluating the square root was too expensive .we propose the _ optimal quantile _ estimator , using the smallest : where is chosen to minimize the asymptotic variance .this estimator is computationally attractive because * selecting * should be much less expensive than evaluating fractional powers .if we are interested in instead , then we do not even need to evaluate any fractional powers .as mentioned , in many cases using either or makes no difference and is often preferred because it avoids taking power. the radial basis kernel ( [ eqn_rbs_kernel ] ) requires .thus this study focuses on . on the other hand ,if we can estimate directly , for example , using ( [ eqn_d_oq ] ) without the power , we might as well just use if permitted . 
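to make the mechanism concrete, the short sketch below generates the projections with the chambers-mallows-stuck sampler for symmetric stable variables and then, for alpha = 1 (the cauchy / l1 case), recovers the distance from the sample median of the absolute projected values; for a standard cauchy variable the median of the absolute value equals the scale, so no extra constant is needed there. the data vectors, the dimensions, and the convention that the scale parameter equals the distance itself are illustrative assumptions in this sketch.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables, unit scale."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(u)                                  # standard Cauchy
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u * (1 - alpha)) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
D, k, alpha = 10_000, 200, 1.0            # original dimension, projected dimension, l1 case
u1 = rng.exponential(1.0, D)              # two made-up, heavy-tailed data vectors
u2 = rng.exponential(1.0, D)

R = sym_stable(alpha, (D, k), rng)        # projection matrix with i.i.d. stable entries
x = (u1 - u2) @ R                         # k i.i.d. stable samples, scale = the l1 distance

d_true = np.sum(np.abs(u1 - u2) ** alpha) # distance to be recovered
d_est = np.median(np.abs(x))              # sample-median (quantile) estimator for alpha = 1
print(d_true, d_est)
```

for general alpha the projection step is identical; only the scale estimator applied to the projected samples changes.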
in casewe do not need to evaluate any fractional power , our estimator will be even more computationally efficient .in addition to the computational advantages , this estimator also has good theoretical properties , in terms of both the variances and tail probabilities : 1 .figure [ fig_efficiency ] illustrates that , compared with the _ geometric mean _ estimator , its asymptotic variance is about the same when , and is considerably smaller when . compared with the _ fractional power _estimator , it has smaller asymptotic variance when .in fact , as will be shown by simulations , when the sample size is not too large , its mean square errors are considerably smaller than the _ fractional power _estimator when .the _ optimal quantile _estimator exhibits tail bounds in exponential forms .this theoretical contribution is practically important , for selecting the sample size . in learning theory ,the generalization bounds are often loose . in our case , however , the bounds are tight because the distribution is specified. the next section will be devoted to analyzing the _ optimal quantile _ estimator .recall the goal is to estimate from , where , i.i.d .since the distribution belongs to the scale family , one can estimate the scale parameter from quantiles . due to symmetry , it is natural to consider the absolute values : which is best understood by the fact that if , then , or more obviously , if , then . by properties of order statistics , any -quantile will provide an asymptotically unbiased estimator .lemma [ lem_var_q ] provides the asymptotic variance of .[ lem_var_q ] denote and the probability density function and the cumulative density function of , respectively .the asymptotic variance of defined in ( [ eqn_quantile ] ) is where .* proof : * see appendix [ proof_lem_var_q ] . .we choose so that the asymptotic variance ( [ eqn_var_q ] ) is minimized , i.e. , the convexity of is important .graphically , is a convex function of , i.e. , a unique minimum exists . an algebraic proof , however , is difficult .nevertheless , we can obtain analytical solutions when and .[ lem_convexity ] when or , the function defined in ( [ eqn_g ] ) is a convex function of .when , the optimal .when , is the solution to .* proof : * see appendix [ proof_lem_convexity ] . .it is also easy to show that when , .we denote the _ optimal quantile _ estimator by , which is same as . for general , we resort to numerical solutions , as presented in figure [ fig_opt_quantile ] .although ( i.e. , ) is asymptotically ( as ) unbiased , it is seriously biased for small .thus , it is practically important to remove the bias . the unbiased version of the _ optimal quantile _ estimator is where is the expectation of at . for , , or , we can evaluate the expectations ( i.e. , integrals ) analytically or by numerical integrations . for general , as the probability density is not available , the task is difficult and prone to numerical instability . on the other hand ,since the monte - carlo simulation is a popular alternative for evaluating difficult integrals , a practical solution is to simulate the expectations , as presented in figure [ fig_bias ] .figure [ fig_bias ] illustrates that , meaning that this correction also reduces variance while removing bias ( because ) .for example , when and , , which is significant , because implies a difference in terms of variance , and even more considerable in terms of the mean square errors mse = variance + bias . 
can be tabulated for small , and absorbed into other coefficients , i.e. , this does not increase the computational cost at run time .we fix as reported in figure [ fig_bias ] . the simulations in section [ sec_simulations ]directly used those fixed values .figure [ fig_compu_ratio ] compares the computational costs of the _ geometric mean _ , the _ fractional power _ , and the _ optimal quantile _ estimators .the _ harmonic mean _estimator was not included as it costs very similarly to the _ fractional power _ estimator .we used the build - in function `` pow''in gcc for evaluating the fractional powers .we implemented a `` quick select '' algorithm , which is similar to quick sort and requires on average linear time . for simplicity ,our implementation used recursions and the middle element as pivot . also , to ensure fairness , for all estimators , coefficients which are functions of and/or were pre - computed .normalized by the computing time of , we observe that relative computational efficiency does not strongly depend on .we do observe that the ratio of computing time of over that of increases consistently with increasing .this is because in the definition of ( and hence also ) , it is required to evaluate the fractional power once , which contributes to the total computing time more significantly at smaller .figure [ fig_compu_ratio ] illustrates that , ( a ) the _ geometric mean _ estimator and the _ fractional power _estimator are similar in terms of computational efficiency ; ( b ) the _ optimal quantile _ estimator is nearly one order of magnitude more computationally efficient than the _ geometric mean _ and _ fractional power _ estimators .because we implemented a `` nave '' version of `` quick select '' using recursions and simple pivoting , the actual improvement may be more significant .also , if applications require only , then no fractional power operations are needed for and the improvement will be even more considerable .error ( tail ) bounds are essential for determining .the variance alone is not sufficient for that purpose .if an estimator of , say , is normally distributed , , the variance suffices for choosing because its error ( tail ) probability is determined by . in general, a reasonable estimator will be asymptotically normal , for small enough and large enough . for a finite and a fixed , however , the normal approximation may be ( very ) poor .this is especially true for the _ fractional power _estimator , .thus , for a good motivation , lemma [ lem_bounds ] provides the error ( tail ) probability bounds of for any , not just the optimal quantile .[ lem_bounds ] denote and its probability density function by and cumulative function by .given , i.i.d ., to . using in ( [ eqn_quantile ] ) , then as * proof : * see appendix [ proof_lem_bounds ] . the limit in ( [ eqn_g_rl_limit ] ) as is precisely twice the asymptotic variance factor of in ( [ eqn_var_q ] ) , consistent with the normality approximation mentioned previously .this explains why we express the constants as .( [ eqn_g_rl_limit ] ) also indicates that the tail bounds achieve the `` optimal rate '' for this estimator , in the language of large deviation theory . 
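the sketch below spells out the optimal quantile estimator for alpha = 1, for which the optimal quantile is the median. the sample quantile is obtained by selection (np.partition here, standing in for quick select) rather than by sorting, and the finite-sample bias factor is obtained by simulation, as suggested above. the quantile convention and the simulation sizes are our own choices for illustration, and the chambers-mallows-stuck sampler is repeated so that the block stays self-contained.

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables, unit scale."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(u)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u * (1 - alpha)) / w) ** ((1 - alpha) / alpha))

def abs_quantile_by_selection(x, q):
    """q-th sample quantile of |x| via selection (no full sort, no fractional powers)."""
    a = np.abs(x)
    i = int(np.ceil(q * len(a))) - 1
    return np.partition(a, i)[i]

def optimal_quantile_estimate(x, alpha, q, xi_q, bias):
    """Estimate the scale (the alpha-th power distance) from k projected samples."""
    return (abs_quantile_by_selection(x, q) / xi_q) ** alpha / bias

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    alpha, q, k, d = 1.0, 0.5, 50, 3.0    # for alpha = 1 the optimal quantile is the median

    # population quantile of the absolute standard stable and the finite-k bias factor,
    # both obtained by plain simulation
    xi_q = np.quantile(np.abs(sym_stable(alpha, 1_000_000, rng)), q)
    sims = sym_stable(alpha, (20_000, k), rng)
    bias = np.mean([(abs_quantile_by_selection(r, q) / xi_q) ** alpha for r in sims])

    x = d ** (1 / alpha) * sym_stable(alpha, k, rng)   # k samples with scale parameter d
    print(d, optimal_quantile_estimate(x, alpha, q, xi_q, bias))
```

for other values of alpha, the same code applies once the optimal q is read off the curve discussed above and xi_q and the bias factor are re-simulated (or tabulated) for that q.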
by the bonferroni bound ,it is easy to determine the sample size [ lem_jl_quantile ] using with , any pairwise distance among points can be approximated within a factor with probability .it suffices to let , where , are defined in lemma [ lem_bounds ] .the bonferroni bound can be unnecessarily conservative .it is often reasonable to replace by , meaning that except for a fraction of pairs , any distance can be approximated within a factor with probability .figure [ fig_bounds ] plots the error bound constants for , for both the recommended _ optimal quantile _estimator and the baseline _ sample median _estimator .although we choose based on the asymptotic variance , it turns out also exhibits ( much ) better tail behaviors ( i.e. , smaller constants ) than , at least in the range of . consider ( recall we suggest replacing by ) , with , , and .because around , we obtain , which is still a relatively large number ( although the original dimension might be ) .if we choose , then approximately .it is possible might be still conservative , for three reasons : ( a ) the tail bounds , although sharp , " are still upper bounds ; ( b ) using is conservative because is usually much smaller than ; ( c ) this type of tail bounds is based on relative error , which may be stringent for small ( ) distances .in fact , some earlier studies on _ normal random projections _ ( i.e. , ) empirically demonstrated that appeared sufficient .we resort to simulations for comparing the finite sample variances of various estimators and assessing the more precise error ( tail ) probabilities .one advantage of _ stable random projections _ is that we know the ( manually generated ) distributions and the only source of errors is from the random number generations .thus , we can simply rely on simulations to evaluate the estimators without using real data .in fact , after projections , the projected data follow exactly the stable distribution , regardless of the original real data distribution . without loss of generality ,we simulate samples from and estimate the scale parameter ( i.e. , 1 ) from the samples . repeating the procedure times , we can reliably evaluate the mean square errors ( mse ) and tail probabilities . as illustrated in figure [ fig_simu_mse ] , in terms of the mse , the _ optimal quantile _estimator outperforms both the _ geometric mean _ and _ fractional power _estimators when and .the _ fractional power _ estimator does not appear to be very suitable for , especially for close to 2 , even when the sample size is not too small ( e.g. , ) . for , however , the _ fractional power _estimator has good performance in terms of mse , even for small .figure [ fig_simu_tail ] presents the simulated right tail probabilities , , illustrating that when , the _ fractional power _ estimator can exhibit very bad tail behaviors . for ,the _ fractional power _ estimator demonstrates good performance at least for the probability range in the simulations .thus , figure [ fig_simu_tail ] demonstrates that the _ optimal quantile _ estimator consistently outperforms the _ fractional power _ and the _ geometric mean _estimators when .there have been many studies of _ normal random projections _ in machine learning , for dimension reduction in the norm , e.g. , , highlighted by the johnson - lindenstrauss ( jl ) lemma , which says suffices when using normal ( or normal - like , e.g. 
, ) projection methods .the method of _ stable random projections _ is applicable for computing the distances ( ) , not just for .* lemma 1 , lemma 2 , theorem 3 ) suggested the _ median _( i.e. , quantile ) estimator for and argued that the sample complexity bound should be ( in their study ) .their bound was not provided in an explicit form and required an `` is small enough '' argument . for , (* lemma 4 ) only provided a conceptual algorithm , which `` is not uniform . '' in this study , we prove the bounds for any -quantile and any ( not just ) , in explicit exponential forms , with no unknown constants and no restriction that `` is small enough . ''the quantile estimator for stable distributions was proposed in statistics quite some time ago , e.g. , . mainly focused on and recommended using quantiles ( mainly for the sake of smaller bias ) . focused on and recommended quantiles .this study considers all and recommends based on the minimum asymptotic variance . because the bias can be easily removed ( at least in the practical sense ) , it appears not necessary to use other quantiles only for the sake of smaller bias .tail bounds , which are useful for choosing and based on confidence intervals , were not available in .finally , one might ask if there might be better estimators . for , proposed using a linear combination of quantiles ( with carefully chosen coefficients ) to obtain an asymptotically optimal estimator for the cauchy scale parameter . while it is possible to extend their result to general ( requiring some non - trivial work ) , whether or not it will be practically better than the _ optimal quantile _ estimator is unclear because the extreme quantiles severely affect the tail probabilities and finite - sample variances and hence some kind of truncation ( i.e. , discarding some samples at extreme quantiles ) is necessary . also , exponential tail bounds of the linear combination of quantiles may not exist or may not be feasible to derive .in addition , the _ optimal quantile _ estimator is computationally more efficient .many machine learning algorithms operate on the training data only through pairwise distances .computing , storing , updating and retrieving the `` matrix '' of pairwise distances is challenging in applications involving massive , high - dimensional , and possibly streaming , data .for example , the pairwise distance matrix can not fit in memory when the number of observations exceeds ( or even ) .the method of _ stable random projections _ provides an efficient mechanism for computing pairwise distances using low memory , by transforming the original high - dimensional data into _ sketches _ , i.e. , a small number of samples from -stable distributions , which are much easier to store and retrieve .this method provides a uniform scheme for computing the pairwise distances for all . choosing an appropriate is often critical to the performance of learning algorithms . in principle, we can tune algorithms for many distances ; and _ stable random projections _ can provide an efficient tool . to recover the original distances ,we face an estimation task . compared with previous estimators based on the _ geometric mean _ , _ the harmonic mean _ , or the _ fractional power _, the proposed _ optimal quantile _estimator exhibits two advantages .firstly , the _ optimal quantile _ estimator is nearly one order of magnitude more efficient than other estimators ( e.g. 
, reducing the training time from one week to one day ) .secondly , the _ optimal quantile _ estimator is considerably more accurate when , in terms of both the variances and error ( tail ) probabilities .note that corresponds to a convex norm ( satisfying the triangle inequality ) , which might be another motivation for using distances with .one theoretical contribution is the explicit tail bounds for general quantile estimators and consequently the sample complexity bound .those bounds may guide practitioners in choosing , the number of projections .the ( practically useful ) bounds are expressed in terms of the probability functions and hence they might be not as convenient for further theoretical analysis . also , we should mention that the bounds do not recover the optimal bound of the _ arithmetic mean _ estimator when , because the _ arithmetic mean _ estimator is statistically optimal at but the _ optimal quantile _ estimator is not . while we believe that applying _stable random projections _ in machine learning has become straightforward , there are interesting theoretical issues for future research .for example , how theoretical properties of learning algorithms may be affected if the approximated ( instead of exact ) distances are used ?denote and the probability density function and the cumulative density function of , respectively .similarly we use and for . due to symmetry , the following relations hold let and .then , following known statistical results , e.g. , ( * ? ? ?* theorem 9.2 ) , the asymptotic variance of should be by delta method , "i.e. , , first , consider . in this case , it suffices to study . because for , it is easy to see that , and . thus , , i.e. , is convex and so is .since , we know .+ next we consider , using a fact that as , converges to , where stands for an exponential distribution with mean 1 .denote and .the sample quantile estimator becomes in this case , it is straightforward to show that is a convex function of and the minimum is attained by solving , i.e. , . given i.i.d .samples , , to .let , to .denote by the cumulative density of , and by the empirical cumulative density of , to .it is the basic fact about order statistics that follows a binomial , i.e. , . for simplicity ,we replace by , by , and by , in this proof .consider the general quantile estimator defined in ( [ eqn_quantile ] ) .for , ( again , denote ) , where and .thus ^k%\\\notag & % = & \exp\left ( % -k\left(-(1-q)\log\left(1-f\left ( \left((1+\epsilon)\right)^{1/\alpha}w;1\right)\right ) - q \log \left(f\left ( \left((1+\epsilon)\right)^{1/\alpha}w;1\right)\right ) + ( 1-q)\log ( 1-q ) + q\log(q)\right)\right)\\\notag = \exp\left(-k\frac{\epsilon^2}{g_{r , q}}\right).\end{aligned}\ ] ] where for , where and .thus , ^k%\\\notag & = \exp\left(-k\frac{\epsilon^2}{g_{l , q}}\right),\end{aligned}\ ] ] where chernoff , h. , gastwirth , j.l . , johns , m.v .: asymptotic distribution of linear combinations of functions of order statistics with applications to estimation .the annals of mathematical statistics * 38*(1 ) ( 1967 ) 5272
the method of _ stable random projections _ is a tool for efficiently computing the distances using low memory , where is a tuning parameter . the method boils down to a statistical estimation task and various estimators have been proposed , based on the _ geometric mean _ , the _ harmonic mean _ , and the _ fractional power _ etc . this study proposes the * _ optimal quantile _ * estimator , whose main operation is * _ selecting _ * , which is considerably less expensive than taking fractional power , the main operation in previous estimators . our experiments report that the _ optimal quantile _ estimator is nearly one order of magnitude more computationally efficient than previous estimators . for large - scale learning tasks in which storing and computing pairwise distances is a serious bottleneck , this estimator should be desirable . in addition to its computational advantages , the _ optimal quantile _ estimator exhibits nice theoretical properties . it is more accurate than previous estimators when . we derive its theoretical error bounds and establish the explicit ( i.e. , no hidden constants ) sample complexity bound .
the discovery of turbo codes and the re - discovery of low - density parity - check ( ldpc ) codes have , over the night , closed the theory - practice gap of the shannon capacity limit on additive white gaussian noise ( awgn ) channels .they have also revolutionized the coding research with a new paradigm of _ iterative probabilistic inference _ , commonly dubbed the _ soft - iterative _ paradigm . since their success with turbo codes and ldpc codes ,the soft - iterative paradigm has become a vital tool in widespread applications in communication and signal processing . a complex communication system comprised of layers of functional blocksthat were previously individually or sequentially tackled , can now use a `` soft - iterative '' treatment , close in spirit to that of the turbo or ldpc decoder , to achieve quality performance with manageable complexity .celebrated applications include , for example , iterative demodulation and decoding , turbo equalization ( also known as iterative decoding and equalization ) , and multi - user detection , and iterative sensing and decision fusion for sensor networks .the significance and wide popularity of the soft - iterative algorithm has caused a considerable amount of study on its behavior , performance and convergence .present models and methodologies for analyzing estimation and decoding methods may be roughly grouped into the following categories : 1 ) . the most straight - forward way to evaluate the performance of an estimation / detection / decoding method ,be it soft - iterative or otherwise , is through monte carlo simulations .the result is very accurate , but the simulation is usually lengthy , tedious , and not scaling well .additionally , simulations do not shed much insight into why the performance is so and how the performance might be improved .classic analytical methods come from the perspectives of maximum likelihood ( ml ) , maximum _ a posteriori _ ( map ) probability , or minimum square error ( mse ) .they inevitably assume that the subject method is optimal and always deciding on the candidate that has the largest probability , maximum likelihood ratio , or the minimal hamming / euclidean distance to what s been observed .they produce useful performance bounds , but may present a non - negligible gap to the true performance of the practical , iterative estimator at hand .3 ) . powerful iterative analytical methods , notably the _ density evolution _ ( de ) and the _ extrinsic information transfer _ ( exit ) charts , were developed in the last decade .these methods faithfully capture the iterative trajectory of many estimators / decoders , and have unveiled several fundamental and intriguing properties of the system ( e.g. the `` convergence property '' and the `` area property '' ) . however , several underlying assumptions thereof , including the ergodicity assumption , the neighborhood independence assumption and the gausianity assumption , make them suitable mostly for evaluating the _ asymptotic _ behavior ( i.e. infinite block size ) .many real systems have limited lengths of a few hundred to a few thousand ( bits ) , and the accuracy and usefulness of these methods can become limited in such cases . 
4 ) .to tackle the hard problem of iterative analysis for short - length signal sequence , researchers have also developed several interesting concepts and ideas , including _ pseudo codewords _ , _ stopping sets _ and _ trapping sets _they boast some of the most accurate performance predictions at short lengths .the drawback , however , is that efficient and systematic ways to identify and quantify these metrics are not readily existent , and hence , one may have to rely on computer - aided search of some type , causing daunting complexity .the majority of the existing methods , as summarized above , largely stem from a statistical and/or information theoretical root .they have significantly advanced the field , but are also confronted with challenges and limitations , as they try to use statistical metrics and tools that are based on ensemble averages ( such as mean , variance , entropy , mutual information ) to predict and control the iterative process of a large - dimension , highly - dynamical , and apparently - random signal sequence . the notion that iterative probabilistic inference algorithms can be viewed as complex dynamical systems - presents an interesting departure from the existing ensemble - average based methods , and brings up new ways of evaluating the soft - iterative algorithm on individual blocks .generally speaking , the performance of a system should be assessed in the _ average _ sense , such as the bit error rate ( ber ) averaged over hundreds of thousands of blocks . at the same time , however , it also makes sense to evaluate the _ per - block _ performance , namely , the number ( or the percentage ) of errors in individual blocks . per - block error rate reflects error bursts and/or the worst - case situation , and can be of interest in several applications .for example , in multimedia transmission , a modest number of bit blip errors in an image may cause only a minor quality degradation that is hard to perceive by human eyes , whereas excessive errors will cause the image to be badly distorted and unusable . in magnetic recording systems , a reed solomon ( rs ) outer code is generally employed after the channel - coded partial - response ( pr ) channel , to clear up the residual errors left by the equalizer / decoder .it does not really matter that every block has errors ( after the equalizer / decoder ) ; as long as the per - block error rate is within the error correction capability of the rs wrap , zero - error is achievable for the entire system .hence , the specific issue we investigate here is how the per - block performance improves with the number of iterations .common wisdom has it that more iterations can not hurt , i.e. , a larger number of iterations may not necessarily lead to ( worthy or noticeable ) performance gains , but it can not degrade the performance either .this apparent truth , as verified by the numerous studies reported on the bit error rate ( ber ) simulations , density evolution ( de ) analysis , and extrinsic information transfer ( exit ) charts , holds in the context of _ average _ performance . 
in terms of the per - block performance , however , our studies reveal that it is not only possible , but also quite likely , for an individual block to encounter an `` fluctuating '' decoding state , such that the number of errors in that block keeps bouncing up and down ( rather than monotonically decreasing ) with the iterations .what this phenomenon , thereafter referred to as the _ z - crease _ phenomenon , implies in practice is that a larger number of iterations are not always beneficial , and that the right timing may play a more important role .if the decoder stops at an unlucky iteration , it may actually generate far more errors than if it stopped several iterations earlier .it is worth noting that z - crease is not special ; it is actually a _universal _ phenomenon that vastly exists in iterative decoding and estimation systems .we have examined a variety of different systems , including low - density parity - check ( ldpc ) codes with message - passing decoding , turbo codes with turbo decoding , product accumulate ( pa ) codes with iterative pa decoding , and convolutionally - coded inter - symbol interference ( isi ) channel with turbo equalization . in all of these systems ,we have observed the z - crease phenomenon .to study the per - block system behavior and reveal the z - crease phenomenon , our approach is to treat the iterative estimation / decoding system a high - dimensional nonlinear dynamical system parameterized by a set of parameters , to further transform it to a one - dimensional state space with a single parameter , and examine the time evolution of the states at specific signal - to - noise ratios ( snr ) . in this paper , we take a popular turbo code as an example , and report here the observation of a wide range of phenomena characteristic to nonlinear dynamical systems , including several new phenomena not reported previously .a communication or signal processing system generally consists of many processing blocks inter - connected in parallel , in serial , or in hybrid , each fulfilling a specific task .since the solution space of an `` integrated '' process is the kronecker product of all the constituent solution spaces , to launch an overall optimal solution usually induces prohibitive complexity .a more feasible solution is to apply iterative algorithms , which allow constituent sub - units to perform local process and to iterative exchange and refine processed `` messages '' , thus achieving a solution considerably better than that from sequential processing with a manageable complexity .iterative algorithms are by nature probabilistic inference based , where the `` messages '' to be processed and communicated represent the reliability or confidence level of a digital decision , commonly formulated as _ log - likelihood ratios ( llr ) _ ; but they can also be modeled as ( nonlinear ) dynamical systems . 
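since the messages exchanged by the constituent units are llrs , a minimal sketch of how channel llrs are formed and turned into hard decisions may help fix notation . it assumes bpsk signaling ( bit 0 mapped to +1 , bit 1 to -1 ) over an awgn channel with known noise variance ; these modeling choices are assumptions made for illustration only .

```python
import numpy as np

def channel_llrs(y, noise_var):
    """log-likelihood ratios log p(b=0|y)/p(b=1|y) for bpsk (0 -> +1, 1 -> -1)
    over an awgn channel with known noise variance and equiprobable bits."""
    return 2.0 * np.asarray(y, float) / noise_var

def hard_decisions(llrs):
    """the sign of an llr gives the bit decision; its magnitude gives the confidence."""
    llrs = np.asarray(llrs, float)
    bits = (llrs < 0).astype(int)       # negative llr -> decide bit 1
    reliability = np.abs(llrs)
    return bits, reliability

y = np.array([0.9, -0.2, 1.4, -1.1])    # noisy received samples (illustrative)
bits, rel = hard_decisions(channel_llrs(y, noise_var=0.5))
print(bits, rel)
```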
to help model all the variants of iterative algorithms in a universal mathematical formulation ,we have summarized some the properties assumed to be features of these algorithms : ( i ) an iterative algorithm is a dynamical system with a large number of dimensions , possibly depending on many parameters , and distances along trajectories increase ( decrease ) polynomially , sub - polynomially or exponentially .( ii ) it is formed by two or more units interacting with each other ; each unit responds to messages received from the others in a nonlinear manner .( iii ) the system is in general hierarchical : a message may be treated in several different levels ( units ) before reaching the center of action .( iv ) the system in its evolution may be adaptive , i.e. with memory .( v ) local interactions may have global effect : they may produce considerable global change in the system over the time , e.g. the `` wave effect '' in the decoding of an irregular ldpc code .we start by evaluating the turbo decoder , which is useful in its own right , and whose information theoretical analysis has reached a good level of maturity . a typical rate-1/3 turbo code , depicted in fig .[ figure:1turbo ] , is formed of two constituent recursive systematic convolutional ( rsc ) codes , concatenated in parallel through a pseudo - random interleaver .it encodes a block of binary bits into a codeword of binary bits , ] be the noise , induced by the physical channel or transmitter / receiver circuitry , and let be the noisy observation available at the decoder . exploiting the geometric uniformity of the codeword space of a turbo code ( or any practical ecc ) , we can model the turbo decoder as a discrete - time dynamical system in constant evolution : where the superscript denotes the number of half iterations , , and are the parameters of the dynamical system , and and are nonlinear functions describing the constituent rsc sub - decoders , reflecting in general an implementation of the bahl - cocke - jelinek - raviv ( bcjr ) decoding algorithm , the soft viterbi algorithm ( sova ) , or their variations . when takes on a value of a few thousand or larger , as in a practical scenario , this -dimensional -parametrized nonlinear dynamical system becomes too complex to characterize or visualize . to make the problem tractable, we propose to `` project '' these dimensions to one or a few `` critical '' ones .borrowing insight developed from conventional decoder analysis and after performing a careful evaluation , we propose to project the parameters into a single parameter , the ( approximated ) signal - to - noise ratio ( snr ) , \ , ||^2 ] where .the nonlinear dynamical system remains in constant evolution in the -dimensional space with parameters ( as a real turbo decoder does ) , but characterizing the system using reduced dimensions drastically simplifies the analysis , enabling a better visualization and understanding of the further behavior .[ figure:1turbo ]consider noise samples represented by = [ z_1 , z_2 , . .. , z_{3k}]$ ] .different vectors of noise samples are said to have the same noise realization , if they have the same fixed ratios between consecutive sample values , , , , .thus for a given noise realization , the noise vector is completely determined by the ( approximated ) snr . in general , should be chosen sufficiently large to make a close approximation of the true channel snr .extensive simulations are performed in our preliminary study . 
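the following sketch illustrates the notion of a noise realization introduced above : two noise vectors share a realization when they are proportional , and a fixed realization can be rescaled to any target ( approximated ) snr . since the exact projection formula is not fully legible in the source , the snr is taken here , as an assumption , to be the ratio of signal power to per - sample noise power .

```python
import numpy as np

def same_realization(z1, z2, tol=1e-9):
    """two noise vectors share a realization if they are proportional,
    i.e. the ratios between consecutive samples coincide."""
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    c = float(np.dot(z2, z1) / np.dot(z1, z1))   # best proportionality constant
    return np.allclose(c * z1, z2, atol=tol)

def scale_to_snr(z_shape, signal_power, target_snr_db):
    """rescale a fixed noise realization so that the block matches a target
    (approximated) snr; the snr definition used here is an assumption."""
    z = np.asarray(z_shape, float)
    noise_power = signal_power / (10.0 ** (target_snr_db / 10.0))
    z_unit = z / np.sqrt(np.mean(z ** 2))        # unit per-sample power
    return np.sqrt(noise_power) * z_unit

rng = np.random.default_rng(0)
z0 = rng.standard_normal(3 * 1024)               # one noise realization of length 3k
z_low = scale_to_snr(z0, signal_power=1.0, target_snr_db=-3.0)
z_high = scale_to_snr(z0, signal_power=1.0, target_snr_db=3.0)
print(same_realization(z_low, z_high))           # True: same realization, different snr
```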
a whole range of phenomena known to occur in nonlinear dynamical systems , including fixed points , bifurcations , oscillatory behavior , period - doubling , limited cycles , chaos and transient chaos , are observed in the iterative decoding process as the snr increases ( fig . [ fig:2a][fig:2i ] ) . some of these phenomena were noted in previous studies , but we also report interesting new discoveries . for each motion type , we provide two pictures : a wave picture illustrating the change of the mean magnitude of the llrs and the minimum magnitude of the llrs ( y - axis ) as a function of the number of half iterations ( x - axis ) , and a trajectory picture presenting the phase trajectory from one half iteration to the next . in some cases , we also present a third picture of a zoomed - in trajectory after 500 half iterations . [ figure captions : wave and trajectory pictures , at the indicated snr values , showing the indecisive fixed point , the periodical fixed point losing stability , and chaos . ] the iterative process inevitably starts with and ends at a fixed point . the former , occurring at an asymptotically low snr ( fig . [ fig:2a ] ) , is termed an _ indecisive fixed point _ and is associated with an unacceptably high error probability in our experiment . the latter , occurring at an asymptotically high snr ( fig . [ fig:2i ] ) , denotes a successful decoding convergence to a zero - error _ unequivocal fixed point _ . between the two asymptotic ends is a myriad spectrum of bifurcations , some of which correspond well to known concepts and phenomena from information theory , while others appear foreign and await an in - depth study . as the snr increases from the lowest value , the system at first remains trapped at an indecisive fixed point ( fig . [ fig:2b ] ) , but the long convergence time indicates that the stability of the indecisive fixed point begins to break down . at a slightly higher snr , the indecisive fixed point undergoes a flip bifurcation and a stable _ periodical fixed point _ with period 5 is formed ( fig . [ fig:2c ] ) . a further increase of the snr leads to an increased period from 5 to 10 , showing the _ period - doubling _ phenomenon ( fig . [ fig:2d ] ) . a closer inspection , shown in the zoomed - in trajectory picture , indicates that the motion is not exactly repetitive , but follows an approximate periodic orbit .
as has been verified in the experiment here ( as well as in other complex systems ) , the period will continue to double without bound as the snr increases . the `` discrete '' orbit eventually becomes continuous , presenting a _ limited cycle _ or _ limited ring _ , a closed curve homeomorphic to a circle ( fig . [ fig:2e ] ) . as the snr further increases , the limited ring loses its stability , and the system converges once again to an _ indecisive fixed point _ ( fig . [ fig:2f ] ) . this is rather surprising and is the first time that this type of indecisive fixed point has been observed for turbo decoders . unlike the fixed points at both asymptotic ends , here the phase trajectory oscillates with diminishing amplitude and it takes longer to converge . [ figure captions : wave and trajectory pictures , at the indicated snr values , showing periodic fixed points and unequivocal fixed points . ] next , at a higher snr , the fixed point undergoes a neimark - sacker bifurcation through which the phase trajectory goes into an invariant set and , after a transient period , becomes _ chaos _ ( fig . [ fig:2 g ] ) . the previous limited studies have suggested that after chaos there will be transient chaos and then convergence to an unequivocal fixed point . it is intriguing indeed to report that there actually exists a rich variety of motion types between chaos and the asymptotic unequivocal fixed point . they include : a _ quasi - periodic fixed point _ ( fig . [ fig:2h ] ) , _ transient chaos _ ( fig . [ fig:2i ] ) , a _ periodic fixed point _ ( fig . [ fig:2j ] ) , another _ transient chaos _ with a short transient lifetime ( fig . [ fig:2k ] ) , and eventually the zero - error _ unequivocal fixed point _ ( fig . [ fig:2l ] ) . repeated tests on a large sample of random noise realizations show that , although different realizations produce different bifurcation diagrams , the entire snr range nonexclusively falls into three regions : a low - snr region corresponding to stable indecisive fixed points , a transition region , known in communication jargon as the _ waterfall region _ , in which bifurcations occur , and a high - snr region corresponding to stable unequivocal fixed points . it can be proven that ( 1 ) for an iterative estimator / decoder that is probabilistic - inference based , given any noise realization and any prescribed confidence level , there exists an snr threshold such that for any snr below it the iterative algorithm converges , with a probability greater than the prescribed level , to a unique and stable indecisive fixed point ; ( 2 ) likewise , there exists an snr threshold such that for any snr above it the iterative estimator / decoder starting with an unbiased initialization converges , with a probability greater than the prescribed level , to a stable unequivocal fixed point that corresponds to zero decoding errors .
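for reference , the two curves of the wave pictures ( the mean and the minimum magnitude of the llrs per half iteration ) and the points of the trajectory pictures can be computed with a few lines of code ; the stand - in llr history below is purely illustrative .

```python
import numpy as np

def wave_statistics(llr_history):
    """llr_history: list of llr vectors, one per half iteration.
    returns (mean |llr|, min |llr|) per half iteration -- the two curves
    drawn in the wave pictures."""
    mean_mag = np.array([np.mean(np.abs(l)) for l in llr_history])
    min_mag = np.array([np.min(np.abs(l)) for l in llr_history])
    return mean_mag, min_mag

def phase_trajectory(mean_mag):
    """pairs (x_k, x_{k+1}) of the projected state from one half iteration to
    the next -- a simple delay-embedding view of the trajectory picture."""
    return list(zip(mean_mag[:-1], mean_mag[1:]))

rng = np.random.default_rng(1)
history = [rng.normal(0.0, 1.0 + 0.3 * k, size=1024) for k in range(20)]  # stand-in llrs
mean_mag, min_mag = wave_statistics(history)
print(mean_mag[:3], min_mag[:3])
print(phase_trajectory(mean_mag)[:3])
```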
also of interestis the rich variety of fixed points we observed in the waterfall region , none of which were reported previously .these fixed points behave much like chaotic ( sensitive ) non - hyperbolic attractors .( chaos is a special class of aperiodic , nonlinear dynamical phenomenon , and is characterized by a prominent feature of `` sensitivity to initial conditions . ''this feature , commonly known as the `` butter - fly effect '' , states that a small perturbation to the initial state would lead to huge and drastically different changes later on . ) in comparison , the fixed points at the two extreme ends of snr are hyperbolic attractors , where distances along trajectories decrease exponentially in complementary dimensions in the ambient space .these newly observed fixed points , some or all of which may or may not occur depending sensitively on the specific noise realization , are usually associated with a few detection errors .they appear to provide support , from the dynamical system perspectives , for the information theoretic conjecture that there exists one or more pseudo codewords in the vicinity of a correct codeword ( i.e. a few bits of hamming distance away ) .they may also correspond well to the coding concept of _ stopping set _ and _ trapping set _ , which characterize an high - snr undesirable convergence in the bsc ( binary symmetric channel ) decoding model and the gaussian decoding model , respectively .it is particularly worth noting that in almost all the motion stages , the mean and the minimum magnitude of llrs fluctuate in a rather notable manner as the number of iteration increases ( see the wave pictures ) . since the mean magnitude of llris shown to relate fairly well with the percentage of errors occurred in each frame , it is therefore reasonable to predict that the per - block error number will also fluctuate with iterations .for example , for a short block of 1024 bits , we have observed that the number of errors in a particularly block can easily vary between 80 and 120 , in a `` quasi - periodic '' z - crease manner . in other words , it is highly likely that an early lucky iteration may save both complexity and less of erroneous bits than a longer , unlucky iteration . since where the decoder stopsalso makes a difference in per - block performance , questions arise as how to stop at the right iteration , and how much benefit there is .we propose the following rule of thumb to detect z - crease : \i ) the minimum magnitude of llr , , is a very accurate indicator of whether or not the iterative decoder has successfully converged to the unequivocal attractor ( i.e. the correct codeword ) . in a correct convergence ( i.e. when the attractor is the unequivocal fixed point or the unequivocal chaos attractor ) , the minimum magnitude and the mean magnitude of llr will both increases with iterations .otherwise , the iterative process is trapped in some local minimum ( which corresponds to the indecisive fixed point , the quasi - periodical cycles , and the indecisive chaos ) . 
in such as case , the average magnitude of llr may continue to increase with iterations at a decent pace , but the minimum magnitude will remain at a very low value , sending a clear signal of unsuccessful convergence .it is thus convenient to set a threshold for to indicate decoding success .\ii ) the z - crease is most prominent ( with large error fluctuation ) in the quasi - periodical cycle and the indecisive chaos stages .hence , it is beneficial to detect z - crease phenomenon as early as possible and to terminate decoding at the earliest `` best '' iteration .since the z - crease of the bit errors is almost always accompanied with a z - crease of , we suggest using to detect the z - crease of errors . here is a simple but rather effective method : each local maximum point of is taken as a _ candidate point _ , which is like the `` local optimal point '' .we predict that z - crease is occurring when the value of any one candidate point is lower than the value of its previous candidate point .following these observations , we also propose a heuristic stopping criterion and suggest performing iterative decoding in the follow manner : the iterative decoder keeps track of and , and terminate when any one of the following conditions happens : 1 .when increases above a threshold .2 . when of any one candidate point is lower than of the previous candidate point .if the decoder stops at condition 1 ) , the current bit decisions and the current iteration are considered as our `` best shots . ''if the decoder stops at condition 2 ) , then we suggest the decoder trace back to the previous candidate point and use the bit decisions of that iteration as the final decision .otherwise , the decoder will proceed to reach the maximum iteration cap without voluntary stop .it should be noted that previous researchers have also used the mean magnitude of llr for early stopping purpose , but it was used in a different way that did not recognize the z - crease phenomenon . to the best of our knowledge, the minimum magnitude of llr has not been exploited previously .our stopping criterion here is most useful in improving the worst - case per - block performance , but not so much for the average performance ( averaged over lots of blocks ) .it can also cut down the iteration number by or even larger ( especially at low or when the specific frame encounters lucky deep distortion ) , without sacrificing the average performance .we report the z - crease phenomenon in soft - iterative decoding systems , and use the theory of nonlinear dynamics to justify its existence and generality .we show that while the average system error rate performance in general improves ( or , does not deteriorate ) with iterations , for individual frames , more iterations may actually do harm to the decoding decisions . analyzing the dynamical behavior of the system , we further propose a simple stopping criterion based on the minimum magnitude and the mean magnitude of llr to detect successful convergence and determine the right iteration to stop .chung , r. urbanke and t. j. richardson , `` analysis of sum - product decoding of low - density parity - check codes using a gaussian approximation , '' _ ieee trans .inf . theory _ ,657 - 670 , feb . 2001 .l. kocarev , f. lehmann , g. m. maggio , b. scanavino , z. tasev , and a. vardy , `` nonlinear dynamics of iterative decoding systems : analysis and applications , '' _ ieee trans .inf . theory _ ,pp . 166 - 1384 ,april 2006 .
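as a summary of the stopping rule developed above , the following sketch tracks the minimum and the mean magnitude of the llrs , declares convergence when the minimum exceeds a threshold , and otherwise watches the candidate points ( local maxima of the mean magnitude ) for z - crease , tracing back to the previous candidate when one appears . the threshold value and the decoder interface are illustrative assumptions .

```python
import numpy as np

def run_with_stopping(decoder_step, max_iters=64, min_llr_threshold=10.0):
    """decoder_step(i) is assumed to return the llr vector after half iteration i.
    sketch of the stopping rule in the text: stop on condition 1 (min |llr| above a
    threshold) or condition 2 (a candidate point lower than the previous candidate),
    in which case the decisions of the previous candidate iteration are returned."""
    mean_hist, candidates, prev_llr = [], [], None
    for i in range(max_iters):
        llr = np.asarray(decoder_step(i), float)
        mean_mag, min_mag = np.mean(np.abs(llr)), np.min(np.abs(llr))

        # condition 1: minimum |llr| above threshold -> successful convergence
        if min_mag > min_llr_threshold:
            return (llr < 0).astype(int), i

        # a candidate point is a local maximum of the mean |llr| curve
        if len(mean_hist) >= 2 and mean_hist[-1] > mean_hist[-2] and mean_hist[-1] >= mean_mag:
            candidates.append((i - 1, mean_hist[-1], (prev_llr < 0).astype(int)))
            # condition 2: a candidate lower than the previous one signals z-crease
            if len(candidates) >= 2 and candidates[-1][1] < candidates[-2][1]:
                best_iter, _, best_bits = candidates[-2]
                return best_bits, best_iter

        mean_hist.append(mean_mag)
        prev_llr = llr

    return (llr < 0).astype(int), max_iters - 1   # iteration cap reached without voluntary stop
```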
iterative probabilistic inference , popularly dubbed the soft - iterative paradigm , has found great use in a wide range of communication applications , including turbo decoding and turbo equalization . the classic approaches to analyzing these iterative algorithms inevitably use statistical and information - theoretical tools that carry an ensemble - average flavor . this paper considers the per - block error rate performance and analyzes it using nonlinear dynamical theory . by modeling the iterative processor as a nonlinear dynamical system , we report a universal `` z - crease phenomenon '' : a zig - zag , up - and - down fluctuation , rather than a monotonic decrease , of the per - block errors as the number of iterations increases . using the turbo decoder as an example , we also report several interesting motion phenomena which were not previously reported , and which appear to correspond well with the notions of `` pseudo codewords '' and `` stopping / trapping sets . '' we further propose a heuristic stopping criterion to control z - crease and identify the best iteration . our stopping criterion is most useful for controlling the worst - case per - block errors , and it helps to significantly reduce the average number of iterations .
for a given real let be the set of all ] .then for any such that we have \leq \inf_{h>0}\left\ { e^{-ht}\left(1-p + pe^h\right)^n\right\ } .\ ] ] furthermore , the function in the last expression is the so - called _ hoeffding bound _ ( or _ hoeffding function _ ) on tail probabilities for sums of independent , bounded random variables . throughout this paper, we will denote by a bernoulli random variable with mean and by a binomial random variable of parameters and . if two random variables have the same distribution we will write .we remark that the hoeffding bound is sharp , in the sense that the bernoulli random variables attain the bound , i.e. , \right\}^n = h(n , p , t ) , \ ] ] where is a random variable .the main ideas behind this work are hidden in the fact that = \mathbb{e}\left[e^{hb}\right],\ ] ] where is a binomial random variable of parameters and ] , where , instead of the tail ] because it fits better to our goals. a slightly looser but more widely used version of hoeffding s bound is the function , which follows from the fact that ( see , formula ( 2.3 ) ) .+ there exists quite some work dedicated to improving hoeffding s bound .see for example the work of bentkus , pinelis , siegel and talagrand , just to name a few references .let us bring the reader s to attention the following two results that are extracted from the papers of talagrand ( , theorem ) and bentkus ( , theorem ) .talagrand s paper focuses on obtaining some `` missing '' factors in hoeffding s inequality whose existence is motivated by the central limit theorem ( see , section ) .these factors are obtained by combining the bernstein - hoeffding method together with a technique ( i.e. suitable change of measure ) that is used in the proof of cramr s theorem on large deviations , yielding the following .+ [ talagr ] let be , given , real numbers from the interval .let also the random variables be independent and such that , for each .set .then , for some absolute constant , , and every real number such that , we have \leq \left\ { \theta\left(\frac{t - np}{\sqrt{np(1-p)}}\right)+ \frac{k}{\sqrt{np(1-p ) } } \right\}\cdot h(n , p , t ) , \ ] ] where is the hoeffding bound and is a non - negative function such that see for a proof of this theorem and for a precise definition of the function .in other words , talagrand s result improves upon hoeffding s by inserting a `` missing '' factor of order in the hoeffding bound . notice that talagrand s result holds true for ] and tails of binomial and poisson random variables .a crucial idea in the results of is to compare ] .then , for any positive real , , such that , we have \leq \inf_{a < t } \frac{1}{t - a } \mathbb{e}\left[\max\{0 , b - a\ } \right ] , \ ] ] where .furthermore , if is additionally assumed to be a _ positive integer _, we have \leq e \cdot\mathbb{p}\left [ b \geq t\right],\ ] ] where .the quantity on the right hand side of the first inequality is estimated in , lemma .we will see in the forthcoming sections that first statement of bentkus result is optimal in a slightly broader sense , i.e. , it is the best bound that can be obtained from the inequality \leq \frac{1}{f(t)}\mathbb{e}[f(b ) ] , \ ] ] where is a non - negative , convex and increasing function . additionally, we will improve upon the constant of the second statement .in this paper we shall be interested in employing the bernstein - hoeffding method to a larger class of generalized moments .such approaches have been already performed by bentkus , eaton , pinelis , . 
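as a numerical illustration of the bounds discussed above , the following sketch evaluates the hoeffding function , its popular exponential relaxation , the bentkus - type bound based on the expectation of max { 0 , b - a } , and the exact binomial tail . the grid searches are implementation conveniences : every grid point already yields a valid upper bound , so the grid minimum does too .

```python
import numpy as np
from math import comb, exp

def binom_pmf(n, p, k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_tail(n, p, t):
    return sum(binom_pmf(n, p, k) for k in range(int(np.ceil(t)), n + 1))

def hoeffding_bound(n, p, t, grid=10000):
    """h(n,p,t) = inf_{h>0} e^{-ht} (1-p+p e^h)^n, evaluated in log space on a
    grid of h; each grid point is itself a valid bound."""
    hs = np.linspace(1e-6, 20.0, grid)
    log_vals = -hs * t + n * np.log1p(p * np.expm1(hs))
    return float(np.exp(log_vals.min()))

def hoeffding_loose(n, p, t):
    """the widely used relaxation exp(-2 (t - np)^2 / n)."""
    return exp(-2.0 * (t - n * p) ** 2 / n)

def bentkus_bound(n, p, t):
    """inf_{a < t} e[max(0, b - a)] / (t - a) for b ~ bin(n, p), over a grid of a."""
    ks = np.arange(n + 1)
    pmf = np.array([binom_pmf(n, p, k) for k in ks])
    best = np.inf
    for a in np.linspace(0.0, t - 1e-6, 2000):
        best = min(best, np.sum(np.maximum(0.0, ks - a) * pmf) / (t - a))
    return best

n, p, t = 100, 0.3, 45                       # t is a positive integer here
print("loose hoeffding :", hoeffding_loose(n, p, t))
print("hoeffding       :", hoeffding_bound(n, p, t))
print("bentkus-type    :", bentkus_bound(n, p, t))
print("e * tail        :", np.e * binom_tail(n, p, t))   # requires integer t
print("exact tail      :", binom_tail(n, p, t))
```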
nevertheless , we were not able to find a systematic study of the classes of functions that are considered in our paper .we now proceed by defining a class of functions that is appropriate for the berstein - hoeffding method .let us call a function _ sub - multiplicative _ if , for all .we will denote by the set of all functions that are sub - multiplicative , increasing and convex .examples of such functions are , for fixed , , for fixed and so on .our first result shows that the bernstein - hoeffding method can be adjusted to the class .+ [ mainhoeff ] let be defined as above . let the random variables be independent and such that , for each . set ] .then , for any fixed real number , , such that , we have \leq \inf_{f\in \mathcal{f}_{ic}(t)}\frac{1}{f(t)}\mathbb{e } [ f(b ) ] , \ ] ] where is a binomial random variable and is the class of functions defined above . in section [ optopt ] we show that the functions that minimise ] .let be a fixed _ positive integer _such that .then \leq \frac{1+h}{e^h}\cdot\left ( h(n , p , t)- t(n , p , t;h ) \right ) + \left(1-\frac{1+h}{e^h}\right ) \mathbb{p}\left[b_{n , p } = t\right],\ ] ] where is the hoeffding function , is a binomial random variable of parameters and , , \ ] ] and is such that , i.e. , it is the optimal real such that = \inf_{s>0}\ ; \frac{1}{e^{st } } \mathbb{e}[e^{sb}],\ ] ] with .let us illustate that the bound of the previous result is an improvement upon hoeffding s inequality .indeed , notice that the bound of the previous theorem is \ ] ] and the later quantity is a convex combination of and ] . then , for any fixed _ positive integer _ , , such that , we have \leq \frac{t - tp}{t - np}\cdot\mathbb{p } [ b\geq t ] , \ ] ] where is a binomial random variable .note that for large , say , the previous result gives an estimate for which .however , for values of that are close to , the previous result provides estimates for which can be arbitrarily large .+ in section [ bernhoeff ] we generalise the bernstein - hoeffding method to sums of bounded , independent random variables for which the first moments are known .more precisely , for given real numbers , let be the set of all ]. then \leq \inf_{f\in \mathcal{f}_{sic } } \;\frac{1}{f(t ) } \left\{\mathbb{e}\left [ f(t_{nm})\right]\right\}^n,\ ] ] where is the random variable that takes on values in the set and , for , it satisfies = \frac{1}{n}\sum_{i=1}^{n}\binom{m}{j } \mathbb{e}\left[x_i^j(1-x_i)^{m - j}\right ] .\ ] ] to our knowledge , this is the first result that considers the performance of the method under additional information on higher moments .notice that the probability distribution of the random variable does _ not _ depend on the random variables . indeed , using the binomial formula ,it is easy to see that = \sum_{k=0}^{m - j}\binom{m - j}{k}(-1)^{m - j - k } \mu_{i , m - k}\ ] ] and so is uniquely determined by the given sequences on moments .we will refer to the random variable that takes values on the set with probability ] and binomial tails that depend on the additional information on the moments . 
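the auxiliary random variable built from the first m moments can be tabulated directly from the stated formulas ; the sketch below does so and checks that the result is a probability distribution whose mean equals the average of the given first moments . the moment values in the example are illustrative .

```python
import numpy as np
from math import comb

def mixed_moment(mu, j, m):
    """e[x^j (1-x)^(m-j)] from the raw moments mu = [1, mu_1, ..., mu_m],
    via the binomial expansion given in the text."""
    return sum(comb(m - j, k) * (-1) ** (m - j - k) * mu[m - k] for k in range(m - j + 1))

def t_nm_pmf(moment_lists, m):
    """pmf of the auxiliary random variable supported on {0, 1/m, ..., 1};
    moment_lists[i] = [1, mu_{i,1}, ..., mu_{i,m}] are the moments of x_i."""
    n = len(moment_lists)
    probs = np.zeros(m + 1)
    for j in range(m + 1):
        probs[j] = sum(comb(m, j) * mixed_moment(mu, j, m) for mu in moment_lists) / n
    return np.arange(m + 1) / m, probs

# two [0,1]-valued variables with known first two moments (illustrative values)
moments = [[1.0, 0.30, 0.15], [1.0, 0.50, 0.30]]
support, probs = t_nm_pmf(moments, m=2)
print(support, probs)
print(probs.sum(), (support * probs).sum())   # 1.0 and (0.30 + 0.50) / 2
```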
in section [ bernhoeffgame ]we study the performance of the method on a certain class of bounded random variables that contain additional information on conditional means and/or conditional distributions .we find random variables that are larger , in the sense of convex order , than any random variable from this class and prove similar results as above that take into account the additional information .our approach is based on the notion of mixtures of random variables .additionally , we construct random variables that are _ different _ from bernstein random variables and are larger , in the sense of convex order , than any random variable from the class , consisting of all random variables in whose variance is . in particular , in section [ bernhoeffgame ]we prove the following .+ [ xitheorem ] fix positive integer and assume that , for , we are given a pair for which the class is non - empty .let be independent random variables such that , for .set ] and is such that =p ] and -a}{b - a} ] , we have \leq \mathbb{e}[f(b ) ] .\ ] ] given , we couple the random variables by setting to be either equal to with probability , or equal to with probability . it is easy to see that =x ] .this follows from the fact that is obtained by minimising the expression on the right hand side of the above inequality with respect to .proposition [ integer ] shows that , in case is a positive integer , hoeffding s bound is the best the can be obtained in a slightly broader sense , i.e. , is the best bound on ] with respect to .in this section we prove theorem [ maintwo ] and show that the hoeffding bound can be improved using a larger class of functions , namely the class , defined in the introduction . once again , theorem [ maintwo ] implies that there may be some space for improvement upon hoeffding s bound .we will employ this result and en route find a function such that < \inf_{h>0 } e^{-ht } \mathbb{e}[e^{hb } ] , \ ] ] where .hence there is indeed space for improvement upon hoeffding s bound .the proof of theorem [ maintwo ] will require some well - known results and the following notion of ordering between random variables ( see ) .+ let and be two random variables such that \leq \mathbb{e}[f(y ) ] , \ ; \text{for all convex functions } \ ; f:\mathbb{r}\rightarrow \mathbb{r},\ ] ] provided the expectations exist . 
then is said to be smaller than in the _ convex order _, denoted .the following two lemmas are well - known ( see theorems and in and theorem in ) .the first one shows that convex order is closed under convolutions .+ [ convexone ] let be a set of independent random variables and let be another set of independent random variables .if , for , then the second lemma shows that a sum of independent bernoulli random variables is dominated , in the sense of convex order , by a certain binomial random variable .+ [ convextwo ] fix real numbers from .let be independent bernoulli random variables with .then where is a binomial random variable of parameters and .the proof of theorem [ maintwo ] is basically an extension of the proof of theorem [ mainhoeff ] .fix .since is non - negative and increasing in , markov s inequality implies that \leq \frac{1}{f(t ) } \mathbb{e}\left [ f\left(\sum_{i=1}^{n}x_i\right ) \right].\ ] ] since is convex , lemmata [ coupling ] and [ convexone ] imply that \leq \mathbb{e}\left [ f\left(\sum_{i=1}^{n}b_i\right ) \right ] , \ ] ] where ) , i=1,\ldots , n ] and fix a real number , , such that .we have shown in the previous section that , for , we have \leq \frac{1}{f(t)}\mathbb{e}\left [ f\left(b\right ) \right ] , \ ] ] where .set = \frac{1}{f(t ) } \sum_{i=0}^{n } f(i)\cdot \mathbb{p}[b = i ] .\ ] ] in this section we solve the problem of finding , where the infimum is taken over all functions .we show that the solution is related to bentkus result .we begin with an observation on the optimal function . + [ opt ] let be a function such that , where the infimum is taken over all functions .then we may assume that . if , then we set .using this result we can find functions that minimise .+ [ yuyipr ] let be such that , where the infimum is taken over all functions .then equals , for some .we may assume that and so is such that = t_n(\phi , t),\ ] ] where the infimum is taken over the set , containing all functions such that .let be the smallest positive integer that is larger than .note that , by definition , . for , define the function in other words , equals zero for and for it is a straight line starting from point and passing through the points and .note that and that ; indeed , if , then and the function would be such that \leq \mathbb{e}[\phi(b)] ] is even worse than the bound obtained by markov s inequality , hence contradicts its optimality .since the function is convex it follows that for every integer in the interval ] . since it follows that .now , finding the optimal is equivalent to finding the optimal .we are _ not _ able to find this . nevertheless , due to the following result , one can easily find using , say , a binary search algotithm .+ [ yuyiprtwo ] let the parameters be as in theorem [ maintwo ] . 
let be such that = \inf_{s>0 } \ ; \mathbb{e}\left[\max\{0 , s\cdot(b - t)+1\ } \right ] , \ ] ] where .then we may assume that , for some positive integer .recall that ] is linear on the interval ] and this implies that it attains its minimum at the endpoints of ] is strictly smaller than hoeffding s bound .since we assume that it follows that which in turn implies , since is an integer , that , for all .hence we can write & = & \sum_{i=0}^{n}e^{h(i - t)}\mathbb{p}[b = i ] - \sum_{i = t+1}^{n}(h(i - t)+1)\mathbb{p}[b = i ] \\ & = & \sum_{i=0}^{t-1}e^{h(i - t)}\mathbb{p}[b = i ] \\ & + & \sum_{i = t+1}^{n } \left(e^{h(i - t)}-(h(i - t ) + 1 ) \right)\mathbb{p}[b = i].\end{aligned}\ ] ] for , we have which implies that & \geq & \left ( 1- \frac{1+h}{e^h } \right ) h(n , p , t ) + \frac{1+h}{e^h}\cdot \sum_{i=0}^{t-1}e^{h(i - t)}\mathbb{p}[b = i]\\ & - & \left ( 1- \frac{1+h}{e^h } \right)\mathbb{p}\left[b_{n , p}=t\right ] . \end{aligned}\ ] ] the result follows .if is _ not _ an integer , then one may use the previous bound with replaced by since \leq \mathbb{p}\left[\sum_{i=1}^{n}x_i\geq \lfloor t\rfloor \right ] .\ ] ] this result improves upon hoeffding s bound by fitting a `` missing '' factor that is equal to .theorem [ maintwo ] allows to perform comparisons with binomial tails .let so that and .theorem [ maintwo ] implies that \leq \mathbb{e}[\psi(b)],\ ] ] where .since is a positive integer , we can write = \sum_{i = t}^{n } ( i - t+1 ) \mathbb{p}[b = i ] = \sum_{i = t}^{n}\mathbb{p}[b\geq i ] .\] ] now we use the following , well - known , estimate on binomial tails ( see feller , page 151 , formula ) : \leq \frac{i - ip}{i - np } \cdot\mathbb{p}[b = i ] , \ ; \text{for } \ ; i > np .\ ] ] therefore , \leq \sum_{i = t}^{n } \frac{i - ip}{i - np } \cdot\mathbb{p}[b = i ] \leq \frac{t - tp}{t - np } \cdot\mathbb{p}[b\geq t],\ ] ] as required . compare this result with the second statement of bentkus theorem [ ben ] , from section [ motrel ] .note that for large , say , the previous result gives an estimate for which .in a subsequent section we will show an extension of this result .we begin this section with the proof of theorem [ moments ] .the proof borrows ideas from the theory of bernstein polynomials ( see phillips , chapter ) . recall that , for a function \rightarrow \mathbb{r} ] is convex , then .\ ] ] if \rightarrow [ 0,\infty) ] , lemma [ bernst ] implies that \leq \mathbb{e}\left[b_m\left(f , x_i\right ) \right ] .\ ] ] now note that = \sum_{j=0}^{m } \binom{m}{j } \cdot\mathbb{e}\left[x_i^j(1-x_i)^{m - j } \right ] \cdot f(j / m).\ ] ] for let .\ ] ] notice also that = \sum_{k=0}^{m - j}\binom{m - j}{k}(-1)^{m - j - k}\mu_{i , m - k}\ ] ] which implies that ] .since converges uniformly to , as , we conclude that ] , provided that is sufficiently large .recall the definition of the class from the introduction .+ [ momopt ] fix positive integers , and for let be a sequence of reals such that and for which the class is non - empty .let be independent random variables such that , for , and fix ] , for convex .since the convex order is closed under convolutions , the first statement follows .the proof of the second statement is almost identical to the proof of theorem [ yuyipr ] . in the previous result we found a random variable such that , for every .note that =\mathbb{e}[x_i] ] and so may _ not _ belong to .notice that this is not the case when ; i.e. , when we consider random variables . 
in this case( see theorem [ maintwo ] ) we were able to find bernoulli random variables from the class such that \leq \mathbb{e}\left[f(b_i)\right] ] , for all and all increasing and convex functions ?it turns out that the answer to the question is _ no_. in order to convince the reader we will use lemma [ cohhen ] below , taken from cohen et al .let us first fix some notation .if , let be its variance . set and and let be the random variable that takes on the values and with probability and , respectively .it is easy to verify that has mean and variance .the following result is proven in cohen et al . and implies that has the maximum moments of any order , among all random variables in .+ [ cohhen ] let and let be the random variable defined above. then \leq \mathbb{e}\left[c^{k}\right] ] , for any .note that the inequality in the conclusion is strict .the previous lemma implies that \leq \mathbb{e}\left[c^{k}\right] ] ; this follows from the fact that the sequence of moments uniquely determines that random variable ( see feller , chapter vii.3 ) .therefore , taylor expansion implies that < \mathbb{e}\left[e^{hc}\right] ] , for all .denote this by .it is well known , and not so difficult to prove , ( see ) that if and only if \leq \mathbb{e}[f(v)] ] random variable .the following result is an analogue of theorem [ binbin ] that takes into account the additional information on the moments .+ fix positive integers , . for be an -tuple of real numbers such that and for which the class is non - empty .let be independent random variables such that , for .for set ^{1/j} ] and let be the interval . if , for some , then \leq \min_{1\leq s\leq j}\;\left\{\frac{(st - s+1)(1-q_s)}{s(st - s+1-nq_s)}\cdot \mathbb{p}[bi(ns , q_s)\geq st - s+1]\right\ } , \ ] ] where , for .note that ^{1/j}\}_{j=1}^{m} ] , for .since is an increasing function , and the stochastic order is closed under convolutions , we conclude that \leq \mathbb{e}\left[\max\{0 , \xi_{ns}-t+1\}\right],\ ] ] where is the independent sum of s .now , where ^{1/s}\right) ] ; where , for , we set to be the interval and ] .formally , = \mu_j , \ ; \text{for}\ ; j=1,\ldots , m\ } .\ ] ] finally , let be the class consisting of all random variables in for which = q_j ] be a convex function .fix positive integer and real numbers . for ,let be the interval and let ] and \leq \mathbb{e}[f(\xi_x)] ] .now and so lemma [ coupling ] implies that , where is the random variable that takes that values and with probabilities }{r_j - r_{j-1}} ] , the value with probability ] .note that the random variable of the previous lemma depends on the conditional probabilities , j=1,\ldots , m ] .so , in case we know the conditional probabilities and the conditional means of the random variables , we can find random variables that are larger , in the sense of convex order , than any random variable having the same conditional probabilities and means .similarly , one can find a random variable that is larger , in the sense of convex order , than any random variable fromthe class , i.e. , when we know conditional means .+ [ mixber ] let be the class defined above , corresponding to a given partition of the interval ] , for all .now let be a mixture of the random variables and ; we take to be equal to with probability and equal to with probability .note that = p ] and let \rightarrow \mathbb{r} ] and ) ] . 
hence & = & \sum_{j=1}^{n}\mathbb{e}\left[f(x_j)\right]\cdot \mathbb{p}\left[x\in i_j\right]\\ & \leq & p_1 \mathbb{e}\left[f(t_1)\right ] + p_m \mathbb{e}\left[f(t_m)\right ] + \sum_{j=2}^{m-1}p_j \mathbb{e}\left[f(x_j)\right]\\ & \leq & p_1 g(\mu_1 ) + p_m g(\mu_m ) + \sum_{j=2}^{m-1}p_j \mathbb{e}\left[g(x_j)\right ] .\end{aligned}\ ] ]since is linear , we have \big ) = \mathbb{e}\left[g(x_j)\right ] , j=1,\ldots , m ] . summarising , we have shown that \leq \mathbb{e}\left [ g(\xi)\right].\ ] ] once again , linearity of implies & = & \frac{\mu_m - p}{\mu_m -\mu_1 } \mathbb{e}\left[g(b_1)\right ] + \frac{p-\mu_1}{\mu_m -\mu_1 } \mathbb{e}\left[g(b_1)\right ] \\ & = & \frac{\mu_m - p}{\mu_m -\mu_1 } g(\mu_1 ) + \frac{p-\mu_1}{\mu_m -\mu_1 } g(\mu_m ) \\ & = & \frac{\mu_m - p}{\mu_m -\mu_1 } \mathbb{e}\left[f(t_1)\right ] ) + \frac{p-\mu_1}{\mu_m -\mu_1 } \mathbb{e}\left[f(t_m)\right ] ) \\ & = & \mathbb{e}\left[f(\xi ) \right ] \end{aligned}\ ] ] and the result follows .the following theorem can be regarded as an improvement upon hoeffding s in the case where one has additional information on the conditional means of the random variables .+ let the random variables be independent and such that .fix positive integer and real numbers . for ,let be the interval and let ] .let ] .assume further that there is a sequence such that the class is non - empty .then there is a such that and \leq \mathbb{e}\left[e^{h\xi}\right ] , \ ; \text{for all } \; x\in \mathcal{c}(p,\{i_j , q_j\ } ) , \ ] ] where is such that , i.e. , it is the optimal real such that = \inf_{s>0}\ ; \frac{1}{e^{st } } \mathbb{e}[e^{sb}],\ ] ] with .the random variable depends on the solution of a linear program . since = p ] from lemma [ mix ] we know that , for , there is a random variable that concentrates mass on the set such that takes the value with probability , the value with probability and , for , the value with probability .therefore , = \pi_0 e^{hr_0 } + \pi_me^{hr_m } + \sum_{j=1}^{m-1 } \pi_j e^{hr_j } , \ ] ] which implies that ] subject to the following linear constraints : , for and .let us , for convenience , change a bit our notation and set to be the class of random variables from whose variance is . throughout this sectionwe will assume that is strictly positive .hence . from proposition[ impossible ] we know that there does _ not _ exist such that , for all . from lemma [ coupling ] we know that , for all but does _ not _ belong to the class , when . in theorem [ momopt ]we have obtained , using bernstein polynomials , a random variable , that does _ not _ belong to , such that . in this sectionwe will construct another random variable having this property .more precisely , we will prove the following .+ [ xirandom ] there exists a random variable such that for all .depending on the value of and , the random variable can yield efficiently computable bounds that are sharper than existing , well - known , bounds . after stating our main results, we will provide some figures that illustrate the differences between the bounds . 
in order to construct will apply lemma [ mix ] to the partition ] of ] for the function \rightarrow [ 0,\infty) ] and ] .+ assume now that \geq \mathbb{p}[\xi_y = p] ] , we have and so -\mathbb{e}\big[f(\xi_y)\big ] & \leq & f(0)\cdot \left(\mathbb{p}[\xi_x = 0]-\mathbb{p}[\xi_y = 0]\right ) \\ & + & f(1)\cdot \left(\mathbb{p}[\xi_x = 1]-\mathbb{p}[\xi_y = 1]\right ) \\ & + & ( ( 1-p)\cdot f(0)+p\cdot f(1))\cdot \left(\mathbb{p}[\xi_x = p]-\mathbb{p}[\xi_y = p]\right ) \\ & = & f(0)\cdot \left(\mathbb{e}\big[1-\xi_x\big ] -\mathbb{e}\big[1-\xi_y\big ] \right ) + f(1)\cdot \left(\mathbb{e}\big[\xi_x\big ] -\mathbb{e}\big[\xi_y\big ] \right ) \\ & = & 0 , \end{aligned}\ ] ] where the last equality comes form the fact that = \mathbb{e}\big[\xi_y\big] ] . from lemma [ mix ]we know that there is a random variable such that =\mathbb{e}[\xi_x] ] and , for .in addition , concentrates mass on the set and concentrates mass on the set .assume that is equal to with probability .clearly , depends on and we now show how one can get rid of this dependence .define to be the random variable for which = \min_{y\in \mathcal{b}(p,\sigma^2)}\mathbb{p}\big[\xi_y = p\big ] .\ ] ] from lemma [ nice ] we have , for all .set ] .off course , depend on . since + ( 1-\theta_x)\mathbb{e}[x_2] ] is a decreasing function of and of . similarly , one can check that = \frac{\ell_1\ell_2}{p(\ell_1+\ell_2 ) } \quad \text{and}\quad \mathbb{p}\left[\xi_x=1\right ] = \frac{\ell_1\ell_2}{(1-p)(\ell_1+\ell_2 ) } .\ ] ] by the law of total variance we have = \theta_x \text{var}[x_1 ] + ( 1-\theta_x)\text{var}[x_2]+ \theta_x \ell_1 ^ 2 + ( 1-\theta_x)\ell_2 ^ 2.\ ] ] hence or , equivalently , . since ] it is enough to solve the following optimization problem : elementary , though quite tedious , calculations show that the optimal solution equals therefore , the required random variable has the following distribution : * if , then takes the values and with probability , and respectively . *if , then takes the values and with probability , and , respectively . * if , then takes the values and with probability , and , respectively .the proof of theorem [ xitheorem ] is an application of lemma [ xirandom ] .it is very similar to the proof of theorem [ maintwo ] and theorem [ yuyipr ] and so we briefly sketch it .the first statement can be proven in the same way as theorem [ maintwo ] .the second statement follows from the fact that and by looking at the smallest positive integer , , that is .as in theorem [ yuyipr ] , we can find an such that the function , , that is equal to for and , for , it is a straight line passing through the points and satisfies and \leq \mathbb{e}\left[f\left(\sum_i \xi_{p_i,\sigma_i}\right)\right],\ ] ] for a supposedly optimal function with .it is not easy to find a closed form of the bound given by theorem [ xitheorem ] .nevertheless , the bound can be easilly implemented .note that the previous bound concerns functions from the class .we end this section by performing some pictorial comparisons between several bounds discussed in this article . before doing so ,let us bring to the reader s attention the following , well - known , bound that is due to bennett .bennett s approach was simplified by cohen et al . . in particular , by employing the bernstein - hoeffding method to the exponential function , cohen et al .have shown the following . + fix positive integer and assume that we are given a pair for which the class is non - empty .let be independent random variables such that , for .. 
then \leq \left\{\left(\frac{\alpha}{\beta}\right)^{\beta } \left ( \frac{1-\alpha}{1-\beta}\right)^{1-\beta } \right\}^n,\ ] ] where and .see .+ + + our numerical experiments suggest that , when is not very small , the bound given by theorem [ xitheorem ] is tighter than bennett s bound .note that we can also apply the bound given by theorem [ momopt ] to random variables from the class ; it is not difficult to implement this bound . in order to build a concrete mental imagelet us fix the parameter and consider random variables such that in a similar way as in proposition [ yuyiprtwo ] one can show that it suffices to consider the infimum , in the bound of theorem [ xitheorem ] , over the set .we can now put the computer to work to calculate the bound .\ ] ] figure [ fig : compare ] shows comparisons between bennett s bound , the bound obtained in theorem [ momopt ] and the bound of theorem [ xitheorem ] .the abscissae in these figures correspond to the variance .notice that , when the variance is large , the bounds given by theorems [ momopt ] and [ xitheorem ] are sharper than the bennett bound . in the next section we stretch a limitation of the bernstein - hoeffding method .so far we have employed the bernstein - hoeffding method to sums of independent and _ bounded _ random variables .the reader may wonder whether the method can be employed in order to obtain bounds on deviations from the expectation for sums of independent , non - negative and _ unbounded _ random variables .we will show , in this section , that in this case the method yields a bound that is the same as the bound given by markov s inequality .let us remark that this fact was already known to hoeffding ( see the footnote in , page ) but we were not able to find a proof ; we include a proof for the sake of completeness .hence the case of non - negative and unbounded random variables requires different methods and the reader is invited to take a look at the work of samuels , , and feige for further details and references .the case of non - negative and unbounded random variables seems to be less investigated than the case of bounded random variables .talagrand ( see , page 692 , comment ) already mentions that it is unclear how to improve hoeffding s inequality without the assumption that the random variables are bounded from above .let us fix some notation .+ for given , let the class of non - negative random variables whose mean equals .formally , =\mu\ } .\ ] ] now , for , fix and .if , then one can estimate \leq \frac{1}{f(t ) } \mathbb{e}\left [ f\left(\sum_{i=1}^{n}x_i\right)\right],\ ] ] where is a non - negative , convex and increasing function .a crucial step in the bernstein - hoeffding method is to minimise the right hand side of the last inequality with respect to .we may assume that we minimise over those functions for which .we now show that this minimisation leads to a bound that is the same as markov s .note that markov s inequality yields \leq \frac{\sum_i \mu_i}{t}$ ] .recall the definition of the class , from the introduction , and let be the class consisting of all functions such that . in this sectionwe report the following .+ for , let be the random variable that takes the values and with probabilities and , respectively . 
clearly , we have .\ ] ] in a similar way as in theorem [ yuyipr ]one can show that = \inf_{\varepsilon \in [ 0,t ) } \mathbb{e}\left[\max\left\{0 , \frac{\sum_i y_i -\varepsilon}{t-\varepsilon}\right\}\right].\ ] ] since , a similar argument as in proposition [ yuyiprtwo ] shows that the optimal in the right hand side of the last equation is equal to .therefore , =\frac{1}{t } \mathbb{e}\left [ \sum_{i=1}^{n } y_i \right ] = \frac{1}{t } \sum_{i=1}^{n } \mu_i\ ] ] and the result follows .hence , in the case of non - negative and unbounded random variables , the method can not yield a bound that is better than markov s bound . +* acknowledgements * the authors are supported by erc starting grant 240186 `` migrant , mining graphs and networks : a theory - based approach '' .we are grateful to xiequan fan for several valuable suggestions and comments .a. cohen , y. rabinovich , a. schuster , h. shachnai , ( 1999 ) ._ optimal bounds on tail probabilities : a study of an approach _ , advances in randomized parallel computing , comb .optim . , vol .5 , kluwer acad . publ . , p. 124 .s. samuels , ( 1968 ) _ more on a chebyshev - type inequality for sums of independent random variables _ , purdue stat .mimeo , ser .s. samuels , ( 1969 ) ._ the markov inequality for sums of independent random variables _ , annals of math .40 ( 6 ) , p. 19801984 .
we show that the bernstein - hoeffding method can be applied to a larger class of generalized moments . this class includes the exponential moments whose properties play a key role in the proof of a well - known inequality of wassily hoeffding for sums of independent and bounded random variables whose mean is assumed to be known . as a result we can generalise and improve upon this inequality . we show that hoeffding s inequality is optimal in a broader sense . our approach allows us to obtain `` missing '' factors in hoeffding s inequality whose existence is motivated by the central limit theorem . the latter result is a rather weaker version of a theorem that is due to michel talagrand . using ideas from the theory of bernstein polynomials , we show that the bernstein - hoeffding method can be adapted to the case in which one has information on higher moments of the random variables . moreover , we consider the performance of the method under additional information on the conditional distribution of the random variables and , finally , we show that the method reduces to markov s inequality when applied to non - negative and unbounded random variables . _ keywords _ : hoeffding s inequality , convex orders , bernstein polynomials
buildings across the world contribute significantly to total energy usage .figure [ fig : country_proportion ] shows the contribution of buildings to energy consumption in india , usa , china , korea and australia .rapid rate construction of buildings presses the need to look into improving energy efficiency in buildings with a goal of decreasing overall energy footprint .thus , towards the vision of sustainability in the age of dwindling natural resources , buildings need to be made more energy efficient . measure twice and cut once"- so goes the old adage . with this adage in mind andthe goal of understanding building energy efficiency , collecting building energy data assumes prime importance .traditionally , building energy data included * monthly * electricity bills collected * manually * by the utility companies and other * sporadic * data such as energy audits .owing to the manual and sporadic nature of this collected data , the available data is very sparse and provides limited insights into building energy efficiency .buoyed by the success of data sets such as mnist in instigating machine vision research , previous study suggests that research in building energy domain can be spurred by availability of data sets . while previously deemed improbable , collection of such data sets has become increasingly common due to low cost sensing devices enabling * high resolution * and * automated * data collection .furthermore , governments and utilities across the world have started rolling out smart meters with an aim of developing a smart grid . beyond the envisioned applications by the government and the utilities ,this data can also be used for developing an understanding into building energy .smart meters typically report electricity consumption to the utility and to the end consumer at rates ranging from a reading every second to a reading every hour .commercial buildings are also increasingly managed with building management systems ( bms ) which control different sub - systems such as lighting and hvac .bms are computer based control systems for controlling and monitoring various building systems such as hvac and lighting and are typically used to manage commercial buildings .bms sense several spatially distributed points across the building to monitor parameters required for control action .buildings are also commonly equipped with ambient sensors for monitoring parameters such as light , temperature and humidity .these ambient sensors are often coupled with security systems to raise intruder alarm or to maintain healthy ambient conditions via thermostat or bms control . with the advent of smart meters , increased usage of bms and ease of availability and installation of ambient sensors , there is a now a deluge of building energy data .while collecting building energy data is easier than before , deployments can get increasingly hard to manage , especially when large number of sensors are deployed .however , previous research has highlighted that building energy data can be obtained at different spatio - temporal granularities .this highlights the need for * optimal instrumentation * that can have different connotations for spatial and temporal domain . 
for spatial domain ,optimal instrumentation involves monitoring at a subset of locations while still being able to accurately predict at all the desired locations .for the temporal dimension , optimal instrumentation involves sampling at a lower resolution while still being able to predict at a higher temporal resolution .desired application may choose the optimal set of sensors considering cost - accuracy tradeoffs across these different granularities .traditionally , there exist multiple building subsystems such as security , networking , hvac and lighting , each performing their own operations in isolation .however , buildings are a unified ecosystem and optimal operations would require * interconnecting * these * sub - systems*. the combined information from the different sub - systems is greater than sum of individual information from each of the systems .once the systems are interconnected , data coming from diverse systems can be used to improve upon the decision making for optimal building operations . *inferred decision making * can help in identifying inefficiencies , raising alerts and suggest optimizations .traditionally data from within the system has been used for simple decision making e.g. motion sensor based lighting control .furthermore , utility companies previously relied on customers phone calls to detect power outages . however , utility companies can now leverage smart meter data from different homes to quickly detect power outages and plan accordingly .temporal patterns in the electricity consumption data can be used to detect faulty operations , predict load profiles and optimize operations accordingly .spatial patterns in the electricity consumption can identify different electrical sub - systems and study their effect on aggregate consumption .the vast set of analysis possible further necessitates the importance of data sets collected from the real world which can be to used to simulate the effect of inferred decisions before performing optimizations .buildings constitute ecosystems involving interactions between the occupants , the physical and the cyber world . the presence of occupants , each with their individual preferences makes the ecosystem even more complex .traditionally , the control decisions for building operation are taken at a central facility level assuming certain desired operating conditions without involving the occupants in decisions regarding desired conditions .such policies are bound to be energy inefficient and may cause occupant discomfort at times . moreover, previous literature suggests that energy unaware occupant behavior can add upto one - third to a building s energy performance .however , when empowered with actionable feedback , occupants may save upto 15% energy .building occupants can also provide useful data such as comfortable temperature and light intensity levels , which can be used to optimally schedule the hvac systems .thus , various optimizations in the building energy ecosystem can be enabled by * involving occupants*. all of optimal sensing from interconnected subsystems , resulting in decisions inferred from the rich dataset while involving the occupants will overall result in * intelligent operations*. such intelligent operations already exist but miss out one or more aspects discussed previously i.e. 
either they do not involve the occupants or are taken at a subsystem level without accounting for data from other subsystems in operation .thus , based on literature in building energy domain , we have identified these five crust areas also called five is which are as follows : i ) instrument optimally ; ii ) interconnect sub - systems ; iii ) inferred decision making ; iv ) involve occupants and v ) intelligent operations .figure [ fig : building_energy ] summarizes the relationship amongst the 5 is of building energy which span across broad fields such as sensor networks , behavioral sciences , data science and control science. the interconnected nature of these 5 is makes the overall optimization of building energy efficiency a complex problem .for solving such a complex problem of intelligent building operations , rich data from across the diverse systems , together with smart algorithms for data inference and participatory engagement of occupants is critical .such rich datasets are now becoming a reality .further , rich research exists in from recent times that address algorithms for data inference and understanding behavioral aspects of occupants feedback .this work bring together these different aspects into a survey while pointing out the big opportunities that exist along each of the 5 is together with those existing at the system level .having briefly discussed the various facets of data driven energy efficient buildings , we now discuss an example scenario covering all of these aspects .we base our theme around the classic paper on pervasive computing .john goes to bed at 11 pm in the night .he sets the alarm on his smartphone at 5 am when he intends to go for a jog .however , his health has not been at the best since the last few days affecting his sleep .he decides to inform his alarm clock that it should wake him up at 5 am if he is able to sleep by 11:30 , else wake him up after he has completed 6 hours of his sleep .john s smart surround sensors capture his sleep patterns and communicate to the alarm to ring at 6 am .his sleep disturbances , coughing are also captured , archived and emailed to john for sending to his doctor .john goes to the refrigerator to grab some cold water .the refrigerator understands that the person who has come to pick up water is john .john s heath system reminded his refrigerator that john has been advised to refrain from cold water .john gets ready and leaves home for office at 9 am .the security system , door and motion sensor register this movement .this information is communicated to john s thermostat which starts ramping down .his lights also turn off automatically sensing his absence .john has a meeting at 10 am and thus he would only go to his cabinet to keep his bag .based on the number of expected attendees , the thermostat in the meeting room starts ramping up , in order to achieve the desired temperature in time for the meeting .john s washing machine at his home is scheduled to run for 1 hour before john returns in the evening .his washing machine interacts with the grid and predicts the best time to run when the load on the grid is low is noon time when the electricity prices are low . 
around 3 pm, winds carry away the clouds and there is bright sunshine .john s solar system starts producing electricity .part of this dc produce is directly fed to dc appliances in john s home and the surplus is stored in a battery .john figured that he can save more money by locally consuming his generated solar energy as opposed to selling it to the grid . in the meanwhile , some of the artificial lights in john s office turn off in lieu of the sunlight available .john gets an email from the nilm system installed at his home about his disaggregated monthly consumption .the system recommends that the tungsten based lighting in his home is very inefficient and eating up 30% of his bill .if john would replace the same with more efficient led based lighting , his overall bill would go down by 15% .considering the cost of replacement , his roi period would be less than 6 months , after which he would be able to save 10 usd a month .we now analyse the above scenario and see how each small piece in this giant picture is a reality today .systems such as isleep have explored using smartphones and motion sensors for sleep quantity and quality detection . in order to wake up john after he has completed 6 hours of sleep, his smartphone detects his sleep pattern .his smartphone contains an app which detects his cough patterns and uploads the most critical data to his doctor .when john wakes up and goes to the refrigerator , systems for energy apportionment ascertain that it is john and not someone else who is trying to draw cold water .the refrigerator is connected to john s smartphone over ip and informs him that cold water could be injurious to his health .several years of research in occupancy detection using variety of sensing modalities are able to detect that john s home is unoccupied when john leaves for his office .smarter occupancy driven thermostats have been proposed in recent literature which control and save energy based on occupancy prediction and external temperature , when john leaves for his office . by interconnecting soft sensors such as office chat client and meeting software , the thermostat in john s personal cabinet does not ramp up since he has a meeting in the board room .conference room management sensor system in the boardroom is alert to the number of occupants and drives the hvac accordingly .meanwhile in john s home his washing machine turns on at 1 pm .since this is an off - peak period , electricity is available at much cheaper rates at this time in comparison to the rest of the day .john scheduled his washing machine to run for one hour when the electricity would be cheap , which was enabled by demand response strategy . 
due to availability of sunlight after 3 pm , dc appliances in john s home switch from the utility and consume raw dc power from the solar panel .the lighting control system in john s office also senses the bright sunshine and saves power while maintaining comfortable lighting levels inside the office .the smart meter installed at john s home is regularly collecting his electricity data and periodically sends john the disaggregated breakdown of power by appliances .while many of these individual systems have been explored in the past in isolation and often in research settings , their application to the real world and their interconnection to complete the picture remains largely untested .the stepping stone to making such complex scenario a reality is the high resolution multimodal data collected from each of these subsystems , interconnected together , with algorithms that can do efficient operations while accounting for john s preferences . data is the new oil " .data science has brought about a paradigm shift in the way problems are solved in myriad applications . from social networks to astronomy and subatomic physics , data has brought a new revolution enabling answering important questions .we now briefly discuss how data can play a pivotal role towards the development of energy efficient building .traditionally , building energy data was collected only for billing purposes .figure [ fig : bill ] shows the electricity bills of residential apartments in london , uk and delhi , india .usually , data for such bills is collected once a month manually by the utility companies .the bill shown in figure [ fig : delhi ] only provides the units of energy consumed during the billing period .commercial entities owing to their heavy consumption are often priced on a time of day based pricing .thus , in addition to the total units consumed , commercial entities are also provided with units consumed in peak and non - peak hours . in both of these cases ,the consumption information is coarse and provides limited actionable insights . with the advent of the smart meter andsome of the other sensors discussed in section [ sec : introduction ] , more building energy data is now available than ever before .while the conventional process followed by utilities would generate a single reading per home per month , smart meters collecting data once every 15 minutes would be collecting 3000 times more data .smart meters are often capable of collecting data at higher rates of once every minute which can amount to 240 tb of data collected from 5 million homes a month .figure [ fig : big_data ] contrasts the volume of this smart meter data collected with other applications which have greatly benefited by the data deluge .the availability of such building energy data enables one to answer several important questions , some of which are presented below : * consumers can be informed in real time about their consumption .this may be seen analogous to the cell phone alerts we get after every data transaction or call made .this may greatly help the end users to not only understand their usage but also act on it to conserve energy .* utilities earlier used to rely on receiving phone calls from aggrieved customers for detecting outages . 
however , with smart meter data readily available , utilities can leverage it for speedier outage detection and allocate resources efficiently to reduce downtime .* figure [ fig : bill_tod ] shows the time of day energy consumption of a building in iiit delhi campus .when such detailed information is made available to the end users , they can take initiatives towards shifting their loads to normal or off - peak hours , when electricity is cheaper .such measures also benefit the utility as the peak demand is reduced . *figure [ fig : historical_bill ] shows the trend in energy consumption from a home in new delhi .this decreasing energy usage can be attributed to decreasing temperatures and hence decreasing use of air conditioners .developing an understanding into the correlations between weather and energy usage can allow better hvac control .such information is present in the uk bill ( figure [ fig : london ] ) .however , without detailed analysis the same does not translate into actionable savings . towards the realization of many of the above mentioned questions , building energy data requirements need to be categorized as per the application scenario .we firstly discuss the different temporal and spatial resolutions at which such data can be collected .* temporal resolution : * in accordance with the intended application , building energy data is collected at different temporal resolutions .energy audits are performed once every few years and provide key insights into several aspects of buildings including energy efficiency and comfort levels .such audits are costly and require significant instrumentation . at a lower resolution of once a monthutility companies collect total energy consumption .as discussed previously this provides limited insights to the end user .commercial entities often monitor their power factor daily as they are liable to be penalized if the power factor drops below a certain threshold .most of the above considered data collection is largely manual and sporadic .monitoring at higher resolutions of 1 hour or lesser requires automated mechanisms such as installation of smart meters . with the national smart meter roll outs across many countries as discussed in section [ sec : introduction ] and initiatives such as greenbutton , collecting and accessing electricity datahas become much easier for the end consumers .certain applications such as non - intrusive load monitoring ( nilm ) may require high frequency data with sampling rates of more than a thousand samples every second .we summarize the temporal variations in building energy data collection in table [ tab : temporal ] ..temporal variations in building energy data collection [ cols="^,^",options="header " , ] we specifically discuss in detail about the well studied problem of non - intrusive load monitoring ( nilm ) and discuss how it spans across the 5 is . non - intrusive load monitoring ( nilm ) or energy disaggregation is the process of breaking down the energy measurement observed at a single point of sensing into constituent loads .figure [ fig : nilm ] shows disaggregated consumption across a day for different appliances from the iawe data set . in 2006 , darby et al . suggested that providing detailed electricity information feedback to end users can lead to 5 - 15% electricity usage reduction by behavioral change .recently , chakravarty et al . 
performed a study across more than 300 users in california and observed mean reduction of approx .15% when disaggregated information and real time electricity information is provided to end users . with the advent of smart metering infrastructure ,as discussed in section [ sec : introduction ] , nilm has had a lot of renewed interest .a number of startups which provide itemized electrical usage to households over cloud based services have emerged recently .the recent interest has also led to the release of data sets meant to aid nilm research . of these redd ,blued , ampds , uk - dale , eco , sust - data , greend , combed , berds and pecan have been specifically released for nilm like applications .apart from these data sets , other data sets such as tracebase , he s , smart * and iawe can be used for several other building applications , including nilm .we now discuss the nilm problem across the five is .* instrument optimally : * nilm involves breaking down the aggregate meter data measured at a single point into constituent appliances or sub - meters lower in the electrical tree .thus , instrumentation for nilm involves measuring electrical parameters at both the aggregate and the sub - metered level ( desired for ground truth ) .the aggregate power readings are typically measured using smart meters . in special cases , when high frequency data ( more than 1 khz ) is required , sophisticated data acquisition systems ( daqs )sub - metered power data is typically measured either at the circuit level using current transformer ( ct ) based sensors or at appliance level using appliance sensors .* interconnect sub - systems : * while in the classic nilm problem , interconnection across different data streams is not studied , some studies use the extra information from other modalities to improve disaggregation .previous work indicate the correlation between energy usage and external temperature .berges et al . correlate occupancy sensors with electrical data to identify potential savings in unoccupied rooms .similarly , other ambient sensors can also be interconnected with electrical sensors to obtain additional insights into the disaggregation problem .* inferred decision making : * the vanilla use case of the nilm implementation provides an itemized breakdown of the electrical load .this inference problem can be viewed as an inverse classification problem .previous work has related this problem to source separation which is a well studied problem in sound processing .hart et al . 
, in their seminal work on nilm , proposed a simplistic combinatorial based and a simple edge detection based nilm algorithm .both these algorithms form the foundation behind many of the state - of - the - art algorithms .markovian analogues of combinatorial optimization formulation led to factorial hidden markov among other hidden markov models .several nilm approaches have been proposed in the recent past and a rich overview has been captured in several recent work as well .it must be pointed that most of the prior research includes * supervised * methods in a * centralized * * offline * setup .* involve occupants : * as discussed earlier in this section , providing itemized feedback to end users has shown to reduce their end consumption .thus , the occupants can be involved by not only providing them with itemized billing , but also providing actionable suggestions on top .another interesting occupant involvement may arise from devising novel techniques for ground truth collection , which eliminates the need for appliance metering .* intelligent operations : * real time control actions to save electricity based on disaggregated information require complicated interactions with the control systems .current literature is thin on the aspect of automated control beyond the usual feedback that nilm systems provide . in this sectionwe briefly mention some of the challenges and opportunities in nilm research . * * need for extensive deployment : * for certifying an algorithm on a previously unseen home , appliance level data must be collected from that home .* * computationally expensive approaches : * many approaches are computationally expensive and thus model only the high energy consuming appliances . owing to this , detailed information about low energy consuming appliances is often not made available .the computationally intractable approaches also limit the application in the real world setting . ** supervised methods require sub - metered data : * supervised nilm approaches train on the sub - metered data and create a model for each appliance .this requires ground truth instrumentation , motivating the need for development of novel unsupervised learning mechanisms .optimizing building energy usage remains an area of concern in light of dwindling natural resources . due to this concern , efforts are being concentrated to develop an understanding into buildings .these efforts have led to a deluge of data coming from a variety of sensors .this availability of data is changing the way we consume our electricity information . in this paperwe highlighted some of the applications enabled by this data availability , such as early power outage detection , peak load reduction , electricity consumption reduction .based on our literature survey of research in building energy domain , we identified five crust areas enabling data centric energy efficient buildings : i ) instrument optimally ii ) interconnect sub - systems iii ) inferred decisions iv ) involve occupants and v)intelligent operations . across each of these five areaswe present the state - of - the - art , the core challenges and the opportunities in the field .finally , we categorize different building energy applications as per these five is and discuss non - intrusive load monitoring , a well studied problem in building energy domain , in greater detail .
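as a concrete illustration of the combinatorial formulation of nilm discussed above, the following python sketch is a toy example (the appliance ratings are made-up and the brute-force search is not any of the published algorithms cited in this survey) that picks the on/off combination of a few appliances whose summed rated power best explains a single aggregate reading.

from itertools import product

# illustrative appliance ratings in watts ( assumed values )
APPLIANCES = {"fridge": 120, "hvac": 1500, "washing machine": 500, "lighting": 60}

def disaggregate(aggregate_watts):
    # brute-force combinatorial nilm : try every on/off assignment and keep the
    # one whose total rated power is closest to the observed aggregate reading
    names = list(APPLIANCES)
    best_state, best_error = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        total = sum(on * APPLIANCES[name] for on, name in zip(states, names))
        error = abs(aggregate_watts - total)
        if error < best_error:
            best_state, best_error = dict(zip(names, states)), error
    return best_state, best_error

if __name__ == "__main__":
    state, residual = disaggregate(1620)  # e.g. fridge and hvac running together
    print(state, "residual:", residual)

the exhaustive search grows exponentially with the number of appliances and ignores the time dimension entirely, which is one motivation for the hidden markov model formulations mentioned above.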
buildings across the world contribute significantly to the overall energy consumption and are thus stakeholders in grid operations . towards the development of a smart grid , utilities and governments across the world are encouraging smart meter deployments . high resolution ( often at every 15 minutes ) data from these smart meters can be used to understand and optimize energy consumption in buildings . in addition to smart meters , buildings are also increasingly managed with building management systems ( bms ) which control different sub - systems such as lighting and heating , ventilation , and air conditioning ( hvac ) . with the advent of these smart meters , increased usage of bms and easy availability and widespread installation of ambient sensors , there is a deluge of building energy data . this data has been leveraged for a variety of applications such as demand response , appliance fault detection and optimizing hvac schedules . beyond the traditional use of such data sets , they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency . effective use of this data entails several critical areas from sensing to decision making and participatory involvement of occupants . picking from the wide literature in building energy efficiency , we identify five crust areas ( also referred to as 5 is ) for realizing data driven energy efficiency in buildings : i ) instrument optimally ; ii ) interconnect sub - systems ; iii ) inferred decision making ; iv ) involve occupants and v ) intelligent operations . we classify prior work as per these 5 is and discuss challenges , opportunities and applications across them . building upon these 5 is we discuss a well studied problem in building energy efficiency - non - intrusive load monitoring ( nilm ) and how research in this area spans across the 5 is .
density - functional theory ( dft ) constitutes one of the most popular methods in quantum chemistry .the foundations of dft rest in particular on three contributions : first , the hohenberg kohn ( hk ) theorems established a one - to - one mapping between a set of scalar potentials and a set of ground - state densities as well as a variation principle based on the density . here , the density is the charge density ( strictly , the negative of the charge density in units of the elementary electron charge ) .second , the levy lieb constrained - search expression provided a formal but explicit expression for the intrinsic energy ( the universal density functional ) and clarified significant fundamental points .third , lieb further generalized the universal functional to a convex functional represented in terms of a legendre fenchel transform . from a mathematical point of view, lieb s formulation is particularly attractive as it allows application of convex analysis to establish several properties of the intrinsic energy functional .additionally , lieb s framework has made feasible practical calculations of approximations to the exact intrinsic energy functional and adiabatic connection curves , enabling detailed comparisons of the properties of approximate and near - exact density functionals to be made .standard dft , involving universal energy functionals of only the charge density , is limited to the treatment of physical systems that may be represented as eigenstates of hamiltonians that differ only in their scalar potentials . to treat systems subject to an external magnetic field, it is necessary to introduce an additional dependence on the magnetic field or its associated vector potential into the hamiltonian .consequently , a dependence on a corresponding variable apart from charge density is needed in the universal energy functional . in magnetic - fielddensity functional theory ( b - dft ) this is resolved by constructing a family of density functionals one for each external magnetic field . in the present work, we consider the alternative current density - functional theory ( cdft ) , where the additional variable is either the paramagnetic current density or the physical current density .we restrict our attention to non - relativistic formulations and most of the discussion will for simplicity not be concerned with densities or density - contributions arising from spin - degrees of freedom .we term the variables on which the energy functionals explicitly depend the _ basic variables _ and make a distinction between _ basic densities _ and _ basic potentials_. many choices of basic densities are conceivable ; we require only that the choices result in useful density - functional theories .our perspective thus differs from that in recent works on cdft by pan and sahni , who restrict the term _ basic variable _ to variables that admit an hk theorem .although it appears naturally in the generic framework introduced by ayers and fuentealba , the possibility of choosing basic potentials other than the standard electromagnetic potentials and fields has not previously been explored in detail . by farthe most developed form of cdft is that due to vignale and rasolt , who use the charge and paramagnetic current densities as basic variables . 
for these variables , a kohn sham approach has been formulated with an associated adiabatic - connection , virial and scaling relations analogous to standard kohn sham dft .in addition , optimized - effective - potential ( oep ) approaches based on this formulation of cdft have been presented to treat non - collinear magnetism and extensions to time - dependent cdft have been considered . however , in cdft based on the charge and paramagnetic current densities as basic variables , no hk - type theorem exists and the consequences of this have been extensively discussed in the literature . in the present work, we examine this question for cdft in some detail , demonstrating how convex analysis of the underlying universal density functional can be a significant aid in clarifying the relationship between basic variables of cdft and the potentials .a cdft featuring the gauge - invariant physical current density ( rather than the paramagnetic current density ) as a basic variable is appealing from a physical perspective and is therefore also considered here .specifically , we examine the formulations due to diener and pan and sahni .we begin in sec .[ sec : prelim ] by introducing notation related to sets of basic potentials , basic densities , and mappings between them . in sec .[ sec : cdft ] , we consider cdfts that use the charge and paramagnetic current densities as basic variables in particular , sec .[ sec : restoreconc ] establishes the concavity of a universal density functional based on these variables and sec .[ subsec : numf ] outlines the opportunities that this formulation affords for numerical studies of this functional .next , in sec .[ secphyscdft ] , the use of the charge and physical current densities as basic variables is considered and two previous formulations are examined . our concluding remarks are presented in sec .[ sec : conc ] .before discussing cdft , we briefly review standard dft , with emphasis on lieb s treatment based on convex conjugation . the concepts and techniques of convex analysis introduced hereare well suited to the study of dft and will later be used in our discussion of cdft .some background is also given in the appendix .we consider a system of electrons with an electronic hamiltonian of the form ( in atomic units ) = \frac{1}{2 } \sum_k p_k^2 + \sum_k v(\mathbf{r}_k ) + w , \label{hamv}\ ] ] where is the canonical momentum operator of electron , is the external potential at position , and is the two - electron coulomb repulsion operator .the state of the system is described by a density matrix , which is a convex combination of normalized -electron pure - state density matrices where the wave functions are antisymmetric in the space and spin coordinates of the electrons . the electron density associated with such density matricesis given by where the volume element is , i.e. , the integration is over all spin and spatial coordinates except .the ground - state energy is obtained from the rayleigh ritz variation principle , = \inf_{\gamma } { \mathrm{tr } ( { \gamma h[v ] } ) } , \ ] ] where the minimization is over all -electron density matrices .an infimum rather than a minimum is taken in eq . 
since may or may not support an -electron ground state .the set of potentials that support one or more -electron ground states ( and for which therefore the infimum is attained ) is denoted by ; the potentials in are sometimes said to be -representable .conversely , a density that is an ensemble ground - state density for some potential is said to be ( ensemble ) -representable ; the set of -representable densities is denoted by . for convenience , we shall also refer to -representable potentials and -representable densities as ground - state potentials and densities , respectively . in the constrained - search formalism of dft, we write the rayleigh ritz variation principle as an hk variation principle , = \inf_{\rho \in \mathcal{i}_n } \left ( f[\rho ] + ( \rho | v ) \right ) , \label{eqhk}\ ] ] where is the set of -representable densities that is , the set of the nonnegative densities with and with a finite von weizscker kinetic energy. the lieb constrained - search functional is given by = \inf_{\gamma\mapsto \rho } { \mathrm{tr}({\gamma h[0 ] } ) } , \label{fll}\ ] ] where the notation indicates that the minimization is restricted to density matrices that reproduce the density .if is not -representable , no such exists , and =+\infty ] is upper semi - continuous and concave in and therefore may be represented by its conjugate function : lieb s universal density functional ] and ] ( ] happens to be non - concave , it can not be represented by an expression like that in eq ., even by allowing to be non - convex .therefore , no universal functional with a linear potential pairing can exist for non - concave energies ( such as those of excited states of the same symmetry as the ground state ) .a cdft with the paramagnetic current as a basic density was considered in the seminal work of vignale and rasolt . in their formulation of cdft ,the basic potentials are the standard electromagnetic potentials and the basic densities are the charge density and paramagnetic current density .we shall here first review their theory and then discuss an alternative formalism , based on a redefinition of the basic scalar potential .we consider electrons subject to time - independent external electromagnetic fields and , represented by the scalar potential and the vector potential , respectively . for potentials , we introduce the equivalence relation which defines equivalence classes of potentials that differ only by a static gauge transformation , thereby representing the same external fields .we note that a general gauge transformation of and is given by and , for some arbitrary gauge function .if is to remain static after the transformation , we must require that , where is constant .it follows that a general time - independent gauge transformation is given by and , where the constant and the function are independent .therefore , the equivalence relation in eq . holds if and only if there exists a constant and a sufficiently well - behaved gauge function such that and . in the presence of a vector potential , the electronic hamiltonian in eq .is modified by replacing the canonical momentum operator by the mechanical ( kinetic ) momentum operator , yielding = \frac{1}{2 } \sum_k \pi_k^2 + \sum_k v(\mathbf{r}_k ) + w. \label{ham}\ ] ] we have here omitted the spin - dependent term , , from the hamiltonian . 
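for later reference, it is useful to write out the expansion of the mechanical kinetic energy that underlies the pairing between densities and potentials used below; the display is a sketch in the notation of this section, taking the mechanical momentum of electron $k$ to be $\boldsymbol{\pi}_k = \mathbf{p}_k + \mathbf{a}(\mathbf{r}_k)$, and it anticipates the reparameterized scalar potential introduced later:

\begin{aligned}
h[v,\mathbf{a}] &= \tfrac{1}{2}\sum_k \boldsymbol{\pi}_k^2 + \sum_k v(\mathbf{r}_k) + w \\
&= \tfrac{1}{2}\sum_k \mathbf{p}_k^2 + \tfrac{1}{2}\sum_k \{\mathbf{p}_k , \mathbf{a}(\mathbf{r}_k)\} + \sum_k \bigl[ v(\mathbf{r}_k) + \tfrac{1}{2} a(\mathbf{r}_k)^2 \bigr] + w .
\end{aligned}

taking the expectation value in a state with charge density $\rho$ and paramagnetic current density $\mathbf{j}_{\text{p}}$, the potential-dependent part of the energy becomes $(\mathbf{j}_{\text{p}}|\mathbf{a}) + (\rho|v + \tfrac{1}{2}a^2)$, which is why the combination $u = v + \tfrac{1}{2}a^2$ emerges as the natural scalar variable in the following sections.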
by analogy with eq ., the rayleigh ritz variation principle in the presence of a vector potential is given by = \inf_{\gamma } { \mathrm{tr } ( { \gamma h[v,\mathbf{a } ] } ) } , \ ] ] where the minimization is over density matrices containing electrons , see eq . .an infimum rather than a minimum is taken to ensure that the energy is well defined also when does not support a ground state .we denote the set of all potentials that support a ground state with this hamiltonian by \text{\ has a g.s.}\}\ ] ] and also introduce the related set \text{\ has a g.s.}\}\ ] ] in preparation of a reparameterization of the scalar potential that will be introduced later . in the presence of a vector potential , the ensemble ground - state charge densities are as before given by eq . . regarding the induced currents , we distinguish between the paramagnetic current density and the physical current densitythe former is defined as the paramagnetic current density is gauge - dependent and unobservable .the physical current density is given by and satisfies the relation .unlike the paramagnetic current , the physical current is gauge - invariant .finally , the sets of paramagnetic and physical -representable ground - state densities are denoted by \ } , \\\mathcal{b}_n & = \{(\rho,\mathbf{j } ) | ( \rho,\mathbf{j } ) \text{\ is g.s .den.\ of some\ } h[v,\mathbf{a}]\},\end{aligned}\ ] ] where both mixed and pure states are allowed . the hk theorem of standard dft states that the ground - state density determines the scalar potential up to a constant shift .hence , two potentials that differ by more than a constant shift can not give rise to the same ground - state density .this fact establishes a mapping from ground - state densities to potentials .vignale and rasolt established that two different potentials with different ground - state wave functions can not give rise to the same paramagnetic ground - state density . however, this does not establish an analogue of the hk theorem for cdft since different potentials can map to the same density in via the same wave function , = \psi[v_2,\mathbf{a}_2] ] and let be a ground state of ] and is also a ground state of ] in eq .is its concavity in , which established duality with the universal density functional ] , the energy functional ] is apparent in , for example , any diamagnetic ground state at vanishing external magnetic field .such a ground state has a negative definite magnetizability tensor and , when restricted to weak uniform magnetic fields , the energy is a convex function in and therefore in . in more detail , consider a one - electron system confined to the ( two - dimensional ) -plane , subject both to a uniform magnetic field along the -axis and to a harmonic - oscillator potential .parameterizing the scalar and vector potentials under consideration as we obtain the following hamiltonian & = \frac{1}{2 } p^2 + \frac{1}{2 } b l_z + v_{\text{ho}}(\mathbf{r};k ) + \frac{1}{2 } a_{\perp}(\mathbf{r};b)^2 \\ & = \frac{1}{2 } p^2 + \frac{1}{2 } b l_z + \frac{1}{8 } \left(4k + b^2\right ) ( x^2+y^2 ) , \end{split}\ ] ] where is a good quantum number . for and some finite interval ,the ground state has and application of a magnetic field has exactly the same effect as the introduction of a harmonic - oscillator potential .for these potentials , the ground - state energy is = e[v_{\text{ho } } + \tfrac{1}{2 } a_{\perp}^2 , \mathbf{0 } ] = \sqrt{k + \tfrac{1}{4 } b^2}. 
\label{eq : example - energy}\ ] ] note that the right - hand side is concave in and convex in .hence , on the restricted set of potentials spanned by and , the functional ] can not be represented by a conjugate functional in the manner of eq . .however , this does not preclude a constrained - search formulation of cdft , as discussed in the next subsection . rewriting the rayleigh ritz variation principle in eq . by analogy with the constrained - search approach of standard dft in eq ., we obtain an hk - type variation principle for a system in the presence of a scalar and vector potential , = \inf_{\rho,\mathbf{j}_{\text{p } } } \left [ f_{\text{vr}}[\rho,\mathbf{j}_{\text{p } } ] + ( \rho|v+\tfrac{1}{2 } a^2 ) + ( \mathbf{j}_{\text{p}}|\mathbf{a } ) \right],\ ] ] where the _ vignale rasolt constrained - search functional _ is given by = \inf_{\gamma\mapsto(\rho,\mathbf{j}_{\text{p } } ) } { \mathrm{tr}({\gamma ( \tfrac{1}{2 } p^2 + w ) } ) } \label{fvr}\ ] ] and we have introduced the following notation for the pairing between a current density and a vector potential : like the lieb constrained - search functional given in eq . , the vignale rasoltconstrained - search functional in eq . is universal in the sense that it does not depend on the potential , only on the density .another important characterization of the lieb functional is its convexity . to examine the convexity of ,let and be arbitrary ( lebesque integrable ) functions and let , .we then obtain \\ & \leq \inf_{\substack{\gamma_1\mapsto(\rho_1,\mathbf{j}_{\text{p}1})\\\gamma_2\mapsto(\rho_2,\mathbf{j}_{\text{p}2 } ) } } { \mathrm{tr}({(\lambda\gamma_1 + \mu\gamma_2 ) ( \tfrac{1}{2 } p^2 + w ) } ) } \\ & = \lambda f_{\text{vr}}[\rho_1 , \mathbf{j}_{\text{p}1 } ] + \mu f_{\text{vr}}[\rho_2,\mathbf{j}_{\text{p}2 } ] , \end{split}\ ] ] demonstrating that is convex in .the key point in establishing the inequality above is to restrict the infimum over all density matrices to an infimum over all matrices of the form where and , thereby overestimating the infimum . given that the vignale rasolt functional is convex , it is uniquely represented by a convex conjugate functional . for the lieb functional ] .however , since ] .in the following , we identify the energy conjugate to ] is now by construction concave , allowing it to be generated from the convex intrinsic energy ] and ] .hence , to within a minus sign , the subdifferential at is the collection of all external potentials that have the same ground - state density . ] .[ fig : subdiff ] ] analogously , we consider the concave energy functional ] .the _ superdifferential _ ] of ground state densities for potentials with degeneracy .this is a convex set , a three - dimensional simplex , with vertices at , i.e. , = \operatorname{co}\{(\rho_0^i,\mathbf{j}_{\text{p},0}^i)\}. ] and commute , , k \right ] = 0 , \label{huk}\ ] ] then ] , see capelle and vignale .we note that the commutation condition in eq .is sufficient but not necessary for the existence of degenerate maximizing potentials. there may be several , independent perturbing potentials that commute with the reference hamiltonian ] and ] are the constant potentials .it follows that the subdifferential of at is the convex set = - \ { v_0(\mathbf{r } ) + c \,|\ , c \in \mathbb r \},\ ] ] which may be regarded as a one - dimensional simplex with vertices . 
in dft, therefore , the ground - state density determines the external potential uniquely up to an additive constant , in accordance with the hk theorem .returning to cdft , consider next two potentials and with the same ground - state density . by the convexity of the subgradient of ,all convex combinations then have the same ground - state density . recalling that , , and , the characterization of the non - uniqueness given in eq .can be expressed in terms of the ordinary scalar potential .if and give rise to the same density , then so do all potentials of the form with .however , this set is not a convex set and not a subdifferential , due to the use of the rather than variables .an advantage of the formulation of stationary conditions in cdft in terms of sub- and superdifferentials is that differentiability is not required . in general, a sufficient condition for differentiability of a function at a point is that the function is continuous at this point and has a single sub- or supergradient there ; in the absence of continuity , differentiability is not guaranteed .the ground - state energy is differentiable at all potentials that have a nondegenerate ground - state density , whereas the vignale rasolt functional is in principle nowhere differentiable since we may always add a constant term to the potential without affecting the ground - state densities .however , assuming that this is the only cause of nondifferentiability of the potentials , we may in the absence of other degeneracies write }{\delta \rho(\mathbf{r } ) } = -u(\mathbf{r } ) - c , \quad \frac{\delta f_{\text{vr}}[\rho,\mathbf{j}_{\text{p}}]}{\delta \mathbf{j}_{\text{p}}(\mathbf{r } ) } = -\mathbf{a}(\mathbf{r } ) , \label{statf}\ ] ] and }{\delta u(\mathbf{r } ) } = \rho(\mathbf{r } ) , \quad \frac{\delta \bar{e}[u,\mathbf{a}]}{\delta \mathbf{a}(\mathbf{r } ) } = \mathbf{j}_{\text{p}}(\mathbf{r } ) , \label{state}\ ] ] where is the ( non - degenerate ) ground - state density of the potential . besides allowing formal application of theorems in convex analysis to cdft , the convex formulation given above has practical value .after linear programming and optimization of quadratic functions , optimization of convex and concave functions is the mathematically most well - characterized type of optimization problem .the fact that convex and concave optimization problems have a unique global optimum ( either in the form a single point or a convex set of optimal points ) and no additional local optima is of great value when devising practical optimization methods . in standard dft, lieb s formulation of ] has proven useful in the study of functionals of interest in kohn sham theory .in particular , the modulation of the two - electron interaction operator by a parameter such that , allows us to represent the ground - state energy ] .the standard choice is , but other choices are possible .if the density supplied to ] or an accurate approximation to it , the adiabatic connection may be studied in terms the corresponding vignale rasolt functional = \sup_{u,\mathbf{a } } \left [ \bar{e}_{\lambda}[u,\mathbf{a } ] - ( \rho | u ) - ( \mathbf{j}_{\text{p } } | \mathbf{a } ) \right ] , \label{acf}\ ] ] which needs to be evaluated for a fixed density and different values of in the interval .typically , the density is the ground - state density for some external potential at .the optimization in eq .is trivial for since the optimal potential is then ; for , the optimization is non - trivial . 
in particular , for , the optimal potential is the kohn sham potential given by where the classical coulomb or hartree potential and the exchange correlation potential are the functional derivatives of the corresponding energy components as in standard kohn sham dft .the exchange correlation contribution to the vector potential is defined as }{\delta \mathbf{j}_{\text{p}}},\ ] ] where differentiability in relevant directions is assumed .these scalar and vector potentials then enter the cdft kohn sham equations , which may be re - written in terms of as \varphi_p = \varepsilon_p \varphi_p.\ ] ] if the spin dependent term is included in the hamiltonian of eq . with the modified interactions of eq ., then similar arguments apply .two - component spinors rather than one - particle orbitals then occur in the kohn sham equations , allowing for a treatment of non - collinear magnetism . even in the absence of external magnetic fields , violations of non - interacting -representability that is , the existence of a ground - state density of the fully interacting hamiltonian ]have been shown to be common in two - electron systems . in general , an extended kohn sham formalism , allowing for an ensemble description and fractional occupation numbers , is therefore required in cdft as well as in standard dft . to facilitate the optimization of ] is to parameterize rather than in the affine form the use of rather than eliminates the term and the associated quadratic dependence on , thereby simplifying the equations obtained upon substitution in eq . .importantly , it also ensures that all stationary points are true global maxima . to perform optimizations similar to those in refs . , all that remains is to be able to calculate the ground - state energy ] yields for the expectation value on the right - hand side { | { \psi ' } \rangle } & = e ' + { \langle { \psi ' } | } \tfrac{1}{2 } \pi^2 + v - \tfrac{1}{2 } \pi'^2 - v ' { | { \psi ' } \rangle } \\ & = e ' + { \langle { \psi ' } | } \tfrac{1}{2 } \{\boldsymbol{\pi},\mathbf{a}\ } + v - \tfrac{1}{2 } a^2 - \tfrac{1}{2 } \{\boldsymbol{\pi}',\mathbf{a}'\ } - v ' + \tfrac{1}{2 } a'^2 { | { \psi ' } \rangle}. \label{hexp } \end{split}\ ] ] it is here important to distinguish between and since the representation of the mechanical momentum operator is gauge and vector - potential dependent . to proceed ,we explicitly write out the corresponding physical current density operators , which for the two potentials are given by from these , we may calculate the physical current ( assumed to be the same in the two cases ) and its interaction with some vector potential as where it is important to use primed or unprimed quantities consistently . to ensure that we are using the correct current - density operator for in eq ., we insert the identity yielding , where we have introduced and and also used the inequality in eq . .carrying out the above argument with primed and unprimed variables interchanged , we obtain the strict inequality where the last term does not vanish since is symmetric in the primed and unprimed variables . in agreement with the hk theorem, a contradiction arises if ( and only if ) .when and differ where ( as a result of different physical fields or by a gauge transformation ) , no contradiction arises . in the argument given in ref . , the authors incorrectly identify and , leading to their eq .( 38 ) , which differs from eq . 
above by the replacement of with .since their term is antisymmetric rather than symmetric in the primed and unprimed variables , the authors obtain a contradiction irrespective of and , leading to the unjustified conclusion that ( determines . an earlier attempt to prove an hk - type theorem for physical currents by diener invokes an intriguing strategy for eliminating the term from eq . .the key idea is to replace the external vector potential by an effective vector potential ] . for a prescribed physical current density , the expectation value of an arbitrary wave function is then { | { \psi } \rangle } & = { \langle { \psi } | } \tfrac{1}{2 } ( p^2 - a_{\text{eff}}^2 ) + w { | { \psi } \rangle } \nonumber \\ & + ( \mathbf{j}_0|\mathbf{a } ) + ( \rho|v ) + \tfrac{1}{2}(\rho | ( \mathbf{a}_{\text{eff}}-\mathbf{a})^2 ) .\label{eqhwitheffa}\end{aligned}\ ] ] the total current evaluated using the effective momentum operator instead of the true mechanical momentum operator always evaluates to the prescribed current .diener then defines a universal density functional of the form & = \inf_{\psi\mapsto\rho } { \langle { \psi } | } h_{\text{eff}}[\mathbf{j},\psi ] { | { \psi } \rangle } , \\h_{\text{eff}}[\mathbf{j},\psi ] & = \tfrac{1}{2 } p^2 - \tfrac{1}{2 } a_{\text{eff}}[\mathbf{j},\psi]^2 + w. \label{dienerh}\end{aligned}\ ] ] by exploiting this functional , diener derives the inequality { | { \psi_0 } \rangle } \leq { \langle { \psi'_0 } | } h[v,\mathbf{a } ] { | { \psi'_0 } \rangle } - \frac{1}{2 } ( \rho_0 | \delta a^2),\ ] ] in lieu of the usual strict rayleigh ritz inequality underlying the hk proof as given in eq . .when is the ground state of potentials that differ by more than a gauge , , the above non - strict inequality is ( without further ado ) taken to be strict in diener s presentation .if a strict inequality is accepted , a standard reductio ad absurdum proof is possible because the last term of eq. cancels to yield the contradiction .we now consider two technical problems not addressed by diener .the first is that the expectation value being minimized in is not bounded from below for densities that vanish at some point .although unusual , such ground - state densities can arise for small molecules in strong magnetic fields .consider a wave function giving rise to a density that vanishes at some point in space .let us also introduce spherical coordinates about .then , for the wave functions with integer , we see that and give rise to the same density but to different paramagnetic current densities related by where is the unit vector in the direction specified by . as a result , = \mathbf{a}_{\text{eff}}[\mathbf{j } , \phi_0 ] - \frac{m \hat{\boldsymbol{\phi}}}{r \sin(\theta ) } = \mathbf{a}_0 - \frac{m \hat{\boldsymbol{\phi}}}{r \sin(\theta)}.\ ] ] calculating the expectation value of the effective hamiltonian in eq . , some terms arising from cancel terms arising from , leaving for physical currents with a nonzero last integral , for example a circular current , the expectation value can be decreased without bound , demonstrating that ] and ] of ] . however , it is unclear whether this is always possible . in lieu of a rigorous proof for an hk - type theorem for physical currents ,it is possible to proceed by conjecturing such a result and exploring its consequences . 
additionally , an important point is that a mapping from ground state densities to potentials may not be required for a formulation of cdft , provided that the theory can be constructed by some other means such as a constrained search or legendre fenchel transformation formalism .we shall here explore such issues in the framework introduced by pan and sahni in more detail . a complication due to the choice of the physical current asa basic variable is that constraints of the type `` '' require explicit reference to a vector potential , because the physical current is not determined by the wave function alone .care must therefore be exercised when developing a constrained - search formalism for physical currents . for each magnetic field under consideration, we fix a gauge .hence , we choose a mapping (\mathbf{r})\ ] ] from magnetic fields to magnetic vector potentials , and also fix the constant shift of scalar potentials in some way . from the conjecturethat a ground state density uniquely determines a gauge class of potentials , we may now write mappings )\ ] ] hence , we may write the scalar potential , the external magnetic field and its vector potential as functionals ] , and ] is not concave , making a full legendre fenchel transform treatment infeasible .however , as shown here , concavity can be restored by considering the conjugate variables and , where .this allows the application of convex analysis in analogy with lieb s formulation of standard dft .such a formulation is particularly natural in the context of the study of adiabatic connections for cdft functional construction , which can be done in a manner similar to that undertaken for standard dft .the information garnered from such analysis would be suitable for comparison with kohn sham implementations of cdft based on functionals of .alternative cdft formulations based on the physical densities have also been critically examined .pan and sahni s recent attempt to formulate such a theory is found to be unsatisfactory .both their attempt to prove the existence of a one - to - one mapping between potentials and densities and their constrained - search formulation were found to be flawed .an earlier attempt to formulate a cdft in terms of the physical current by diener has also been examined , and technical difficulties with this approach have been highlighted . 
despite the appealing physical motivation behind this choice of basic variables , we thus find that a formal justification for such a framework is currently lacking .furthermore , while it remains open whether or not an analogue of the hohenberg kohn theorem holds for the physical current , other aspects of standard dft such as the variation principle , the constrained - search formalism , and formulations in terms of legendre fenchel transformations do not straightforwardly carry over to this type of cdft .we conclude that the most common formulation in terms of is presently the most convenient and viable formulation of cdft .let be a normed vector space and its dual that is , the the linear space of all continuous linear functionals on .a function is said to be convex if it satisfies the relation for all .the effective domain , , is the set of for which .a function is lower semi - continuous at if , for any , there exists such that whenever ; is lower semi - continuous if it is lower semi - continuous at all .a lower semi - continuous convex function may be represented by its conjugate in the manner , \\f(x ) & = \sup_{y \in x^\ast } [ ( x | y ) - f^\ast(y)],\end{aligned}\ ] ] where is also lower semi - continuous and convex .the dual function is said to be a subgradient of at if it satisfies the inequality the set of all subgradients of at is called the subdifferential and is a ( possibly empty ) convex subset of .subgradients and subdifferentials of are defined in an analogous manner .the function and its conjugate satisfy fenchel s inequality , which is sharpened into the equality whenever the equivalent reciprocal relations are satisfied .a function is said to be concave if is convex .its conjugate is defined in the same manner as for convex functions but with replaced by ; likewise , subgradients and subdifferentials are defined as for a convex function but with the inequality sign reversed in eq . .this work was supported by the norwegian research council through the coe centre for theoretical and computational chemistry ( ctcc ) grant no .179568/v30 and the grant no .171185/v30 and through the european research council under the european union seventh framework program through the advanced grant abacus , erc grant agreement no . 267683 .a. m. t. 
is also grateful for support from the royal society university research fellowship scheme .
the selection of basic variables in current - density functional theory and formal properties of the resulting formulations are critically examined . focus is placed on the extent to which the hohenberg kohn theorem , constrained - search approach and lieb s formulation ( in terms of convex and concave conjugation ) of standard density - functional theory can be generalized to provide foundations for current - density functional theory . for the well - known case with the gauge - dependent paramagnetic current density as a basic variable , we find that the resulting total energy functional is not concave . it is shown that a simple redefinition of the scalar potential restores concavity and enables the application of convex analysis and convex / concave conjugation . as a result , the solution sets arising in potential - optimization problems can be given a simple characterization . we also review attempts to establish theories with the physical current density as a basic variable . despite the appealing physical motivation behind this choice of basic variables , we find that the mathematical foundations of the theories proposed to date are unsatisfactory . moreover , the analogy to standard density - functional theory is substantially weaker as neither the constrained - search approach nor the convex analysis framework carry over to a theory making use of the physical current density .
my interest in the history of the aharonov - bohm effect [ ab ] started when i joined birkbeck college in 1961 where two of key players in the discovery held professorships .werner ehrenberg held the chair of experimental physics and was head of the department , while bohm had just been appointed to a chair in theoretical physics .my first venture into this effect resulted in some personal embarrassment .i had decided to write a paper on the effect mainly to clarify my own understanding of the phenomenon .when ehrenberg saw a copy of the paper he confronted me with a comment spoken with a strong german accent , ach hiley , zis ab effect that you are discussing , is it the one that siday and i discovered ?" poor dear , gentle werner , sidelined in history by a young new member of his own staff !it was my original intention to describe the way in which ehrenberg and siday discovered the effect in the 1940s .i had at hand in the department a number of people who were working with ehrenberg and siday when they made their discovery , so i am able to obtain first hand accounts of how the story unfolded .my only regret was that ray siday had passed on , but the first hand recollections of the man were still very much alive in the department .the events leading up to the discovery took place in the late 1940s and it turns out to be a fascinating story .i will only briefly comment on the re - discovery of the effect by yakir aharonov and david bohm in the late 1950s .the brevity of my comments is no reflection on their work , which was very significant since it marked the beginning of gauge field theories . although i had many hours of discussion with david bohm , the effect was not high on the agenda , as we were interested in a wider range of physical and philosophical questions .i will leave a discussion of the ab s papers and the subsequent to the development of general gauge theory to others .as i was beginning to collect the background material , my attention was drawn to the abstract of a talk given by werner franz to a physical society meeting in danzig in 1939 .it left me with the impression he might have been aware of the effect even at this early date .could the talk he presented have contained a first mention of this effect and _ ipso facto _could he have been the first to discover this effect ?how does one investigate something that took place in the then free city of danzig in may 1939 , when it was just about to be involved in violent political events ? . by august, the city had undergone a _ coup detat _ and later in september a german battleship used the harbour to open fire on the polish city of westerplatte .not the atmosphere to think deeply about physics! the starting point for this investigation , then , can only be the franz abstract .here we reproduce it in its original form . 
item 5 . w. franz ( königsberg , pr ) : _ elektroneninterferenzen im magnetfeld . _ nach de broglie ist einem elektron vom impuls eine wellenlänge zugeordnet , worin die plancksche konstante . im magnetfeld tritt zum korpuskularen impulse der zusatz , wo das vektorpotential des magnetfeldes . da nur bis auf einen gradienten bestimmt ist , hat im magnetfeld auch die wellenlänge keinen eindeutigen physikalischen sinn . doch fällt diese vieldeutigkeit bei der physikalischen anwendung , der interferenz . - der wellenzahlvektor eines elektronenfeldes
, nach de broglie also , ist der gradient der phase , muss also rot - frei sein . im magnetfeld ist nun rot ; der zusatz stellt die bedingung rot wieder her . - bei der beugung am doppelspalt ergibt sich als bedingung für ein interferenzmaximum , worin die geometrische gangdifferenz der beiden möglichen korpuskularen bahnen und der magnetische fluss durch die von ihnen eingeschlossene fläche ist , eine ganze zahl . hiernach ergibt sich in übereinstimmung mit der erfahrung , dass die interferenz nur durch die richtung bestimmt wird , in welcher die elektronen die spalte erreichen und verlassen . die durch die bahnkrümmung im magnetfeld hervorgerufenen gangdifferenzen werden durch den zusatz aufgehoben . [ multiblock footnote omitted ] although franz clearly states that the interference depends on the magnetic flux enclosed by the electron paths , he adds that the path difference is " caused by the curvature of the paths in the magnetic field " . this last statement clouds the issue because it is unclear whether he realised that the electron paths need not experience any magnetic field at
all and still produce the same interference shift provided the path encloses the field . this is the key to the whole effect . i originally thought that this was an abstract of a paper , but i failed to find any such paper . it then transpired that it was an abstract of a seminar that franz gave to the physics meeting " gauverein ostland " in danzig , held from 18 to 29 may 1939 . soon after this abstract came to my attention , i was approached by gottfried möllenstedt , an experimentalist who had pioneered some brilliant electron interference experiments , and asked to supply some biographical details of both ehrenberg and siday . he was preparing a paper on the history of the electron biprism , an instrument that he had perfected and had used in his electron interference experiments . in this article , möllenstedt informs us that he attended the danzig meeting as a young physics student and it was at that meeting that he heard the term " electron interferometry " for the first time . to recall the meeting he used a paper by franz written in 1965 , and translated a key part of this paper , which is reproduced here . in presence of an electromagnetic field , the momentum of a particle with charge $e$ is known to be $ { \bm p } = m { \bm v } + \frac{e}{c } { \bm a } $ , where $ { \bm a } $ is the vector potential from which the magnetic field strength is determined as $ { \bm b } = \nabla\times { \bm a } $ . the phase difference between two rays ( a ) and ( b ) connecting two points 1 and 2 is determined by $ \hbar\,\delta\varphi = \int_{1}^{2(a ) } { \bm p } \cdot\text{d } { \bm r } - \int_{1}^{2(b ) } { \bm p } \cdot\text{d } { \bm r } $ . introducing the expression for $ { \bm p } $ from above , the term $ m { \bm v } $ yields the same path difference as in absence of a magnetic field whereas , according to stokes theorem , the loop integral over $ \frac{e}{c } { \bm a } $ may be transformed to a surface integral over $ { \bm b } = \nabla\times { \bm a } $ , i.e.
, the magnetic flux $ \phi_m $ , yielding $ \delta\varphi = \frac{e}{\hbar c}\,\phi_m $ . this simple relation , which should be the first thing taught in a lecture on wave mechanics for beginners after introducing the magnetic field ( strangely enough i could not find it in any lecture notes except my own ) , shows that the phase difference between electron rays depends on the magnetic flux included between the rays , even if the rays do not run in a magnetic field . in 1939 , after the lecture by walter franz on _ elektroneninterferenzen im magnetfeld _ , walther kossel discussed the possibility of an experimental proof , but he came to the conclusion that , at that time , an experimental proof was not feasible . möllenstedt writes , " i remember w. kossel saying ' i hear the message but i lack an electron interferometer . ' " my reading of these last two sentences strongly suggests that there was a discussion about the ab effect at that danzig meeting , but it was brought to an end simply because no one in the group had the means to explore the effect experimentally . but did that mean that franz had really discovered the effect ? in contrast to this , i have a letter from möllenstedt dated feb . 12 1993 , six years before möllenstedt s paper appeared .
in it möllenstedt writes " i am enclosing a copy of a recent paper on the " amerigo effect " printed in " physikalische blätter " . i think that possibly the ab effect is also some kind of an amerigo effect for which the credit should rather have been given to ehrenberg and siday . " the " amerigo effect " was a phrase coined by möllenstedt and walther kossel to refer to the mis - attribution of the discoverer of some physical effect . the word takes its meaning from amerigo vespucci , who sailed west after columbus made his discovery of america . the medici bank of florence , for whom amerigo worked as a local branch manager in seville , spotting an advertising opportunity , decided to give circulation in europe to an account of amerigo s journey . a geographer , waldseemüller , subsequently attributed the discovery of the new continent to amerigo ! unfortunately möllenstedt died in 1997 and by the time the " history of the electron biprism " article reached me , it was too late to obtain a clarification of this point , so it is still an open question as to whether franz had spotted the effect . werner ehrenberg was born in berlin , studied philosophy , physics and mathematics at the university of berlin and took his phd at heidelberg . he was an assistant in the physical institute of the technische hochschule in stuttgart but was dismissed for being jewish in 1933 and sought refuge in the uk . he worked at birkbeck under blackett on a grant from the academic assistance council , before working in industry during ww ii . at the end of the war he returned to birkbeck where he worked under j. d. bernal before becoming established in his own right , finally taking the chair of experimental physics at birkbeck . his interests in physics were wide and varied . his early interests were in x - rays and electron optics . indeed this early work centred on the task of developing a method to produce soft focus x - rays that could be used on biological molecules . with walter spear he developed and built the fine focus x - ray generator that maurice wilkins , rosalind franklin and ray gosling used to study the structure of dna . it was this experimental work that enabled crick and watson to propose the double helix structure that won them , together with wilkins , the nobel prize . ehrenberg s later interests were in electrical conduction in semiconductors and metals . ray siday was a totally different character . he took a first in the b.sc . special physics ( london ) and worked with patrick blackett on nuclear physics .
in 1938 , after spending several years on the south sea island of tahiti , he returned to birkbeck and began working on beta - spectra . here he developed a keen interest in electron optics . at the time of the publication of their paper reporting what has become known as the ab effect , he was working at edinburgh university . the collaboration between ehrenberg and siday started in 1933 when both of them were first at birkbeck , although from reading the personal recollections of werner ehrenberg , it seems that many of these discussions took place in the local pubs ! these early discussions were eventually interrupted by ray siday s adventures in tahiti and then , of course , by the war , so , in effect , they did not start collaborating again until after the war when they both returned once again to birkbeck . their discussions were very wide ranging , but it was the principles of electron optics that was the focus of their attention and it was these discussions that led them eventually to predict what has become known as the ab effect . their paper reporting this effect appeared ten years before the classic paper of aharonov and bohm . i say at least , because in 1946 , siday took a three year ici fellowship at edinburgh to continue his work on beta - spectra with norman feather who , in turn , had previously worked with ernest rutherford in the cavendish at cambridge . siday s work on beta spectra involved , among other things , the focussing of the beta rays in magnetic fields . this is what sustained his interests in electron optics in general . in the pioneering days of designing electromagnetic lenses , much use was made of the analogy with optical lens systems . of course in optics , one had long been aware of the tensions between ray optics on the one hand and wave optics on the other . in the case of electrons there was a similar tension , between the particle properties , rays , and their wave properties . in geometric optics the equation of the ray between two points $a$ and $b$ is obtained from the variation principle $$\delta\int_{a}^{b } n\,\text{d}s = 0 ,$$ this is essentially fermat s principle , which is analogous to hamilton s principle in dynamics $$\delta\int_{a}^{b } { \bm p}\cdot\text{d}{\bm r } = 0 ,$$ here ${\bm p}$ is the conjugate momentum obtained from the lagrangian of the electron in an electromagnetic field . ehrenberg and siday showed , in a manner that would be regarded as rather clumsy today , that this momentum would be given by $$\begin{aligned } { \bm p } = m{\bm v } + { \bm a } \end{aligned}$$ thus they concluded that one could think of a refractive index for the electron lens as given by $$\begin{aligned } \mu \propto \left[m{\bm v } + { \bm a}\right]\cdot\hat{\bm s } \label{e : ref}\end{aligned}$$ here $\int\mu\,\text{d}s$ is simply the optical path length of the ray , so that if we divide this by the de broglie wave length of the electrons , we can find the phase difference between the points $a$ and $b$ on the ray . the practical problem that siday was thinking about was the design of a magnetic lens for his beta spectrometer , so the electrostatic potential was put to zero . thus one considered an optical path defined by $\int\left[m{\bm v } + { \bm a}\right]\cdot\text{d}{\bm r}$ . now one can clearly see a problem . this optical path depends upon the vector potential , so clearly the refractive index is not a gauge invariant expression . this problem was clearly recognised by ehrenberg and siday and is discussed carefully in their paper .
in order to motivate the discussion of gauge invariance , ehrenberg and siday recalled the mathematical conditions that must be placed on an optical refractive index . this index , $n$ , is a measured quantity and so must be finite and single valued . it must also be continuous , except at a finite number of surfaces separating any different media traversed by the ray . the same conditions must be satisfied by the equivalent electron refractive index defined by the rhs of equation ( [ e : ref ] ) . this means that the refractive index must be fixed everywhere in space once it is fixed in the neighbourhood of one point . furthermore , it must be single valued . it should have no singularities and any discontinuities should be of such a nature that they appear as limiting cases of a continuous refractive index . since ${\bm a}$ occurs as an additive term in the refractive index , the same conditions must be applied to it and hence to ${\bm a}$ itself . now these conditions are just those for the validity of stokes theorem and this is the only valid restriction which must be imposed on it . a consequence of this is that ${\bm a}$ can not in general be chosen so as to vanish even though the magnetic field vanishes locally . it is this condition that is vital for understanding the ab effect . as is well known by now , the ambiguity in ${\bm a}$ arises because all we insist on is that $\nabla\times{\bm a } = { \bm b}$ , the magnetic field . thus it is always possible to add to ${\bm a}$ the gradient of a scalar , $\nabla\chi$ , without changing ${\bm b}$ . this follows directly from the identity $\nabla\times\nabla\chi = 0 $ . this arbitrariness can not produce any observable effects in the geometrical aspects of the optics . however wave optics is a different matter . to bring out the consequences , ehrenberg and siday consider the phase difference between any two rays . using equation ( [ e : ref ] ) we have $$\begin{aligned } \delta\phi \propto \int _ { 0}^{1}[m{\bm v}+{\bm a}]\cdot{\bm dr}-\int _ { 0}^{1}[m{\bm v}+{\bm a}]\cdot{\bm dr'}\end{aligned}$$ where ${\bm dr}$ is an element of the first path and ${\bm dr'}$ is an element of the second path . now since the momentum of the electron is constant and using stokes theorem , we find $$\begin{aligned } \delta\phi \propto \oint { \bm a}\cdot{\bm dr } = \int \left(\nabla\times{\bm a}\right)\cdot{\bm ds } = \phi_m\end{aligned}$$ thus we see that , again as in the case of franz , the phase difference between the two paths depends on the flux enclosed by the closed path . now ehrenberg and siday take the argument one stage further . they consider the specific case when the magnetic field is confined to a region within the circuit in such a way that the electrons do not pass through the magnetic field . in other words , if the electron should follow either path it will not experience any field . they then ask the crucial question : " can we gauge transform away the vector potential so that there is no contribution from the magnetic flux lying entirely within the circuit ? " they show that this is not possible without introducing a singularity which would lead to the breakdown of stokes theorem . thus they conclude that it is not possible to find a vector potential which satisfies stokes theorem and removes the anisotropy of the whole space outside the field .
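to get a feel for the size of the effect implied by the flux relation above , the following short python sketch ( purely illustrative ; the solenoid radius and field strength are invented example values , and si units with the single - electron flux quantum $h / e$ are assumed ) evaluates the phase difference $2\pi\phi_m/(h / e)$ between two paths enclosing a small shielded flux .

```python
# illustrative evaluation of the aharonov-bohm phase difference derived above.
# the solenoid radius and field are arbitrary example values, not from the text.
import math

h = 6.62607015e-34      # planck constant [j s]
e = 1.602176634e-19     # elementary charge [c]
flux_quantum = h / e    # single-electron flux quantum, ~4.14e-15 wb

def ab_phase_shift(flux_wb):
    """phase difference (radians) between two paths enclosing a flux: 2*pi*flux/(h/e)."""
    return 2.0 * math.pi * flux_wb / flux_quantum

# a solenoid of radius 10 micrometres carrying a field of 1 mt, as an example
radius = 10e-6                         # [m]
field = 1e-3                           # [t]
flux = field * math.pi * radius**2     # enclosed flux [wb]
print(ab_phase_shift(flux))            # ~ 4.8e2 rad, i.e. many fringe shifts
```

even this modest, fully shielded flux corresponds to hundreds of radians of phase, which is why the fringe shift is readily observable once an electron interferometer is available.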
the fact that this anisotropy can not be removed from the field - free region as a whole emphasises that the electron - optic refractive index contains the vector potential and not the magnetic field strength . this led them to the conclusion that wave - optical effects will arise from an isolated magnetic field even though the rays travel in a field free region . to emphasise this claim they sketch an experimental situation that would demonstrate this effect . this is shown in figure [ fig : magflux ] ( the magnetic flux isolated from the rays ) . both ehrenberg and siday found this result very strange and totally contrary to what they would expect . at that stage the vector potential was simply regarded as a mathematical symbol with no observable consequences . yet here , in quantum mechanics , it has an observable consequence . siday was overheard by john jennings one day remarking to ehrenberg , " isn't it odd that the only physical effect produced by a magnetic flux is in a superconductor ? " ehrenberg replied " ach ! but what better superconductor than free space ! " siday went around explaining the effect to anyone who would listen and asking where the argument had gone wrong , if indeed it had . all were puzzled and felt it was wrong , but could not put their finger on the error . siday became incensed with the lack of enthusiasm . " there are all these sodding geniuses poncing around but if they ever have to do anything - oh yes , that s a different matter isn't it ! " eventually he became so frustrated that he decided to present the conundrum to max born to see what he would say . recall that at this time , siday was working at edinburgh university in norman feather s laboratory , while born was then tait professor of natural philosophy at the same university , so he decided to invite max born to his laboratory . david butt ( one of the birkbeck group that later carried out one of the first experiments to test for quantum non - locality over distances of up to 6 m ) shared the same laboratory with siday and , at the time of the meeting , was sitting in the corner of the lab while siday and born discussed the effect . unfortunately he could not hear the actual conversation , which lasted about 45 minutes , but he did see born shaking his head from side to side every so often , seemingly with incomprehension . at the end of the discussion , the two of them rose , shook hands and born departed with a face looking like thunder . as soon as the door closed , siday came up to butt and exclaimed " well , sod it , i got absolutely nothing out of him . who can i ask now ? " butt has often puzzled about born s reaction . the problem was simple enough to describe . all the others , mainly experimental colleagues , had understood what siday was claiming , but could not spot the ` error . ' if what siday had presented to born was an obvious consequence of standard quantum mechanics , surely born would have said so , but he did not . nor did he offer any criticism of the explanation of the effect .
in spite of this apparent indifference , ehrenberg and siday decided to go ahead and publish the work in the proceedings of the physical society , which was one of the main british journals at the time . unfortunately they clearly had not been advised by a publicity agent , as the paper was entitled " the refractive index in electron optics and the principles of dynamics " , a title which gives no clue as to the radical nature of what they had discovered . even the abstract did not highlight the effect they had found , but the conclusion was quite clear : one should observe a fringe shift proportional to the magnetic flux enclosed by , and isolated from , the passing electrons . david butt also informed me that a few years later , in about 1957 , he had been talking with siday , and asked him if either he or ehrenberg had been contacted by anyone concerning the paper . siday is reported as saying , " no . we have not heard a bloody thing not as much as a whisper . it has fallen to the bottom like a lump of lead " . but why should they ? at that stage , the vector potential was still regarded as merely a mathematical convenience and could be gauge transformed away . therefore it should produce no physical effect . furthermore the effect was presented in a context in which it appeared to be a problem in designing electron lenses , not a general new effect . the choice of title of their paper only confirmed this . however it was not only the title of the paper ; the presentation suffered from two further disadvantages . firstly , the effect was discussed in the specialist terms of ` equivalent refractive indices ' , using the optical path analogy for the electrons . this was the language in common use amongst electron lens specialists at the time , but this terminology was not in general use by those working in quantum mechanics , so it gave the impression , wrongly , that it was a particular effect that was of interest only to specialists in that field . nevertheless the fringe shift was calculated correctly and the paper gave a clear discussion of the consequences . ehrenberg and siday concluded that " one might therefore expect wave - optical phenomena to arise which are due to the presence of a magnetic field but not due to the magnetic field itself , i.e. , which arise whilst the rays are in field - free regions of space . " the second disadvantage was that the journal in which they published was not one of the leading journals at that time , being a publication of the british physical society before being taken over by the british institute of physics .
as a consequence , ehrenberg and siday have not got the recognition that they deserve from the physics community , particularly in america . in saying this , i want to make it absolutely clear that no blame can be attached to aharonov or bohm . bohm was unaware of the original es paper when they wrote their first paper . they came to the same conclusion independently . in their second paper they acknowledged that the ehrenberg and siday paper had obtained the same results using a ` semi - classical ' treatment , a rather unfortunate choice of words . the case of ehrenberg and siday falls neatly into what kossel and möllenstedt call the ` amerigo - effekt ' . unfortunately such cases can and do happen when one is exploring a new and unexpected effect , long before the foundations of the phenomenon have been properly laid . in fact , if their discussion were put into modern terms , we would see that ehrenberg and siday were exploring the common mathematical background shared by both optics and electron optics , namely , the symplectic group and its double cover , the metaplectic group . the discussion of rays follows directly from the symplectic group . in fact the rays are simply generated by symplectomorphisms . on the other hand , the wave properties follow from the covering group , namely , the metaplectic group . what ehrenberg and siday had discovered in their own way was that the homotopy group of the covering space was non - trivial , and they were on the way to discovering the notion of a winding number . alas , being experimentalists , they would not have known about these advanced mathematical structures , then or even later when these techniques became better known . ehrenberg and siday s work remained unknown for ten years before the effect was rediscovered by aharonov and bohm . their paper goes straight to the heart of the problem . they note that although in classical physics the fundamental equations can always be written in terms of fields , in quantum mechanics the potentials can not be removed from these fundamental equations , and therefore this must have observational consequences . they then ask , ` what are these consequences ? ' lev vaidman , a long time collaborator of aharonov , told me that yakir had spotted the vector potential producing observable effects but did not realise that potentials were universally considered as mere mathematical artefacts . ah ! the innocence of young research students ! he went to talk with bohm , his then supervisor , and they discussed the idea . this discussion led them to propose an actual experiment based , in actual fact , on figure 1 that appeared in the ehrenberg and siday paper . their proposal was to place a shield to the right of the two slits and then to place immediately to the right of the shield , in its geometric shadow , a small long solenoid with its axis parallel to the slits . to ensure none of the field of the solenoid could spill out into the region of the electron paths , one could place a strip of mu - metal suitably shaped to conduct the field produced by the ends of the solenoid around the electron paths . this ensures that the electrons move in a field free region . this was precisely what was done later in the beautiful experiments carried out by möllenstedt and bayh , and by bayh . aharonov and bohm were rather fortunate in that an experimentalist , robert chambers at bristol university where they were working , immediately set about doing an experiment to show the effect existed . he used a magnetic whisker and clearly demonstrated the effect .
however , because of the unexpected nature of the effect , people argued that as the magnetic whisker produced an unshielded field , the effect might be due , after all , to the field rather than the potential . this was wishful thinking . the appearance of bayh s results immediately showed that any arguments about stray fields causing the effect could be ruled out . since those early days a number of more refined experiments have all confirmed the effect . the full details of all these experiments can be found in a review article by olariu and popescu . it is now clear that the ab effect arises directly from the schrödinger equation , as was explained in the first aharonov and bohm paper . yet clear as the paper was , it too was received with some scepticism and even opposition to begin with . for example , victor weisskopf wrote in some brandeis lecture notes , " the first reaction to this work is that it is wrong ; the second is that it is obvious . " this effect is now considered to be of great theoretical importance as it is the first example of quantum gauge phenomena . gauge theories have become central to the modern theory of particle interactions , spawning many examples of gauge phenomena . the importance of the effect is reflected in a leader article published in nature where it was proposed that aharonov and bohm should share the nobel prize with michael berry for their contributions to the understanding of gauge effects . i asked bohm for his reaction to this suggestion . he replied that he did not think the ab effect alone was that noteworthy and added : " after all it is only a straight forward application of standard quantum mechanics , and anyway ehrenberg and siday were there first ! " for me that sentence captures the way physics should be done : with humility , generosity and honesty . i should like to thank my colleague david butt for his detailed reminiscences of his interaction with ray siday ; without this background , this paper would have been a poorer report . i am also indebted to the late john jennings for his input and translation of the abstract of franz s talk given in danzig in 1939 . bayh , w. , messung der kontinuierlichen phasenschiebung von elektronenwellen im kraftfeldfreien raum durch das magnetische vektorpotential einer wolfram - wendel , _ zeit . für physik . _ , * 169 * ( 1962 ) 492 - 510 . franz , w. , elektroneninterferenzen im magnetfeld , _ verh . d. deutschen physikalischen gesellschaft _ * 20 * ( 1939 ) 65 - 6 , and _ physikalische berichte _ , 21st annual volume ( 1940 ) ; these references are identical in content . möllenstedt , g. and bayh , w. , messung der kontinuierlichen phasenschiebung von elektronenwellen im kraftfeldfreien raum durch das magnetische vektorpotential einer luftspule , _ naturwissenschaften _ , * 49 * ( 1962 ) 81 - 2 .
this paper traces the early history of the aharonov - bohm effect . it appears to have been ` discovered ' at least three times to my knowledge before the defining paper of aharonov and bohm appeared in 1959 . the first hint of the effect appears in germany in 1939 , immediately disappearing from sight in those troubled times . it reappeared in a paper in 1949 , ten years before the defining paper appeared . here i report the background to the early evolution of this effect , presenting first hand unpublished accounts reported to me by colleagues at birkbeck college in the university of london .
the dynamics of particles dispersed in a host solvent , how they react to and affect the fluid motion , is a relevant problem for science as well as engineering fields . understanding it is necessary to account for the macroscopic properties of suspensions ( such as the viscosity , elastic modulus , and thermal conductivity ) , as well as the mechanics of protein unfolding , the kinetics of bio - molecular reactions , and the tumbling motion of bacteria . as the particles move , they generate long - range disturbances in the fluid , which are transmitted to all other particles . properly accounting for these hydrodynamic interactions has proven to be a very complicated task due to their non - linear many - body nature . several numerical methods have been developed to explicitly include the effect of hydrodynamic interactions in a suspension of particles . however , their applicability to non - newtonian host solvents or solvents with internal degrees of freedom is not straightforward , and in some cases not possible . we have proposed an alternative direct numerical simulation ( dns ) method , which we refer to as the smooth profile ( sp ) method , that simultaneously solves for the host fluid and the particles . the coupling between the two motions is achieved through a smooth profile for the particle interfaces . this method is similar in spirit to the fluid particle dynamics method , in which the particles are modeled as a highly viscous fluid . the main benefit of our model is the ability to use a fixed cartesian grid to solve the fluid equations of motion . the sp method has been successfully used to study the diffusion , sedimentation , and rheology of colloidal dispersions in incompressible fluids . recently it has been extended to include self - propelled swimmers and to handle compressible host solvents . so far , however , only spherical particles have been considered .
in this paper we extend the sp method to be applicable to arbitrary rigid bodies . we show the validity of the method by computing the mobility / friction tensors for a large variety of geometric shapes . the results are compared to numerical solutions of the stokes equation , which are essentially exact , as well as to experimental data . the agreement with our results is excellent in all cases considered here . future papers in this series will deal with the dynamical properties of rigid body dispersions in detail ; with this work we aim to introduce the basics of the model and show its validity . we solve the dynamics of a rigid body in an incompressible newtonian host fluid using the sp method . the basic idea behind this method is to replace the sharp boundaries between the particles and the fluid with a smooth interface . this allows us to define all field variables over the entire computational domain , and results in an efficient method to accurately resolve the many - body hydrodynamic interactions . the motion of the host fluid is governed by the incompressible navier - stokes equation $$\rho\left(\partial_t + { \bm{u}}_f\cdot\nabla\right){\bm{u}}_f = \nabla\cdot{\bm{\sigma } } , \qquad \nabla\cdot{\bm{u}}_f = 0$$ where ${\bm{u}}_f$ is the fluid velocity , $\rho$ the density , and ${\bm{\sigma}}$ the newtonian stress tensor $$\begin{aligned } { \bm{\sigma } } = -p\,{\bm{\mathsf{i } } } + \eta\left[\nabla{\bm{u } } + \left(\nabla{\bm{u}}\right)^{t}\right ] \label{e : dstress}\end{aligned}$$ with $p$ and $\eta$ the pressure and viscosity of the fluid , respectively . the motion of the particles is given by the newton - euler equations $$\dot{\bm{r}}_i = { \bm{v}}_i , \qquad \dot{{\bm{\mathsf{q}}}}_i = \text{skew}\left({\bm{\omega}}_i\right)\cdot{{\bm{\mathsf{q}}}}_i , \qquad m_i\dot{\bm{v}}_i = { \bm{f}}_i , \qquad { { \bm{\mathsf{i}}}}_i\cdot\dot{\bm{\omega}}_i = { \bm{n}}_i$$ where ${\bm{r}}_i$ , ${\bm{v}}_i$ , ${\bm{\omega}}_i$ , and ${{\bm{\mathsf{q}}}}_i$ are the center of mass ( com ) position , velocity , angular velocity , and orientation matrix , respectively , of the $i$-th rigid body ( $i = 1 , \dots , n$ ) . the total force and torque experienced by the particles are denoted as ${\bm{f}}_i$ and ${\bm{n}}_i$ , respectively , with ${{\bm{\mathsf{i}}}}_i$ the moment of inertia tensor and $\text{skew}\left({\bm{\omega}}\right)$ the skew - symmetric angular velocity matrix $$\text{skew}\left({\bm{\omega}}\right ) = \begin{pmatrix } 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}$$ the force ( and torque ) on each of the particles is comprised of hydrodynamic contributions , particle - particle interactions ( including a core potential to prevent overlap ) , and a possible external field contribution . for the present study , we neglect thermal fluctuations , but they are easily included within the sp formalism . the coupling between fluid and particles is obtained by defining a total velocity field ${\bm{u}}$ , with respect to the fluid and particle velocity fields , as $$\begin{aligned } { \bm{u } } = \left(1-\phi\right){\bm{u}}_f + \phi{\bm{u}}_p \label{e : up}\end{aligned}$$ where $\phi$ is a suitably defined sp function ( $0\le\phi\le 1 $ ) that interpolates between fluid and particle domains ( as described below ) , and $\phi{\bm{u}}_p = \sum_i\phi_i\left[{\bm{v}}_i + { \bm{\omega}}_i\times\left({\bm{x } } - { \bm{r}}_i\right)\right]$ . the modified navier - stokes equation which governs the evolution of the total fluid velocity field ( host fluid + particle ) is given by $$\rho\left(\partial_t + { \bm{u}}\cdot\nabla\right){\bm{u } } = \nabla\cdot{\bm{\sigma } } + \rho\phi{\bm{f}}_p$$ with $\nabla\cdot{\bm{u } } = 0 $ and $\phi{\bm{f}}_p$ a body force that enforces the particle rigidity ( defined below ) . the stress tensor is defined as in eq . ( [ e : dstress ] )
, but in terms of the total fluid velocity . the scheme used to solve the equations of motion is the same fractional - step algorithm introduced in ref . , with minor modifications needed to account for the non - spherical geometry of the particles . let ${\bm{u}}^{n}$ be the velocity field at time $t_n$ ( $h$ the time interval ) . we first solve for the advection and hydrodynamic viscous stress terms , and propagate the particle positions ( orientations ) using the current particle velocities $$\begin{aligned } { \bm{u}}^{* } & = { \bm{u}}^{n } + \int_{t_n}^{t_n+h}{\text{d}s\,}\left[-\nabla\cdot\left({\bm{u}}{\bm{u}}\right ) + \rho^{-1}\nabla\cdot{\bm{\sigma}}\right]\label{e : ustar}\\ { \bm{r}}_i^{n+1 } & = { \bm{r}}_i^n + \int_{t_n}^{t_n+h}{\text{d}s\,}{\bm{v}}_i \label{e : rn } \\ { { \bm{\mathsf{q}}}}_i^{n+1 } & = { { \bm{\mathsf{q}}}}_i^{n } + \int_{t_n}^{t_n+h}{\text{d}s\,}\text{skew}\left({\bm{\omega}}_i\right)\cdot{{\bm{\mathsf{q}}}}_i\label{e : qn}\end{aligned}$$ given the dependence of the profile function on the particle position and orientation , we must also update the particle velocity field to $$\begin{aligned } \phi^{n+1}{\bm{u}}_p^{* } = \sum_i\phi_i^{n+1}\left[{\bm{v}}_i^{n } + { \bm{\omega}}_i^{n}\times\left({\bm{x } } - { \bm{r}}_i^{n+1}\right)\right]\end{aligned}$$ next , we compute the hydrodynamic force and torque exerted by the fluid on the particles , by assuming momentum conservation . the time integrated hydrodynamic force and torque over a period $h$ are equal to the momentum exchange over the particle domain $$\begin{aligned } \left[\int_{t_n}^{t_n+h}{\text{d}s\,}{\bm{f}}_i^{\text{h}}\right ] & = \int{\text{d}{\bm{x}}\,}\rho\phi_i^{n+1}\left({\bm{u}}^ * - { \bm{u}}_p^{*}\right )\\ \left[\int_{t_n}^{t_n+h}{\text{d}s\,}{\bm{n}}_i^{\text{h}}\right ] & = \int{\text{d}{\bm{x}}\,}\left[\left({\bm{x } } - { \bm{r}}_i^{n+1}\right)\times \rho\phi_i^{n+1}\left({\bm{u}}^ * - { \bm{u}}_p^*\right)\right]\end{aligned}$$ from this , and any other forces on the rigid bodies , we update the velocities of the particles as $$\begin{aligned } { \bm{v}}_i^{n+1 } & = { \bm{v}}_i^{n } + m_i^{-1}\int_{t_n}^{t_n+h}{\text{d}s\,}\left[{\bm{f}}_i^{\text{h } } + { \bm{f}}_i^{\text{c } } + { \bm{f}}_i^{\text{ext}}\right]\\ { \bm{\omega}}_i^{n+1 } & = { \bm{\omega}}_i^{n } + { { \bm{\mathsf{i}}}}_i^{-1}\cdot\int_{t_n}^{t_n+h}{\text{d}s\,}\left[{\bm{n}}_i^{\text{h } } + { \bm{n}}_i^{\text{c } } + { \bm{n}}_i^{\text{ext}}\right]\end{aligned}$$ finally , the particle rigidity is imposed on the total fluid velocity through the body force $\phi{\bm{f}}_p$ in the navier - stokes equation $$\begin{aligned } { \bm{u}}^{n+1 } & = { \bm{u}}^{* } + \left[\int_{t_n}^{t_n+h}{\text{d}s\,}\phi{\bm{f}}_p\right]\\ \left[\int_{t_n}^{t_n+h}{\text{d}s\,}\phi{\bm{f}}_p\right ] & = \phi^{n+1}\left({\bm{u}}_p^{n+1 } - { \bm{u}}^{*}\right ) - \frac{h}{\rho}\nabla p_p\end{aligned}$$ with the pressure $p_p$ due to the rigidity constraint obtained from the incompressibility condition . for computational simplicity , we consider each particle $i$ as being composed of a rigid collection of spherical beads ( see fig.[f : beads ] ) , with positions , velocities , and angular velocities given by ${\bm{x}}_\alpha$ , ${\bm{v}}_\alpha$ , and ${\bm{\omega}}_\alpha$ . we use upper and lowercase variables to differentiate between rigid body particles and the spherical beads used to construct them , as well as the shorthand $\alpha\in i$ to refer to the beads belonging to the rigid body $i$ .
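the kinematic part of the update , eqs . ( [ e : rn ] ) and ( [ e : qn ] ) , can be illustrated with a small runnable python sketch ( illustrative only ; the explicit euler step follows the equations above , but the svd re - orthonormalization of the orientation matrix is an assumption of this sketch , added to counter numerical drift , and the velocity values are arbitrary examples ) .

```python
# minimal numerical sketch of the rigid-body kinematic update used above:
# explicit euler for the com position and the orientation matrix. the svd
# re-orthonormalization is an assumption of this sketch, not taken from the text.
import numpy as np

def skew(omega):
    """skew-symmetric matrix such that skew(omega) @ x = omega x x."""
    wx, wy, wz = omega
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def propagate(R, Q, V, Omega, h):
    """one explicit euler step for the com position and orientation matrix."""
    R_new = R + h * V
    Q_new = Q + h * skew(Omega) @ Q
    # project back onto the nearest orthogonal matrix to remove the euler drift
    u, _, vt = np.linalg.svd(Q_new)
    return R_new, u @ vt

R, Q = np.zeros(3), np.eye(3)
V, Omega = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5])
for _ in range(1000):
    R, Q = propagate(R, Q, V, Omega, h=1e-3)
print(R)                                 # com has translated along x
print(np.allclose(Q @ Q.T, np.eye(3)))   # orientation matrix stays orthogonal
```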
the rigidity constraint on the bead velocities is given by $${\bm{v}}_\alpha = { \bm{v}}_i + { \bm{\omega}}_i\times{\bm{r}}_{i\alpha}$$ where ${\bm{r}}_{i\alpha } = { \bm{x}}_\alpha - { \bm{r}}_i$ is the distance vector from the com of the rigid body ( ${\bm{r}}_i$ ) to the bead s com ( ${\bm{x}}_\alpha$ ) . the rigidity constraint on the positions of the beads requires that the relative distances between any two of them remain constant . thus , the vectors $\tilde{\bm{x}}_{i\alpha } = { { \bm{\mathsf{q}}}}_i^{t}\cdot\left({\bm{x}}_\alpha - { \bm{r}}_i\right)$ , expressed within the reference frame of the particle , are constants of motion , where ${{\bm{\mathsf{q}}}}_i^{t}$ is the matrix transpose of ${{\bm{\mathsf{q}}}}_i$ . the individual positions of the beads can be directly obtained from the position and orientation of the rigid body to which they belong through $${\bm{x}}_\alpha = { \bm{r}}_i + { { \bm{\mathsf{q}}}}_i\cdot\tilde{\bm{x}}_{i\alpha}$$ ( fig.[f : beads ] : rigid body representation as an arbitrary collection of spherical beads . ) the beads should only be considered as a computational bookkeeping device , used to map the rigid particles onto the computational grid used to solve the fluid equations of motion . we are free to choose any representation of the rigid body . the advantage of this spherical - bead representation is the ease with which the smooth profile function of an arbitrary rigid body can be defined . we start with the profile function for spherical particles introduced in ref . $$\begin{aligned } \phi_\alpha\left({\bm{x}}\right ) & = \frac{h\left[\left(a+\xi/2\right ) - r_\alpha\right ] } { h\left[\left(a+\xi/2\right ) - r_\alpha\right ] + h\left[r_\alpha - \left(a - \xi/2\right)\right]}\\ h(x ) & = \begin{cases } \exp{\left(-\delta^2 / x^2\right ) } & x \ge 0 \\ 0 & x < 0 \end{cases}\end{aligned}$$ where ${\bm{r}}_\alpha = { \bm{x } } - { \bm{x}}_\alpha$ is the distance vector from the sphere center to the field point of interest ( $r_\alpha$ its magnitude ) , $a$ is the radius of the spheres , and $\xi$ is the thickness of the fluid - particle interface . we then define the smooth profile function of the rigid body as the normalized sum of the profile functions of its constituent beads ; the normalization factor is required to avoid double - counting in the case of overlap between beads belonging to the same rigid particle ( beads belonging to different particles are prevented from overlapping by the core potential ) . we note that this representation of particles as rigid assemblies of spheres does not impose any constraints on the particle geometries , because the constituent beads are free to overlap with each other . if the dispersion under consideration is such that the inertial forces are negligible compared to the viscous forces , i.e. when the particle reynolds number $\mathrm{re } = \rho v a/\eta$ is vanishingly small ( $v$ and $a$ being the characteristic velocity and length scales ) , the incompressible navier - stokes equation reduces to the stokes equation $$\nabla p = \eta\nabla^{2}{\bm{u } } + \rho{\bm{f}}$$ with ${\bm{f}}$ any external forces on the fluid .
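as an illustration of how the interface function defined above behaves , the following python sketch ( illustrative only ; the bead positions , radius and interface thickness are arbitrary example values , and the clipped sum over beads is an assumption of this sketch standing in for the normalization discussed above ) evaluates the spherical profile and combines two overlapping beads into a rigid - body profile .

```python
# illustrative sketch of the smooth profile (sp) interface function h and phi.
# parameter values are arbitrary; the clipping of the summed bead profiles is an
# assumption standing in for the normalization discussed in the text.
import numpy as np

def h(x):
    """one-sided smoothing: exp(-d^2/x^2) for x > 0 (d = grid spacing = 1), else 0."""
    out = np.zeros_like(x, dtype=float)
    positive = x > 0
    out[positive] = np.exp(-1.0 / x[positive]**2)
    return out

def phi_sphere(points, center, a=4.0, xi=2.0):
    """smooth profile of a single spherical bead of radius a and interface width xi."""
    r = np.linalg.norm(points - center, axis=-1)
    num = h((a + xi / 2.0) - r)
    return num / (num + h(r - (a - xi / 2.0)))

def phi_rigid(points, bead_centers, a=4.0, xi=2.0):
    """rigid-body profile: capped sum of (possibly overlapping) bead profiles."""
    total = sum(phi_sphere(points, c, a, xi) for c in bead_centers)
    return np.clip(total, 0.0, 1.0)

# two overlapping beads along x; phi stays between 0 and 1 across the overlap
x = np.linspace(-10, 20, 61)
points = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=-1)
beads = [np.zeros(3), np.array([6.0, 0.0, 0.0])]
print(phi_rigid(points, beads).round(2))
```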
due to the linear nature of the stokes equation , the force ( torque ) exerted by the fluid on the particles is also linear in their velocities , $${\bm{f } } = -{{\bm{\mathsf{\zeta}}}}\cdot{\bm{u } } , \qquad { \bm{u } } = -{{\bm{\mathsf{\mu}}}}\cdot{\bm{f}}$$ where ${\bm{f}}$ and ${\bm{u}}$ are $6n$ dimensional force and velocity vectors , and ${{\bm{\mathsf{\zeta}}}}$ and ${{\bm{\mathsf{\mu}}}}$ are the friction and mobility matrices , respectively $$\begin{aligned } { { \bm{\mathsf{\zeta } } } } = \begin{pmatrix } { { \bm{\mathsf{\zeta}}}}^{tt}_{11 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{tt}_{1n } & { { \bm{\mathsf{\zeta}}}}^{tr}_{11 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{tr}_{1n } \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ { { \bm{\mathsf{\zeta}}}}^{tt}_{n1 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{tt}_{nn } & { { \bm{\mathsf{\zeta}}}}^{tr}_{n1 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{tr}_{nn } \\ { { \bm{\mathsf{\zeta}}}}^{rt}_{11 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{rt}_{1n } & { { \bm{\mathsf{\zeta}}}}^{rr}_{11 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{rr}_{1n } \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ { { \bm{\mathsf{\zeta}}}}^{rt}_{n1 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{rt}_{nn } & { { \bm{\mathsf{\zeta}}}}^{rr}_{n1 } & \cdots & { { \bm{\mathsf{\zeta}}}}^{rr}_{nn } \end{pmatrix}\end{aligned}$$ and the mobility matrix ${{\bm{\mathsf{\mu}}}}$ has the identical block structure with ${{\bm{\mathsf{\mu}}}}^{ab}_{ij}$ replacing ${{\bm{\mathsf{\zeta}}}}^{ab}_{ij}$ . the off - diagonal matrices are related through the lorentz reciprocal relations by ${{\bm{\mathsf{\zeta}}}}^{tr}_{ij } = \left({{\bm{\mathsf{\zeta}}}}^{rt}_{ji}\right)^{t}$ . the symmetric block matrices are composed of friction ( mobility ) matrices ${{\bm{\mathsf{\zeta}}}}^{tt}_{ij}$ , ${{\bm{\mathsf{\zeta}}}}^{tr}_{ij}$ , ${{\bm{\mathsf{\zeta}}}}^{rt}_{ij}$ , and ${{\bm{\mathsf{\zeta}}}}^{rr}_{ij}$ which couple the translational and rotational motion of particle $i$ with that of particle $j$ . thus , the whole problem of solving the dynamical equations of motion for a suspension of spheres in the stokes regime reduces to calculating the mobility or friction matrix . for a single spherical particle , translating ( rotating ) in an unbounded fluid under stick - boundary conditions , the translational and rotational friction coefficients are obtained from an exact solution of the stokes equation as $$\zeta^{t } = 6\pi\eta a , \qquad \zeta^{r } = 8\pi\eta a^{3}$$ exact solutions to the friction or resistance problem of two or three spherical particles are known , but for arbitrary many - particle systems , the complex non - linear nature of the hydrodynamic interactions makes it impossible to find a general solution . however , several methods have been developed to obtain accurate estimates for the mobility and friction matrices . two of the most popular approaches are the method of reflections and the method of induced forces . the former relies on a power series expansion of the flow field , in terms of the inverse particle distances , while the latter uses a multipole expansion of the force densities induced at the particle surface , with the truncation scheme determined by the angular dependence of the flow field .
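as a quick consistency check on the single - sphere limit quoted above , the following python sketch ( illustrative only ; a water - like viscosity and an arbitrary micron - sized radius are assumed ) evaluates the stokes friction coefficients and verifies that the corresponding mobility matrix is simply their inverse .

```python
# illustrative check of the single-sphere stokes coefficients quoted above:
# zeta_t = 6*pi*eta*a (translation), zeta_r = 8*pi*eta*a**3 (rotation).
# eta and a are arbitrary example values (roughly water and a 1-micron bead).
import numpy as np

eta = 1.0e-3        # viscosity [pa s]
a = 1.0e-6          # particle radius [m]

zeta_t = 6.0 * np.pi * eta * a
zeta_r = 8.0 * np.pi * eta * a**3

# for an isolated sphere the 6x6 friction matrix is diagonal and the mobility
# matrix is its inverse (no translation-rotation coupling).
zeta = np.diag([zeta_t] * 3 + [zeta_r] * 3)
mu = np.linalg.inv(zeta)
print(zeta_t, zeta_r)
print(np.allclose(mu, np.diag([1.0 / zeta_t] * 3 + [1.0 / zeta_r] * 3)))
```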
as an example of these approximation schemes , the popular rotne - prager ( rpy ) approximation to the mobility tensor can be obtained using the method of reflections , by truncating the hydrodynamic interactions to third order , which corresponds to a pair - wise representation $$\begin{aligned } { { \bm{\mathsf{\mu}}}}^{tt}_{ij } = \begin{cases } \dfrac{1}{6\pi\eta a}\,{\bm{\mathsf{i } } } & i = j \\[1ex ] \dfrac{1}{8\pi\eta r_{ij}}\left[\left(1 + \dfrac{2a^2}{3r_{ij}^2}\right){\bm{\mathsf{i } } } + \left(1 - \dfrac{2a^2}{r_{ij}^2}\right)\hat{\bm{r}}_{ij}\hat{\bm{r}}_{ij}\right ] & i\ne j \end{cases}\end{aligned}$$ with $r_{ij}$ the distance between particles $i$ and $j$ and $\hat{\bm{r}}_{ij}$ the unit separation vector . we will compare our dns results with those obtained using the method of induced forces , using the freely available ` hydrolib ` library . calculations of the friction coefficients for a sedimenting array of spherical clusters , using the induced force method truncated to third order , give only a very small error with respect to the experimental results . we therefore consider these values as the exact solution to the stokes equation ( se ) . finally , we note that neither the method of reflections nor the induced force method is able to directly take into account lubrication forces , caused by the relative motion of particles at short distances , as these would require the high - order terms to be included in both expansions . following durlofsky et al. , these contributions are usually added to the friction matrix by assuming a pairwise superposition approximation . in this work we will consider only the collective motion of a rigid agglomerate of spheres , so that lubrication effects need not be considered . in what follows we report the friction coefficients for a variety of non - spherical particles under steady translation ( rotation ) through a fluid . the numerical simulations are performed in three dimensions under periodic boundary conditions . the modified navier - stokes equation is discretized with a dealiased fourier spectral scheme in space and an euler scheme in time . the motion of the particles is integrated using a second order adams - bashforth scheme . the lattice spacing $\delta$ is taken as the unit of length , and the unit of time is given by $\rho\delta^{2}/\eta$ , with $\rho$ and $\eta$ the density and viscosity of the fluid . the integration time step is held fixed . we only consider neutrally buoyant particles ( $\rho_p = \rho$ ) , so gravity effects are not considered . furthermore , we are only interested in single particle motion , so the particle - particle interactions and external field contributions can be ignored .
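the pair - wise rpy form quoted above is simple enough to write down directly ; the following python sketch ( a minimal illustration for non - overlapping spheres of equal radius , not the ` hydrolib ` induced - force solver used for the reference values ) assembles the translational mobility blocks for a set of bead positions .

```python
# minimal sketch of the pairwise rotne-prager (rpy) translational mobility,
# valid for non-overlapping equal spheres (r_ij >= 2a). this illustrates the
# approximation quoted above, not the induced-force method used for the se values.
import numpy as np

def rpy_mobility(positions, a, eta):
    """return the 3n x 3n translational mobility matrix mu^tt."""
    n = len(positions)
    mu = np.zeros((3 * n, 3 * n))
    mu_self = np.eye(3) / (6.0 * np.pi * eta * a)
    for i in range(n):
        mu[3*i:3*i+3, 3*i:3*i+3] = mu_self
        for j in range(i + 1, n):
            rij = positions[j] - positions[i]
            r = np.linalg.norm(rij)
            rhat_dyad = np.outer(rij, rij) / r**2
            block = ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * np.eye(3)
                     + (1.0 - 2.0 * a**2 / r**2) * rhat_dyad) / (8.0 * np.pi * eta * r)
            mu[3*i:3*i+3, 3*j:3*j+3] = block
            mu[3*j:3*j+3, 3*i:3*i+3] = block.T
    return mu

# two beads separated by 4 radii along x (arbitrary example values)
pos = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
mu = rpy_mobility(pos, a=1.0, eta=1.0)
print(np.allclose(mu, mu.T))   # the rpy mobility matrix is symmetric
```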
the rigid particles are constructed as rigid agglomerates of non - overlapping spherical beads of equal radius . the bead radius is fixed , while the system size depends on the particle geometry ; the interfacial thickness is the same in all cases . the particle velocity is fixed throughout the simulation and the steady - state forces , which are purely hydrodynamic in nature , are measured in order to obtain the friction coefficients . all results are given in terms of the kinetic form factors , defined such that the translational ( rotational ) form factor expresses the force ( torque ) on the agglomerate relative to the force ( torque ) experienced by a spherical particle of equal volume ( radius ) moving ( rotating ) at the same velocity ( angular velocity ) and under the same boundary conditions . we use the general term friction tensor , or matrix , to refer to both . ( fig . [ f : spherical ] : closed - packed ( hcp and fcc ) arrays of spherical particles ; colors are assigned to the individual layers as a visual guide . ) we begin by measuring the friction coefficients of the spherical agglomerates studied experimentally in ref . the agglomerates were constructed to obtain semi - spherical geometries by gluing together two or more spherical beads of equal radius within a closest - packing arrangement ( using either hcp or fcc lattices ) . some of these spherical agglomerates are shown in fig . [ f : spherical ] . the system parameters are chosen such that the reynolds number is vanishingly small . given the symmetry of the particles , the friction matrix is diagonal , with only two distinct coefficients , for motion parallel and perpendicular to the vertical axis . ( fig . [ f : spherical_k ] : friction coefficients for several closed - packed arrays of spherical particles as a function of size ( number of spheres ) ; results obtained from dns calculations ( circles ) are compared to the exact solution to the stokes equation ( squares ) . ) ( table [ t : spherical ] : friction coefficients for motion parallel and perpendicular to the vertical axis for all the closed - packed arrays , as given by our dns method and the exact solution to the se . )
precise experimental measurements are available for these systems , but they should not be compared directly to our simulation results due to the mismatch in boundary conditions , particularly for the larger agglomerates . the friction coefficients for movement along the vertical axis are given in fig . [ f : spherical_k ] , where they are compared to the exact ( se ) results . the complete set of values for the two independent friction coefficients is given in table [ t : spherical ] . for the larger systems the ` hydrolib ` library does not converge if periodic boundary conditions are used , so no reference data is given . our results show excellent agreement with the available se values , differentiating between nearly identical agglomerates which differ in volume by only a few percent . in all cases , the difference between our results and the reference values is comparable with the error estimates of the actual experiments . we now consider the friction coefficients for a series of non - spherical regular - shaped agglomerates . the simulation protocol is exactly the same as for the spherical agglomerates considered above . in total , we study six different families of configurations , shown schematically in fig . [ f : nonspherical ] , including v - shaped , w - shaped , h - shaped , hexagonal and rectangular arrays . the v- and w - shaped configurations vary in the number of particles as well as the branching angle . for the h - shaped and hexagonal configurations only the branching angle is varied ; the number of particles is fixed to six . for the rectangular arrays , the maximum number of particles in any dimension is four . in total , we have considered a large number of different geometric configurations . details on the construction of the agglomerates , as well as experimental data for the kinematic form factors , can be found in ref . . ( fig . [ f : nonspherical ] : non - spherical agglomerates for various regular shaped geometries . ) the total friction matrix is block diagonal for all the geometrical configurations considered here , except for the case of v- and w - shaped particles , for which a slight coupling between translation and rotation can be observed . for the moment , however , we consider only translational motion .
due to the symmetry of the particles , the friction matrix is again diagonal . we have computed the friction coefficients for motion parallel to the vertical axis for all the systems ; the full friction matrix is only measured for the h - shaped and hexagonal agglomerates . our results are summarized in fig . [ f : nonspherical_k ] , along with the experimental data and the exact solutions for both a periodic and an infinite system . although our dns results should only be compared with the se solutions under equivalent boundary conditions , the periodicity effects for the systems considered here are small , being of the same order of magnitude as the errors in the experiments . the relative error of the dns results ( compared to experiments ) is small for all configurations . a comparison of our results with the exact se values under periodic boundary conditions shows almost perfect agreement , considerably better than that of the experiments with the exact se values for an infinite system . ( fig . [ f : nonspherical_k ] : friction coefficients for various non - spherical regular - shaped agglomerates . ( a ) all the friction coefficients computed with the dns method as a function of the exact se values . ( b ) the friction coefficients for the hexagonal shaped agglomerates as a function of their vertical dimension . ( c ) vertical friction coefficients for rectangular arrays as a function of height . values obtained from the exact solution to the se , for both periodic and unbounded systems , as well as experimental values , are also shown in ( b ) and ( c ) . )
the friction coefficients of all the configurations are plotted in fig . [ f : nonspherical_k_all ] with respect to the exact se value for a system with the same periodic boundary conditions . the results clearly show the accuracy of our method , with agreement to within . detailed results for the hexagonal particles are given in fig . [ f : nonspherical_k_hex ] , where the three form factors are plotted as a function of the vertical dimension of the agglomerate ( ) . the dns results show almost perfect agreement with the exact se results . the differences among the friction coefficients and their dependence on the branching angle are very accurately reproduced . finally , the form factors for the rectangular arrays are plotted in fig . [ f : nonspherical_k_rect ] , as a function of increasing vertical height . as expected , the agreement with the exact results is very good , and we are able to accurately distinguish between particles with the same cross - sectional area . ( color online ) helical structures of varying pitch ( is the particle diameter ) , with constant length ( beads ) and number of turns ( ) . up to now , we have considered symmetric particles for which the translational and rotational motions are only weakly coupled , if at all . here we will analyze chiral structures which exhibit a very strong coupling between the two motions . we consider left - handed helices composed of a fixed number of beads ( ) and turns ( ) , which vary only in their degree of pitch ( the surface - to - surface distance between turns ) . the pitch will then vary in the range , with corresponding to a packed configuration ( hollow cylinder ) , and to a linear chain ( see fig . [ f : helix ] ) . the simulation protocol is slightly modified with respect to the previous systems ; we now consider beads of radius and a simulation box size of . to compute the complete friction matrix , we now require six different simulations for each helical geometry : three at fixed velocity and three at fixed angular velocity ( ) . the friction matrices obtained from the dns simulations , together with the exact se values , for a helix with pitch ( the diameter of the beads ) are given in eqs . . the complex nature of the fluid flow generated by the motion of the body is clearly evident in the form of the friction matrices . the translational ( rotational ) friction matrix ( ) is no longer diagonal , eqs . and , although it remains symmetric , which means that the hydrodynamic force ( torque ) will not be parallel to the direction of motion ( axis of rotation ) . we note that although the off - diagonal components can be up to three orders of magnitude smaller than the diagonal components , the dns method is able to accurately measure all contributions . although this accuracy is slightly reduced when considering the coupling between translation and rotation ( the small off - diagonal components can show large relative errors ) , the dominant components are well reproduced .
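the agreement between the dns and se friction matrices is quantified in the next paragraph by a relative frobenius distance ; as a sketch ( the matrices below are placeholders , not actual results ) , that error measure amounts to the following .

```python
import numpy as np

def relative_frobenius_error(zeta_dns, zeta_se):
    """relative distance between two friction matrices in the frobenius norm."""
    zeta_dns = np.asarray(zeta_dns, dtype=float)
    zeta_se = np.asarray(zeta_se, dtype=float)
    return np.linalg.norm(zeta_dns - zeta_se) / np.linalg.norm(zeta_se)

# placeholder matrices standing in for the dns and stokes-equation results
zeta_se = np.diag([20.0, 20.0, 25.0, 50.0, 50.0, 60.0])
zeta_dns = zeta_se + 0.05 * np.random.default_rng(0).standard_normal((6, 6))
print(relative_frobenius_error(zeta_dns, zeta_se))
```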
to establish a clear estimate of the error , we use the frobenius norm as a measure of the difference between the two matrices . the overall error of the dns method , computed as the relative distance between the dns and se friction matrices , is . friction coefficients for a helix of fixed length and number of turns , as a function of pitch ( is the bead radius ) . ( top ) rotation - rotation and ( bottom ) rotation - translation friction tensor . the results obtained from dns simulations are compared to the exact se values , as well as those obtained using an rpy approximation . finally , to see how the structure of the helix affects its motion , we consider the components of the and friction matrices as a function of the pitch distance . as the pitch is increased and the helix starts to stretch , fluid flow between the turns of the helix will start to increase , while the cross - sectional area of the particle ( plane ) is reduced . the latter will increase the torque felt by the helix ( as the particle must now drag the fluid along ) , while the former will tend to reduce it ( by reducing the moment of inertia ) . these effects give rise to the behavior seen in fig . [ f : helix_pitch ] , where the coefficient shows a maximum at an intermediate pitch value . similar behavior is observed for the coefficient , although the maximum is obtained at a different pitch value , and the coupling between rotation and translation disappears for and , as expected . for comparison purposes , we have also plotted the results obtained using the ewald - summed rpy tensor in fig . [ f : helix_pitch ] . the agreement is surprisingly good for the tensor , but a considerable discrepancy appears in the coefficients of the tensor , as this rpy formulation does not include hydrodynamic effects arising from the particle rotation . we have extended the sp method to apply to arbitrary rigid bodies . for the moment , we have considered only particles constructed as a rigid agglomerate of ( possibly overlapping ) spherical beads , but alternative formulations are straightforward . we have verified the accuracy of our method by performing low reynolds number dns simulations to compute the single - particle friction coefficients for a large variety of rigid bodies . our results were compared with the exact solutions to the stokes equation , showing excellent agreement in all cases . while there are several methods capable of performing these types of calculations , they impose a number of restrictions which can severely limit the types of systems to which they can be applied . our method can capture the many - body hydrodynamic effects very accurately , and it is not restricted to zero reynolds number flow or newtonian host solvents .
in future papers we will consider the lubrication effects of non - spherical particles ( chains , rods , disks , tori , etc . ) , as well as the dynamics at high reynolds number and in the presence of background flow fields . the authors would like to express their gratitude to the japan society for the promotion of science for financial support ( grants - in - aid for scientific research kakenhi no . 23244087 ) and mr . takuya kobiki for his help with the simulations on helical particles .
an improved formulation of the `` smoothed profile '' method is introduced to perform direct numerical simulations of arbitrary rigid body dispersions in a newtonian host solvent . previous implementations of the method were restricted to spherical particles , severely limiting the types of systems that could be studied . the validity of the method is carefully examined by computing the friction / mobility tensors for a wide variety of geometries and comparing them to reference values obtained from accurate solutions to the stokes equation .
ranking the elements of a set is a common daily decision - making activity , such as voting for a political candidate or choosing a consumer product , so there is a huge literature concerning the analysis and interpretation of preference ranking data . however , if we trace back in time , we find that de borda ( 1781 ) was the first author to outline a simple , well thought out method based on a solid argument . borda , as a member of the french academy of sciences , criticised the plurality method of choosing a new academy member and suggested what is now known as the borda count ( bc ) rule to fully rank order ( seriate ) the candidates based on the preferences of the judges . the bc has generated a large literature , and this paper makes use of it as needed . let denote a set of alternatives / candidates / items , and a set of voters / individuals / judges . in this paper we consider linear orderings / rankings / preferences , in which all objects are rank - ordered by the voters according to their levels of desirability . we denote a linear order by a sequence , where means that the alternative is preferred to the alternative . let be the set of all linear orders on ; the cardinality of is . a _ voting profile _ is a function from to , that is , . we denote by the set of permutations of the elements of the set . the borda score is a function from to , where for a linear ordering borda assigned to the element a score of , because it is preferred to the other alternatives , or equivalently it is the most preferred alternative . we denote , where is a matrix having rows and columns , and designates the borda score of the judge 's preference for the alternative . the average borda score of the elements of is , where is a column vector of ones having coordinates . borda 's count rule ( bc ) seriates / orders the elements of the set according to their average scores : means alternative is preferred to alternative . we define the reverse borda score to be a function from to , where for a linear order we assign to the element a score of . we denote , where is a matrix having rows and columns , and designates the reverse borda score of the judge 's preference for the alternative . the average reverse borda score of the elements of is . we note that this example has two aims : first , to make the notation clear ; second , to show that traditional well - established methods for rank data , such as distance - based and latent class based methods , may mask groups of small sizes in mixture models . table 1 introduces a well known data set first analyzed by croon ( 1989 ) ; the data derive from a german survey of 2262 rankings of four political items concerning inglehart 's ( 1977 ) theory of postmaterialism . the four items are : ( a ) maintaining order in the nation ; ( b ) giving people more to say in important government decisions ; ( c ) fighting rising prices ; ( d ) protecting freedom of speech . inglehart advanced the thesis that there is a shift in political culture in europe ; that is , some younger europeans have different political values than their fathers : he named the elder europeans materialists , because after the first and second world wars they valued mostly material security ( item c ) and domestic order ( item a ) ; while he named some of the younger generation postmaterialists , because they valued much more human rights and political liberties ( item d ) and democracy ( item b ) . so in this example the voting profile is displayed in the first two columns of table 1 ; similarly , table 1 displays the borda scores and the reverse
borda scores .the average bc score and the average reverse bc score show that , the 2262 voters generally rank materialist items postmaterialist items . + item & observed & & + ordering & & & b & c & d & & b & c & d + a & & & 2 & 1 & 0 & & 1 & 2 & 3 + a & & & 2 & 0 & 1 & & 1 & 3 & 2 + a & & & 1 & 2 & 0 & & 2 & 1 & 3 + a & & & 0 & 2 & 1 & & 3 & 1 & 2 + a & & & 1 & 0 & 2 & & 2 & 3 & 1 + a & & & 0 & 1 & 2 & & 3 & 2 & 1 + b & & & 3 & 1 & 0 & & 0 & 2 & 3 + b & & & 3 & 0 & 1 & & 0 & 3 & 2 + b & & & 3 & 2 & 0 & & 0 & 1 & 3 + b & & & 3 & 2 & 1 & & 0 & 1 & 2 + b & & & 3 & 0 & 2 & & 0 & 3 & 1 + b & & & 3 & 1 & 2 & & 0 & 2 & 1 + c & & & 1 & 3 & 0 & & 2 & 0 & 3 + c & & & 0 & 3 & 1 & & 3 & 0 & 2 + c & & & 2 & 3 & 0 & & 1 & 0 & 3 + c & & & 2 & 3 & 1 & & 1 & 0 & 2 + c & & & 0 & 3 & 2 & & 3 & 0 & 1 + c & & & 1 & 3 & 2 & & 2 & 0 & 1 + d & & & 1 & 0 & 3 & & 2 & 3 & 0 + d & & & 0 & 1 & 3 & & 3 & 2 & 0 + d & & & 2 & 0 & 3 & & 1 & 3 & 0 + d & & & 2 & 1 & 3 & & 1 & 2 & 0 + d & & & 0 & 2 & 3 & & 3 & 1 & 0 + d & & & 1 & 2 & 3 & & 2 & 1 & 0 + & & & 1.10 & 2.05 & 0.88 & & & & + & & & & & & & 1.90 & 0.95 & 2.12 + table 2 provides a statistical summary of four methods of data analysis of table 1 .the first method suggested by inglehart is _ deductive and supervised _ ; it opposes to the other three methods , which are _ inductive , unsupervised and aim to validate inglehart s theory of postmaterialism , _ see also moors and vermunt ( 2007 ) .the other three methods are mixture models and they attempt to see if this data set confirms inglehart s theory of postmaterialism .the first one is by croon ( 1989 ) , who used a stochastic utility ( su ) based latent class model ; the second one by lee and yu ( 2012 ) , who used a weighted distance - based footrule mixture model ; and the third one is based on taxicab correspondence analysis ( tca ) , which is the topic of this paper . here , we provide some details on the statistics displayed in table 2 .\a ) inglehart ( 1977 ) _ apriori _ classified the respondents into three groups : materialists , postmaterialists and mixed .his method of classification is based on partial rankings based on the first two preferred choices . here , we discuss each group separately .materialists are defined by their response patterns , where the pair of materialist items are always ranked above the pair of postmaterialist items ; they make of the voters . in the ideal casewe expect to have the average bc scores for the four items to be : and the corresponding observed values , displayed in table 2 , are ( very near to the ideal ones ) : and postmaterialists are defined by response paterns ( ) , where the pair of postmaterialist items are always ranked above the pair of materialist items ; they make of the voters .the comparison of ideal and observed average bc scores , displayed in table 2 , show that : is very near to while is somewhat near to .the last group is named mixed by inglehart and is composed of the remaining sixteen response patterns ; they make of the voters . 
in the ideal case we expect the average bc scores for the four items to be : ; the corresponding observed values , displayed in table 2 , are ( somewhat near to the ideal ones ) : and . furthermore , based on the global homogeneity coefficient ghc ( in % ) : and . inglehart 's mixed group is not globally homogenous ; that is why we did not calculate its ghc index . the development of the ghc index and its interpretation will be presented in section 3 . it is important to note that the underlying hypothetical conceptual - structural model for this data is a mixture composed of three specific groups ( materialist , postmaterialist and mixed ) , which are explicitly characterized by inglehart . b , c ) given that croon 's su model and lee and yu 's weighted distance - based footrule mixture model produced globally very similar groups , we present them together . a summary of croon 's analysis can also be found in skrondal and rabe - hesketh ( 2004 , p.404 - 406 ) , lee and yu ( 2012 ) and alvo and yu ( 2014 , p.228 - 232 ) . sections b and c of table 2 are taken from alvo and yu ( 2014 , p. 230 ) , who present a summary and a comparison of results from croon ( 1989 ) and lee and yu ( 2012 ) . the interpretation of the estimated parameters of the su model in table 2 is similar to the average borda score : for each group the score shows the intensity of the preference for that item in an increasing order . there are two kinds of estimated parameters in lee and yu 's weighted distance - based footrule mixture model : the modal response pattern for each group , shown in the last column ; and the weight of an item , which reflects our confidence in the ranked position of the item in the modal response pattern , a higher value representing higher confidence . both methods find a mixture of three groups similar in contents : the first two groups represent materialists with and of the voters for the weighted footrule mixture model , and and of the voters for the su mixture model ; and the third group represents postmaterialists with of the voters for the weighted footrule mixture model , and for the su mixture model . lee and yu 's ( 2012 ) conclusion is : _ based on our grouping , we may conclude that inglehart 's theory is not appropriate in germany_. this assertion shows that the two well - established traditional methods masked the existence of the mixed group put forth by inglehart . d ) our approach , based on taxicab correspondence analysis ( tca ) , which is an l1 variant of correspondence analysis ( ca ) , discovers a mixture of three globally homogenous groups as advocated by inglehart : materialists with of the voters , postmaterialists with , and mixed with of the voters . furthermore , there is an outlier response pattern ( ) representing of the voters . so contrary to lee and yu 's ( 2012 ) assertion , _ our results validate inglehart 's theory of postmaterialism for this data set_. probably , this is due mainly to the fact that tca is a directional method especially useful for spherical data : rank data with all its permutations is spherical by nature ( graphically , it is represented by a permutahedron ; see marden ( 2005 , figure 2.4 , page 11 ) or benzécri ( 1980 , p.303 ) ) . furthermore , based on the global homogeneity coefficient ghc ( in % ) : , and , we see that the materialist voters form a much more globally homogenous group than the voters in the mixed group ; and the voters in the mixed group are much more homogenous than the voters in the postmaterialist group .
furthermore , our analysis clearly shows why the postmaterialists ( they have three poles of attraction , as defined by marden ( 1995 , ch . 2 ) or benzécri ( 1966 , 1980 ) ) are much more heterogenous than the materialists ( they have two poles of attraction ) . more details on the local heterogeneities of each group will be presented later on in section 4 . table 2 ( the numerical columns , sample % and item - level values , were not recovered from the source ) : ( a ) inglehart 's a priori classification , with groups materialist , postmaterialist and mixed ; ( b , c ) the su and weighted footrule mixture models , each with groups materialist 1 , materialist 2 and postmaterialist ; ( d ) tca , with groups materialist , postmaterialist , mixed and outlier . the traditional methods of finding mixture components of rank data are mostly based on distance and latent class models ; these models may mask groups of small sizes , probably due to the spherical nature of rank data . in this paper , our approach diverges from the traditional methods , because we discuss the concept of a mixture for rank data essentially in terms of its globally homogenous group components . we use the law of contradiction to identify globally homogenous components . for instance , by tca we were able to discover that the data set in table 1 is a mixture of three globally homogenous group components ( materialist , postmaterialist and mixed ) ; furthermore , each group component can be summarized by its average borda count ( bc ) score as its consensus ranking ; this is the first step in our procedure . in the second step , we look at local heterogeneities , if there are any , given the globally homogenous component . this two - step procedure produces a finer visualization of rank data ; it is done via the exploratory analysis of rank data by taxicab correspondence analysis with the nega coding . we also introduce a new coefficient of global homogeneity , ghc . ghc is based on the first taxicab dispersion measure : it takes values between 0 and 100% , so it is easily interpretable . ghc measures the extent of crossing of the voters ' scores between the 2 or 3 blocks of the seriation of the items , where the borda count statistic provides the consensus ordering of the items on the first axis . furthermore , to our knowledge , this is the first time that a tangible method has been proposed that explicitly identifies outliers in rank data : neither the recent monograph by alvo and yu ( 2014 ) nor the much - cited monograph by marden ( 1995 ) discusses the important problem of identifying outliers in rank data . we mention two publicly available r packages that we used : _ rankclustr _ by jacques , grimonprez and biernacki ( 2014 ) , and _ pmr _ ( probability models for ranking data ) by lee and yu ( 2013 ) . the contents of this paper are organized as follows : section 2 reviews the tca approach for rank data ; section 3 develops the new global homogeneity coefficient ghc ; section 4 presents the analysis of some well - known rank data sets by tca ; and finally in section 5 we conclude with some remarks . we just want to mention that there is a large literature in social choice theory or social welfare theory studying the properties of the bc .
here , we mention some important contributions according to our personal readings .young ( 1974 ) presents a set of four axioms that characterize uniquely bc ; see also among others , saary ( 1990a ) and marchant ( 1998 ) .saari ( 1990b ) distinguishes two levels of susceptiblity of manipulation of voting theories : _macro_- where a large percentage of voters- , and _micro_- where a small percentage of voters - attempt to change the results of the elections . in data analysis ,a macro manipulation is equivalent to the existence of a mixture of groups of voters . while , a micro manipulation is equivalent to the existence of few outliers in the globally homogenous set of voters .further , saari concludes that among all positional voting systems , bc is the least susceptible to micro manipulation ; this assertion seems fully true in this paper .saari ( 1999 ) proves that bc is the only positional voting method that satisfies the property of _ reversal symmetry , _ which states that if everyone reverses all their preferences , then the final outcome should also be reversed .this property plays an important role in the nega coding of a rank data set before the application of tca .choulakian ( 2014 ) incorporates the bc to interpret the first principal factor of taxicab correspondence analysis ( tca ) of a nega coded rank data , see theorem 1 in the next section .additionally , this essay further extends and complements the ideas of global homogeneity and local heterogeneities for rank data .results of this section are taken from choulakian ( 2006 , 2014 ) .we start with an overview of tca of a contingency table ; then review the corresponding results concerning rank data .let be a contingency table cross - classifying two nominal variables with rows and columns , and be the associated correspondence matrix with elements where is the sample size .we define as usual , the vector the vector , and a diagonal matrix having diagonal elements and similarly let in tca we compute the following quadruplets for the two spaces , for : in the row space of and in the column space of . given that in ca and tca , the row and column spaces are dual to each other , we name the pair of vectors ( and principal axes , the pair ( and basic vectors of coordinates , the pair ( and vectors containg tca factor scores , and the nonnegative scalar the tca dispersion measure . the relations among the seven terms will be described in the next two subsections .tca is computed in 2 steps : in the first step we compute the taxicab singular value decomposition ( tsvd ) of as a function of for which is a stepwise matrix decomposition method based on a particular matrix norm , see below equation ( 3 ) . 
in the 2nd step , we reweight the pair of basic vectors by respective weights of the columns , and the rows , to obtain the vectors of factor scores for .let be the residual data matrix at the iteration , where , for tsvd consists of maximizing the norm of the linear combination of the columns of the matrix subject to norm constraint , where the norm of a vector is defined to be is the norm ; more precisely , it is based on the following optimization problem equivalently , it can also be described as maximization of the norm of the linear combination of the rows of the matrix ( 1 ) is the dual of ( 2 ) , and they can be reexpressed as matrix operator norms is a well known and much discussed matrix norm related to the grothendieck problem ; the inequality in theorem 2 section 3 of this paper sheds further insight into grothendieck s theorem ; see pisier ( 2012 ) for a comprehensive and interesting history of grothendieck s theorem with its many variants .equation ( 3 ) characterizes the robustness of the method , in the sense that , the weights affected to the columns ( similarly to the rows by duality ) are uniform the principal axes , and are computed by is evident that for represents a column vector of ones of length the two principal axes and are named trivial , and they are used only to center the rows and the columns of . let represent the tsvd coordinates of the rows of by projecting the rows of on the principal axis , and represent the tsvd coordinates of the columns of by projecting the columns of on the principal axis .these are given by and in particular , by ( 6,7,8 ) , we have for ( 7 ) are named transition formulas , because and and , and are related by and if otherwise . to obtain the tsvd row and column coordinates and and corresponding principal axes and , we repeat the above procedure on the residual dataset note that the because by ( 6 ) through ( 9 ) , by induction , implies that for in particular we see that for is , the basic vectors and are centered .the data reconstitution formula for the correspondence matrix as a function of the basic vectors for with the dispersion measures is in tca of both basic vectors and for satisfy the equivariability property , as a consequence of equations ( 8,12 ) , see choulakian ( 2008a ) .this means that and are balanced in the sense that \nonumber \\ & = & -\sum_{i}\left [ a_{\alpha } ( i)|a_{\alpha } ( i)<0\right ] \\ & = & \sum_{j}\left [ b_{\alpha } ( j)|b_{\alpha } ( j)>0\right ] \nonumber \\ & = & -\sum_{j}\left [ b_{\alpha } ( j)|b_{\alpha } ( j)<0\right ] .\nonumber\end{aligned}\ ] ] in tsvd , the optimization problems ( 3 ) , ( 4 ) or ( 5 ) can be accomplished by two algorithms .the first one is based on complete enumeration ( 3 ) ; this can be applied , with the present state of desktop computing power , say , when the second one is based on iterating the transitional formulas ( 7 ) , ( 8) and ( 9 ) , similar to wold s ( 1966 ) nipals ( nonlinear iterative partial alternating least squares ) algorithm , also named criss - cross regression by gabriel and zamir ( 1979 ) .the criss - cross nonlinear algorithm can be summarized in the following way , where is a starting value : step 1 : , and step 2 : and step 3 : if go to step 1 ; otherwise , stop .this is an ascent algorithm , see choulakian ( 2016 ) ; that is , it increases the value of the objective function at each iteration .the convergence of the algorithm is superlinear ( very fast , at most two or three iterations ) ; however it could converge to a local maximum ; so we 
restart the algorithm times using each row of as a starting value .the iterative algorithm is statistically consistent in the sense that as the sample size increases there will be some observations in the direction of the principal axes , so the algorithm will find the optimal solution .a simple reweighting of the basic coordinates for produces the vectors that contain tca factor scores of the rows and the columns and ( 8) becomes similar to ca , tca satisfies an important invariance property : columns ( or rows ) with identical profiles ( conditional probabilities ) receive identical factor scores . moreover , merging of identical profiles does not change the result of the data analysis : this is named the principle of equivalent partitioning by nishisato ( 1984 ) ; it includes the famous distributional equivalence property of benzcri , which is satisfied by ca . by ( 13 and 15 ) , one gets the data reconstitution formula in tca ( the same formula as in ca ) for the correspondence matrix as a function of the factor coordinates for with the eigenvalues .\]]the visual maps are obtained by plotting the points for or for for correspondence analysis ( ca ) admits a chi - square distance interpretation between profiles ; there is no chi - square like distance in tca .fichet ( 2009 ) described it as a general scoring method . in the sequelwe suppose that the theory of correspondence analysis ( ca ) is known .the theory of ca can be found , among others , in benzcri ( 1973 , 1992 ) , greenacre ( 1984 ) , gifi ( 1990 ) , le roux and rouanet ( 2004 ) , murtagh ( 2005 ) , and nishisato ( 2007 ) ; the recent book , by beh and lombardi ( 2014 ) , presents a panoramic review of ca and related methods .further results on tca can be found in choulakian et al .( 2006 ) , choulakian ( 2008a , 2008b , 2013 ) , choulakian and de tibeiro ( 2013 ) , choulakian , allard and simonetti ( 2013 ) , choulakian , simonetti and gia ( 2014 ) , and mallet - gauthier and choulakian ( 2015 ) . in the sequel, we use the same notation as in choulakian ( 2014 ) .let for and represent the borda scores for rank data , where takes values similarly , represent the reverse borda scores .we note that and contain the same information . to incorporate both in one data set , there are two approaches in correspondence analysis literature . in the first approach we vertically concatenate both tables , that is ,we double the size of the coded data set by defining ... in the second approach , we summarize by its column total , that is , we create a row named then we vertically concatenate to , thus obtaining the size of is choulakian ( 2014 ) discussed the relationship between tca of and tca of we will consider only the application of tca to , because this will show if the rank data set is globally homogenous or heterogenous .so let the correspondence table associated with note that is a matrix of size and is a row vector of size .we denote the sequence of principal axes , and the associated basic vectors , tca vectors of principal factor scores and dispersion measures by for and .the following theorem is fundamental , and it relates the average bc score of items , to the first principal tca factor score . 
* theorem 1 ( * ) : * ( * this is theorem 2 in choulakian ( 2014 ) * ) * : properties a , b , c are true iff , where is the first principal axis of the columns of \a ) the first principal column factor score of the items is an affine function of the average bc score that is , \b ) the first nontrivial tca dispersion measure equals twice taxicab norm of the row vector \c ) consider the residual matrix then that is , the nega row is the null row vector .note that in theorem 1 we have eliminated sign - indeterminacy of the first principal axis , by fixing property a implies that the first principal factor score of the items , can be interpreted as the borda ranking of the items .property b shows that the nega row of accounts for 50% of the first nontrivial taxicab dispersion property c shows that the residual matrix does not contain any information on the heavyweight nega row .properties b and c imply that the first nontrivial factor is completely determined by the nega row , which plays a dominant heavyweight role , see choulakian ( 2008a ) .such a context in ca is discussed by benz**cri ( 1979 ) using asymptotic theory , and in dual scaling by nishisato ( 1984 ) , who names it forced classification .the importance of applying tca to nega coded dataset , * r* , and not to the original data set * r * stems from the following two considerations : first , if there are two columns in * r * such that for and then the columns and have identical profiles , and by the invariance property of tca they can be merged together , which will be misleading .second , as discussed by choulakian ( 2014 ) , the interpretation of tca maps of * r* is based on the law of contradiction , which will be used recursively to find the mixture components .let be a statement and and its negation ; then the law of contradiction states that and oppose each other : they can not both hold together , see for instance eves ( 1990 ) .we shall use the law of contradiction as a basis for the interpretation of the maps produced by tca of * r* in the following way .first , we recall that there are items , and we represented the borda score of an item by the voter for and , by and its reverse borda score by by the law of contradiction , and oppose each other ; which in its turn also implies that and oppose each other because the contains or they are not associated at all if .we let represent the first tca vector of factor scores of the rows . 
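as a purely illustrative reading of the borda , reverse borda and nega coding described above ( the helper names and the input format are our own , not the paper 's notation ) , the nega - coded table can be built by appending to the borda score matrix a single row holding the column totals of the reverse scores :

```python
import numpy as np

def borda_scores(rankings, n_items):
    """borda score matrix R: entry (voter, item) = number of items ranked below it.
    `rankings` lists each voter's ordering from most to least preferred,
    using item indices 0..n_items-1 (an illustrative input format)."""
    R = np.zeros((len(rankings), n_items), dtype=int)
    for v, order in enumerate(rankings):
        for position, item in enumerate(order):
            R[v, item] = n_items - 1 - position
    return R

def nega_coded(R):
    """append the nega row: the column totals of the reverse borda scores."""
    R_star = (R.shape[1] - 1) - R          # reverse borda scores
    return np.vstack([R, R_star.sum(axis=0)])

# toy profile over items 0..3 (read them as a, b, c, d)
rankings = [(0, 1, 2, 3),                  # a > b > c > d
            (2, 0, 3, 1)]                  # c > a > d > b
R = borda_scores(rankings, n_items=4)
print(R)                                   # [[3 2 1 0] [2 0 3 1]]
print(R.mean(axis=0))                      # average borda scores (the bc)
print(nega_coded(R))                       # last row is the nega row
```

with the nega - coded table in hand , theorem 1 can be checked numerically on any globally homogenous example : the first tca factor scores of the items should be an affine function of the column means of the borda table .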
for the interpretation of the results by tca of * r* we can have the following two complementary scenarios : scenario 1 happens when by the law of contradiction ,the first principal dimension is interpretable and it shows the opposition between the borda scores of the items to their reverse borda scores summarized by .if scenario 1 happens , then we will say that the data set is globally homogenous , because all voters have positive first tca factor scores ; that is , they are directionally associated because for all now by property c of theorem 1 , the nega row disappears and do not contribute to the higher dimensions ; thus the higher dimensions will exhibit either random noise or local heterogeneities of the voters represented by their response patterns .scenario 2 is the negation of scenario 1 , it corresponds to the results of tca of * y* are not interpretable by the law of contradiction : because some voters , say belonging to the subset v are directionally associated with the nega ; so to obtain interpretable results as described in scenario 1 , we eliminate the subset of voters v , and repeat the analysis till we obtain scenario 1 . if the number of deleted voters in v is small , we consider them outliers ; otherwise , they constitute another group(s ) of voters .we have the following * definition 1 * : if scen1 holds , then we name the rank data * r * or * r* globally homogenous . _it is of basic importance to note that using r_ _ , only globally homogenous data are interpretable by the law of contradiction . _rank data is much more structured than ratings data ; and this aspect will be used to propose a global homegeneity coefficient ( ghc ) of rank data .we recall that the nega coded rank data and associated correspondence matrix .we note the following facts : _ fact 1 _ : the row sum of the elements of are : for rows and for the nega row ( or row ) . from which we get the total sum of elements of to be .so , the marginal relative frequency of the row is for , and , the marginal relative frequency of the nega row is .fact 2 _ : the column sum of the elements of are : for columns so , the marginal relative frequency of the column is for _ fact 3 _ : using facts 1 and 2 , we see that the first residual matrix elements of : \end{aligned}\]]and ( 18 ) states that is row centered with respect to average ranking , because for and ; it is also column centered .we have the following * proposition 1 : * for a globally homogenous rank data , .the proofs of new results are in the appendix .young ( 1974 ) presented a set of four axioms that characterize uniquely bc rule .his axiom 4 , named _faithfulness , _ states that when there is only one voter , if the relation that he uses to express his preferences is so simple that one result seems the only reasonable one , the result of the method must be that one .the _ faithfulness _axiom was the inspiration of this section . 
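returning to scenario 2 above , the recursive elimination it describes can be sketched as follows ; ` tca_first_axis ` is a hypothetical helper returning the voters ' first - axis factor scores for a nega - coded block , and the outlier rule ( a small fixed count ) is only a stand - in for the visual judgement used in the paper .

```python
import numpy as np

def peel_homogeneous_groups(R, tca_first_axis, outlier_count=1):
    """split the voters of a borda table R into globally homogeneous groups.
    a block is accepted (scenario 1) when no voter has a negative first-axis
    score; otherwise (scenario 2) the negatively scored voters are removed
    and both parts are re-analyzed."""
    groups, outliers = [], []
    pending = [np.arange(R.shape[0])]           # work on index sets of voters
    while pending:
        idx = pending.pop()
        if idx.size <= outlier_count:           # very small block: tag as outliers
            outliers.extend(idx.tolist())
            continue
        scores = np.asarray(tca_first_axis(R[idx]))
        negative = idx[scores < 0]
        if negative.size == 0:                  # scenario 1: globally homogenous
            groups.append(idx)
        else:                                   # scenario 2: split and repeat
            pending.append(idx[scores >= 0])
            pending.append(negative)
    return groups, outliers
```

with an actual tca routine supplied for ` tca_first_axis ` , this mirrors the step - by - step peeling applied to the data sets of section 4 , where each accepted block is then summarized by its average borda score .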
by the invariance property of tca ,that is , merging of identical profiles does not change the results of the data analysis , a _ faithfully homogenous group _ is equivalent to the existence of one response pattern , and its borda score values for a complete linear order of items can be represented by , without loss of generality by reordering of the items .then the nega coded correspondence table , will have only two rows and columns will be of rank 1 ; that is , there will be only one principle factor , for which we note its taxicab dispersion measure by for a fixed finite integer value of following result gives the value of explicitly .* theorem 2 ( faithfully homogenous group ) * : \a ) for or for , then we define \b ) the first and only factor score of the two rows are and \c ) the first and only factor score of the item is for in particular , we see that and for are equispaced .so for an odd number of items for , we have let denote the cardinality of a set , that is , the number of elements in .the next definition formalizes the partition of a set of items obtained in theorem 1 . *definition 2 * : for a globally homogenous rank data set , we define a partition of a set of items to be _ faithful _ if a ) for an even number of items and the first tca axis divides the set of items into 2 blocks such that where and \b ) for an odd number of items for the first tca axis divides the set of items into 3 blocks where and , and * remarks 1 * \a ) the bc for a faithfully homogenous group is , and the pearson correlation _corr(_ as in theorem 1a .\b ) theorem 2 concerns only one group .theorem 4 generalizes theorem 2 to multiple faithfully homogenous subgroups ; however in the multiple case only parts a and b of theorem 2 are satisfied and not part c. the maximum number of multiple faithfully homogenous subgroups is for or and which represents the number of within ( intra ) block permutations of the rankings .the next result shows that is an upper bound for the first tca dispersion measure * theorem 3 * : for a globally homogenous rank data set we have * definition 3 : * based on theorem 3 we define for a globally homogenous rank data the following global homogeneity coefficient takes values between 0 and 100 . in real applicationswe seldom find the value of .however , it may approach 100% as in the potato s rank data set considered later on .* theorem 4 * : if and only if there is a faithful partition of the items and the borda scores of all voters are intra ( within ) block permutations .* corollary 1 * : if and only if for .the following result complements proposition 1 .* corollary 2 : * for a globally homogenous rank data , for . * definition 4 * : a voter is named faithful if its first factor score in the next subsection we explain these results . 
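a small helper reflecting our reading of definitions 2 - 4 ( the maximum dispersion of theorem 2 is taken as an input rather than recomputed , and the sign - based block split is our own paraphrase ) :

```python
import numpy as np

def ghc(tau_1, tau_1_max):
    """global homogeneity coefficient in percent (definition 3)."""
    return 100.0 * tau_1 / tau_1_max

def first_axis_blocks(item_scores):
    """split the items into the blocks induced by the sign of their first-axis
    factor scores; items with a zero score form the middle block of the odd case."""
    s = np.asarray(item_scores, dtype=float)
    return np.where(s > 0)[0], np.where(s == 0)[0], np.where(s < 0)[0]

# illustrative use: a 2-block split of 8 items, 4 against 4
upper, middle, lower = first_axis_blocks([3.1, 2.0, 1.2, 0.4, -0.5, -1.1, -2.2, -3.0])
print(len(upper), len(middle), len(lower))     # 4 0 4
```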
*we consider the following artificial example with two voters and eight items .[ cols= " < , > , > , > , > , > , > , > , > " , ] for this example we have : , and so figure 2 displays the tca biplot .we note : first , the first axis subdivides the items into 2 faithful blocks and additionaly , the ordering of the items on the first axis is given by the borda count where we see that second , voters 1 and 2 are confounded , because they have the same profile ; further so voters 1 and 2 are faithful .third , which is smaller in value than because voter 3 scores cross the two faithful blocks : score 4 has crossed the block to the block and score 3 has crossed the block to the block .the second axis will represent local heterogeneity or random error .* as an application of theorem 4 , we consider inglehart s a priori classification summarized in part a in table 2 .we mentioned that , inglehart s _ materialist _ group , composed of 4 response patterns , has and the _ postmaterialist _ group , also composed of other 4 response patterns , has also ; there is no value of ghc for the _ mixed _ group , because the _ mixed _ group is not globally homogenous .let us consider inglehart s _ materialist _ group where the pair of materialist items are always ranked above the pair of postmaterialist items ; from table 1 we have ( where we permuted the positions of items c and b): see that the subsets and form a faithful 2 blocks partition of the four items with no crossing between the blocks , so theorem 4 applies and * we consider the following four orderings found in table 1 with their frequencies : , , and where abcd137 represents the ordering with its frequency of 137 .figure 3 displays the tca biplot .we can summarize the data analysis by the following observations concerning the first axis : first , by definition 1 the rank data is globally homogenous , because the factor scores of the 4 response patterns are positive on the first axis .second , item opposes to the items so the partition of the items is not faithful .evidently , this means that all voters ranked item a as their first choice ; however there is considerable heterogeneity concerning the rankings of the other 3 items these local heterogeneities will appear on the second and third axes ( not shown ) .third , for while by theorem 2 so .this example shows that the condition for is not sufficient for * the above discussion shows that : * ghc takes into account the following two aspects of rank data : a ) how the items are partitioned into blocks by the first axis .b ) the extent of crossing of scores of voters among the partitioned blocks , where the borda count statistic provides consensus ordering of the items on the first axis*. 
means that the 2 or 3 blocks are faithful and for all voters their orderings of the items are intra block permutations with no crossing between the blocks .* example 1 * : we consider tca of the potatos rank data set , found in vitelli et al .( 2015 , table a2 ) ; it has assessors and potatos .the first four tca dispersion measures are : , and ; by theorem 2 , so , which is very high .figure 4 displays the tca biplot of the assessors ( as points ) and the true ranks of the potatos as provided in their paper .first , on the first axis we observe a _faithful _ partition of the 20 potatos into 2 blocks : ten potatos t11 to t20 are found on the left side of the first axis , and ten potatos t1 to t10 are found on the right side of the first axis .however , potatos numbered 5 to 10 are not correctly ranked ( the true ranks of the 3 pairs , and are permuted ) .the distribution of the majority of the true ranks of the potatos on the first axis seem uniform .second : the first factor score of the assessors * * for has values ( 4 times ) , ( 2 times ) , ( 3 times ) and ( 3 times ) , which are highly clustered around .these values show that only three assessors are faithful ; the other 9 assessors scores have some inter block crossings ; by examining the signs of , the crossings happened between the subsets of items and , which are near the origin ; and this is the reason that did not attain its upper value of 100% but it approached it .third , given that the rank data is globally homogenous , the bc vector reflects the consensus linear ordering of the potatos on the first axis , because by theorem 1a , fourth , the data is almost unidimensional , because approaches 0 and this is apparent in figure 1 . in conclusion, we can say that this is a nice ideal real data set almost faithfully homogenous with some random sampling error .the notion of global homogeneity and local heterogeneity will further be explained by analyzing few real data sets .first we provide some details concerning part d of table 2 . for the other data sets we provide just essential aspects . herewe describe the six consecutive steps for the analysis of the rank data set in table 1 .we recall the the description of the four items : ( ) maintaining order in the nation ; ( ) giving people more to say in important government decisions ; ( ) fighting rising prices ; ( ) protecting freedom of speech ..... step 1 : tca of the full data set ....figure 5 displays the biplot of the complete data set , where to each response pattern its observed frequency is attached ; for instance the first response pattern in table 1 , with observed frequency of 137 , is labeled as abcd137 in the biplot . by the law of contradiction figure 4is not interpretable , because there are 16 response patterns associated with nega ; we recall that the point nega contains the negations of all response patterns .note that the 8 response patterns that appear on the second axis have very small negative values on the first axis .so we eliminate these 16 response patterns , which have negative first factor scores , and apply tca to the remaining 8 response patterns in step 2 ..... step 2 : tca of the subset composed of 8 response patterns .... 
the application of tca to the nega coded subset of weighted 8 response patterns produces the following tca dispersion values : and .figures 6 and 7 summarize the data .figure 6 represents the biplot of the principal plane of the 8 response patterns ; it has very clear interpretation .\a ) the first factor opposes the nega row to the 8 response patterns which represent the _ materialists _ : the 8 response patterns form a globally homogenous group of voters , and they represent the voters in the sample ; further they can be summarized by their average bc score , because by theorem 1a , note that contains the first factor coordinates of the four items plotted in figure 6 . by theorem 2 , so the global homogeneity coefficient of this group is , which is relatively high . on the first axis we observe the faithful partition of the items into 2 blocks , and ; but the first factor scores of the voters have two values , and .this implies that the response patterns cadb294 , cabd330 , acdb255 and acbd309 are faithful ; while there are inter block crossings of scores of the response patterns cdab70 , cbad117 , adcb93 and abcd137 .this last assertion is evident .\b ) the nega point contributes only to the first axis ; and by theorem 1c , it is eliminated from the higher axes : in figure 7 it is found at the origin .\c ) given that the 2nd and 3rd tca dispersion values are almost equal and relatively high , and , it is worthwhile to examine the principle plane made up of axes 2 and 3 , represented in figure 7 : it is evident that there are two principle branches dominated by items a and c respectively ; these two branches represent _ local _ _ heterogeneities _ , in the sense that item c opposes to item a on both axes , which are both qualified as materialist items .the two groups postmaterialist1 and postmaterialist2 in table 2 , which appeared as individual groups in croon s su mixture model and lee and yu s weighted distance based model , are the two local branches ( subdivisions ) of the materialists in the tca approach .these two branches are similar to marden ( 1995 , chapter 2 ) s the poles of attraction for items a and c. .... step 3 : tca of the 16 response patterns .... we apply tca to the 16 remaining response patterns that were associated with the nega point in figure 5 , and we get figure 8 , which by the law of contradiction is not interpretable : so we eliminate the 6 response patterns , which are associated with the nega point in figure 8 ; and apply tca to the remaining 10 response patterns in step 4 ( step 4 is similar to step 2 ) ..... step 4 : tca of the 10 response patterns .... 
tca dispersion measures for this case are : and .figures 9 and 10 totally reflect the data .figure 9 has the following interpretation : a ) the first factor represents the _ postmaterialists _ with _ _ which is low for the following two reasons : the partition of the four items into two blocks , and is unfaithful by the first axis and there are a lot of inter - block crossings by the response patterns .\b ) in figure 10 , the points nega and item a are found on the origin : they do not contribute to principal axes 2 and 3 .\c ) in figure 10 there are three principle branches dominated by items b , d and c , respectively ; these three branches represent _ local _ _ heterogeneities _ , but the two branches starting with b ( giving people more to say in important government decisions ) and d ( protecting freedom of speech ) are more important than the smaller branch starting with item c ( fighting rising prices ) . .... step 5 : tca of the 6 response patterns .... figure 11 represents the tca map of the last six patterns deleted in step 4 . in this plot, we identify the response pattern dacb30 as an outlier because its proportion is very small ; so we eliminate it ..... step 6 : tca of the remaining 5 response patterns .... tca dispersion measures are : and . in figure 12the first factor represents the _mixed _ group .note that on the first axis , the mixed items oppose to the mixed items ; similarly , on the second axis , the mixed items oppose to the mixed items . here , we continue the analysis of croon s political goals data by reducing it to the first two choices , that is considering only partial rankings , as done by inglehart .inglehart s approach is based on the first two choices of the four items ; thus the 24 response patterns of table 1 is reduced to 12 response patterns .for example , the first two response patterns and with respective frequencies 137 and 29 , are collapsed into one response pattern with frequency of 166 , where * represents either c or d. now the borda score of that is the items c and d take equal scores .the tca of the partial ranking table produces only two groups : materialists with two branches ( poles of attraction ) and postmaterialists with two branches .figures 13 through 16 display these results . on these figuresthe label , for instance cb186 , represents the partial order with its frequency of 186 .it is obvious that there is loss of information by reducing the complete rankings into partial rankings .roskam preference data of size 39 by 9 was analyzed by de leeuw ( 2006 ) and de leeuw and mair ( 2009 ) and can be downloaded from their package homals in r. 
in 1968 , roskam collected preference data where 39 psychologists ranked all 9 areas of the psychology department at the university of nijmengen in the netherlands .the areas are : soc = social psychology , edu = educational and developmental psychology , cli = clinical psychology , mat = mathematical psychology and psychological statistics , exp = experimental psychology , cul = cultural psychology and psychology of religion , ind = industrial psychology , tst = test construction and validation , and lastly phy = physiological and animal psychology .de leeuw ( 2006 ) compared linear and nonlinear principal components analysis ( pca ) approaches ( figures 4.6 and 4.7 in his paper ) , and concluded that the grouping in the nonlinear pca is clearer : psychologists in the same area are generally close together , and there is relatively clear distinction between qualitative and quantitative areas .this assertion is true , because it describes the two component groups of the mixture identified by tca as will be seen .later on , de leeuw and mair ( 2009 ) applied multiple correspondence analysis , named also homogeneity analysis , to this data set with the scale level ordinal , and interpreted the obtained figure ( figure 8 in their paper ) with the following conclusion : the plot shows interesting rating twins of departmental areas : , , , . is somewhat separated from the other areas .this assertion does not seem to be completely true , because of masking phenomenon .our tca analysis reveals that the 39 psychologists represent a mixture of two globally homogenous groups of sizes 23 and 16 as shown in figures 17 and 18 . in figure 17 , the following borda ordering of the areas can be discerned visually : the quantitative areas of psychology are preferred for this group ._ note that _ _ is not separated from the rest , it has a middle ranking_. in figure 18 , the following borda ordering of the areas can be discerned visually : for this group of psychologists phy is the worse rated area ; further , qualitative areas are preferred for this group . for the complete data set , , butthe resulting tca map is not interpretable . for group 1 , , , so ; for group 2 , so so , group1 is somehat more globally homogenous than group 2 ; however both groups have a lot of inter block crossings .we also note that : and ; that is , the first tca dispersion measure of noninterpretable data is much smaller than the corresponding value of an interpretable maximal subset .this data set of preferences can be found in takane ( 2014 , p.184 - 5 ) : in 1978 delbeke asked 82 belgian university students to rank - order 16 different family compositions , where the 16 orders are described by the coordinate pairs for and , the index represents the number of daughters and the index the number of sons .this data set has been analyzed by , among others , heiser and de leeuw ( 1981 ) , van deun , heiser and delbeke ( 2007 ) , takane , kiers and de leeuw ( 1995 ) . in these studies ,the family composition ( 0,0 ) is considered an outlier because of its high influence and sometimes omitted from analysis . in our approachthere are no outlier items , but voters can be tagged as outliers only by the law of contradiction .our results differ from theirs : we get a mixture of two globally homogenous groups of sizes 68 and 14 as shown in figures 19 and 20 , where points represent students and the symbol represents the family composition for . 
in figure 19, we see that for this majority group of 68 students the least preferred combination of kids is ( 0,0 ) and the most preferred combination is ( 2,2 ) .the first borda axis opposes the combinations composed of ( 0 daughters or 0 sons ) to the combinations composed of ( at least one daughter or at least one son ) .further , we see that there is a bias towards boys : on the first axis the position of the point is always to the left of the point . for group 1 , , , so ; looking at the values of the students first factor scores , we notice 2 clusters : cluster 1 , characterized by small number of inter blocks crossings , is composed of 26 students with first factor score of 8 students with and 5 students with this cluster , of proportion is represented by the dots making a vertical line in figure 15 .cluster 2 , characterized by large number of inter block crossings , are quite dispersed , having first factor scores between 0.4583 and 0.0750 . in figure 20 , for the minority group of 14 students ( labeled on the biplot by their row numbers ) the least preferred combination of kidsis and the most preferred combination is .the first borda axis opposes the combinations composed of and such that to the rest .further , in this group also there is a bias towards boys : on the first axis the position of the point is always to the left of the position of the point .for group 2 , and we conclude with a summary of some aspects of tca of nega coded rank data .we note that the rank data is spherical by nature , they are represented on a permutahedron ; so a directional method , like tca of the nega coded rank data , is able to discover some other aspects of the data , which are eclipsed or masked by well established methods , such as distance or latent class based methods . like occam s razor ,step by step , tca peels the essential structural layers ( globally homogenous groups ) of rank data ; it can also identify outliers in a group .we presented a new coefficient , , that measures the global homogeneity of a group .ghc is based on the first taxicab dispersion measure : it takes values between 0 and 100% , so it is easily interpretable .ghc takes into account the following two aspects of rank data : a ) how the items are partitioned into blocks by the first axis .b ) the extent of crossing of scores of voters among the partitioned blocks , where the borda count statistic provides consensus ordering of the items on the first axis . means that the partition of the set of items into 2 or 3 blocks is faithful and for all voters their orderings of the items are intra block permutations with no crossings between the blocks . for fully ranked data ,the lower bound of is positive but unknown being an open problem .as is well known , a coefficient in itself does not show important local details in a data set .we named these important local details , local heterogeneity ; and they appear in the higher dimensions of tca outputs : so it is important to examine the sequence of tca dispersion measures and the graphical displays as expounded and professed by benzcri .benzcri , j.p .geometric representation of preferences and correspondence tables . in _pratique de lanalyse des donnes _ , vol .2 , by bastin , ch . , benzcri , j.p . , bourgarit , ch . and cazes , p. p : 299 - 305 , dunod , paris . choulakian v. ( 2013 ) . the simple sum score statistic in taxicab correspondence analysis . in _ advances in latent variables _( ebook ) , eds .brentari e.and carpita m. 
, vita e pensiero , milan , italy , isbn 978 88 343 2556 8 , 6 pages .de leeuw , j ( 2006 ) .nonlinear principal component analysis and related techniques . in mjgreenacre , j blasius ( eds . ) , _ multiple correspondence analysis and related methods _ ,chapter 4 , 107134 , chapman & hall / crc , boca raton .van deun , k. , heiser , w.j . and delbeke , l. ( 2007 ) multidimensional unfolding by nonmetric multidimensional scaling of spearman distances in the extended permutation polytope ._ multivariate behavioral research _ , 42(1 ) , 103132 ._ proof _ : using the same notation as in choulakian ( 2014 ) , we designate , by ( 12 ) we have which we get, , by triangle inequality of the l norm we have \c ) the vector containing the first and only factor scores of the columns is where for in particular , we see that and for are equispaced .so for an odd number of items for , we have _ proof _ : a ) a _ faithfully homogenous group _ consists of one response pattern , and its borda score values for a complete linear order of items , without loss of generality by relabeling of the items , will be .then the nega coded correspondence table will have only two rows and columns will be of rank 1 ; that is , there is only one principle factor , for which we note the taxicab dispersion measure by for a fixed finite integer value of ( 18 ) the elements of are value of by theorem 1 b is \a ) necessary condition .we have , \end{aligned}\]]where and for given that is a sum of positive terms , it is easy to see that * u* is the first tca principal axis , where is the characteristic function of that is , it has the value of 1 if and 0 otherwise .so for and the first tca axis divides the set of items into 2 blocks such that with and that is , the partition of the set of items is _ faithful . _furthermore , _ _ the borda scores of all voters are intra block permutations by definition of and .\b ) sufficient condition .we suppose that the partition of is faithful and there are no crossings between the 2 blocks and this implies that for for and for thus we get as in the proof of the necessary condition .
the traditional methods of finding mixture components of rank data are mostly based on distance and latent class models ; these models may exhibit the phenomenon of masking of groups of small sizes ; probably due to the spherical nature of rank data . our approach diverges from the traditional methods ; it is directional and uses a logical principle , the law of contradiction . we discuss the concept of a mixture for rank data essentially in terms of the notion of global homogeneity of its group components . local heterogeneities may appear once the group components of the mixture have been discovered . this is done via the exploratory analysis of rank data by taxicab correspondence analysis with the nega coding : if the first factor is an affine function of the borda count , then we say that the rank data are globally homogenous , and local heterogeneities may appear on the consequent factors ; otherwise , the rank data either are globally homogenous with outliers , or a mixture of globally homogenous groups . also we introduce a new coefficient of global homogeneity , ghc . ghc is based on the first taxicab dispersion measure : it takes values between 0 and 100% , so it is easily interpretable . ghc measures the extent of crossing of scores of voters between two or three blocks seriation of the items where the borda count statistic provides consensus ordering of the items on the first axis . examples are provided . key words : preferences ; rankings ; borda count ; global homogeneity coefficient ; nega coding ; law of contradiction ; mixture ; outliers ; taxicab correspondence analysis ; masking .
predicting future developments of financial asset prices is an extremely difficult task .certain problems , such as predicting the direction of movements , are nearly unsolvable . meanwhile , certain aspects can be easily predicted .for instance , one can be sure that the prices will always fluctuate intermittently , largely due to people who believe they have found the winning algorithm , the philosopher s stone .the analysis of the historic charts of price dynamics is referred to as technical analysis .the essence of the technical analysis is the hypothesis that the patterns of historic data can forecast the future price movements . on the other hand , the efficient market hypothesis ( c.f . ) states that security prices reflect fully all available information . in its century - long historydated back to the work of bachelier , the financial analysis has made use of the both approaches .the random walk hypothesis , standard deviation and the correlations between securities returns are the cornerstones of seminal papers in financial analysis .econophysics has introduced various more elaborate , mostly non - linear tools of analysis ( c.f . and references therein ) . herewe extend a recently developed technique of length - distribution of low - variability periods .strongly non - linear systems , such as turbulent fluids and plasmas , granular media , biological and economical systems , etc ., are typically characterized by scale - invariance and scaling laws .so , it should be not surprising that such seemingly different disciplines like turbulence studies , biological physics , and econophysics have many common tools of data analysis .for example , power - law distributions were first used in economics in the end of 19th century , and later found in a wide variety of systems ( often referred to as the zipf s law ) , c.f .similarly , diverse systems are known to generate signals , which are self - affine , and hence , can be characterized by the hurst exponent , c.f .further , the stable lvy distributions ( c.f . ) have been found to be relevant to all the mention disciplines ; about truncated levy flights in econophysics , c.f . .finally , multi - fractal formalism is by far the most popular tool for scale - invariant analysis of intermittent time - series , c.f .. it should be noted , however , that in many cases ( including econophysical applications ) , there is no profound understanding of the origin of multi - fractality , and hence , the multi - fractal analysis is not necessarily the optimal method . indeed , multi - fractal formalism has been devised in the context of turbulence , and is specifically suited for systems with random multiplicative cascades .meanwhile , in the case of such time - series as stock prices or heart rate variability signal , the presence of multiplicative cascades is not evident . therefore , _ there is a clear need for a deeper understanding of the character of intermittency in the case of financial time - series_. this problem can be approached by studying new independent and/or more general methods of scale - invariant analysis .for instance , in the case of heart rate variability , it has been found that the whole time - period can be clusterised into self - similarly distributed segments of approximately constant heart rate . 
in order to address this problem ,a new method has been suggested recently which is based on the analysis of the distribution of low - variability periods .the low - variability period is defined as a time period with maximal length where consecutive relative changes in realizations of the time series are less than given threshold .note that in addition to the financial assets , the low - variability period analysis has been applied to biological systems .it has been shown that in the case of the multi - affine time series , the cumulative distribution function of the low - variability periods is in the form of a ( multi - scaling ) power - law .since the opposite is not necessarily true , the low - variability period analysis is , indeed , a more universal method than the multi - affine analysis .even if the time - series is actually multi - affine , the low - variability period analysis can be still useful , because _ ( a ) _ the power - law exponent is related to the multi - fractal dimension ; this circumstance provides an easy method for checking the assumption of multi - affinity ; _ ( b ) _ low - variability periods provide higher time - resolution of time series analysis .this paper serves three main purposes ._ first _ , we are going to apply the method of the analysis of low - variability periods to the data of trading volumes .this analysis is motivated as follows .similarly to the asset prices , the trading volumes are known to fluctuate intermittently , c.f . . according to the mandelbrot s model of stock prices as a fractional brownian motion in multi - affine trading time , one could expect that the time series of trading volumes are multi - affine .the analysis of the length - distribution of low - variability periods of trading volumes serves as a test of this model ._ second _ , we discuss the consequences of the presence of power laws of low - variability periods ._ third _ , an attempt is made to generalize the method to multivariate time - series ( e.g. a stock price together with the trading volume ) .it should be noted that in the case of multi - affine analysis , there is no simply interpretable way for such a generalization .we start with a brief description of the method devised in ref . , a low - variability period is defined as such a contiguous time - interval ],title="fig:",width=491 ] [ conv - m1dax ] ._ method 0 ( volume)_. the scaling exponents are calculated for spx time series by using conditions ( [ delta ] ) and ( [ m1 ] ) with days and $],title="fig:",width=491 ] [ conv - m1spx ] according to eq . ( [ m2 ] ) , the low - variability period is terminated when price change ( rise or drop ) is significant ( i.e. larger than the threshold parameter ) , and the volume increases faster than the threshold . 
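before turning to eq. ([m3]), the opposite sign-dependent definition, the basic single-factor construction can be made concrete. the sketch below extracts low-variability periods from a univariate series as maximal runs of steps whose relative change stays below a threshold, and estimates the apparent scaling exponent of their cumulative length distribution. the exact variant of the threshold condition and the synthetic test series are assumptions of this sketch, not the paper's data or code.

```python
import numpy as np

def low_variability_lengths(x, delta):
    """Lengths of maximal runs where the relative one-step change of the
    series x stays below the threshold delta (one reading of the
    single-factor condition; the paper's definition may differ in detail)."""
    small = np.abs(np.diff(x) / x[:-1]) < delta
    lengths, run = [], 0
    for ok in small:
        if ok:
            run += 1
        elif run > 0:
            lengths.append(run)
            run = 0
    if run > 0:
        lengths.append(run)
    return np.array(lengths)

def scaling_exponent(lengths):
    """Slope of the cumulative distribution P(tau >= t) in log-log coordinates."""
    t = np.sort(np.unique(lengths))
    p = np.array([(lengths >= ti).mean() for ti in t])
    mask = (t > 1) & (p > 0)
    return -np.polyfit(np.log(t[mask]), np.log(p[mask]), 1)[0]

# Toy usage with a synthetic multiplicative random walk (not market data).
rng = np.random.default_rng(0)
x = np.cumprod(1.0 + 0.01 * rng.standard_normal(20000))
print(scaling_exponent(low_variability_lengths(x, delta=0.02)))
```

the two-factor definitions of eqs. ([m1])-([m3]) only add a second, volume-based condition to the termination criterion; the run-length bookkeeping stays the same.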
equation ( [ m3 ] ) represents a definition , opposite to eq .( [ m2 ] ) : the low - variability period is terminated when the price change exceeds the threshold parameter , and the volume decreases faster than the threshold .these definitions are useful for studying the asymmetry between the volume rise and drop : if the multi - scaling exponent turns out to be different for methods 2 and 3 , there must be an asymmetry between those volume spike and squeeze events , which are accompanied by a large price variability .indeed , the price condition in eqns ( [ m2 ] ) and ( [ m3 ] ) is the same ; so , the differences in the scaling exponent must be due to the different effect of the volume condition ( [ m2 ] ) and ( [ m3 ] ) .let us refer back to method 1 [ eq .( [ m1 ] ) ] . with this method, the events terminating the low - variability periods represent a superposition of the respective events for the methods 2 and 3 [ eqns ( [ m2 ] ) and ( [ m3 ] ) ] .so , if the scaling exponents calculated according to method 2 are very similar to the ones calculated according to method 1 , we can conclude , that the amount of the low - variability periods defined by method 3 is insignificant .the same holds true if the scaling exponents of method 3 tend to be similar to the ones of method 1 .the multi - factor scaling exponents are found for the dax data with days . in fig .[ 2f - logtau ] , the scaling exponents of dax index are plotted against with _ ( a ) _ , _ ( b ) _ , _ ( c ) _ and _ ( d ) _ .the methods 0 - 3 [ eqns ( [ m0])([m3 ] ) ] are used for definition of low - variability periods .of dax time series are plotted against with _( a ) _ , _ ( b ) _ , _ ( c ) _ and _ ( d ) _ ,title="fig:",width=529 ] [ 2f - logtau ] by giving a small value to the thresholds and in any of eqns ( [ m0])([m3 ] ) , the number of longer low - variability periods becomes small .therefore , we have a large number of short low - variability periods ( compared to high values ) ; this leads to high values of the scaling exponent . in fig .[ 2f - logtau]a , the threshold value of is set to very low level of . from eq .( [ m1 ] ) it can be seen that then the two - factor model leads to results which are very similar to the single - factor ( i.e. price factor ) model .likewise , the larger the volume threshold , the larger the difference between the -curves of the single- and multi - factor models ( at , the curves are rather dissimilar ) . an important issue is the difference between the curves of methods 2 and 3 .one can notice that there is a different behaviour at the small values of the parameter , an evidence of the asymmetry between volume rise and drop .one can also notice that the scaling exponents calculated according to the method 3 [ eq .( [ m3 ] ) ] are lower than the ones calculated according to methods 1 and 2 [ eqns ( [ m1 ] ) and ( [ m2 ] ) ] .meanwhile , the difference between the outcomes of the methods 1 and 2 [ eqns ( [ m1 ] ) and ( [ m2 ] ) ] is minor .therefore , we conclude , that high price variability is typically accompanied by increasing volume .this conclusion is independent of the price / volume pre - history ( i.e. is valid both for short and long low - variability periods ) . in this paper, we have not analysed the problem of higher - rank multi - factor models. however , this can be useful for e.g. 
multi - stock data analysis , where each stock price provides an independent input stream .this situation will be addressed in further studies .the concept of low - variability periods has been proven to be useful for various econophysical issues ( not just limited to the scope of stock prices / indices and currency exchange rates ) .so , we found that the time series of stock trading volumes obey multi - scaling properties , similarly like the price data .however , while the multi - scaling exponent of the price time series follows a pattern , characteristic to the multi - affine data , in the case of trading volumes , there is a clear departure from that pattern ( one can say that the fluctuations are less intermittent ) .further , we have shown that the presence of the multi - scaling distribution of the length - distribution of the low - variability periods gives rise to a super - universal scaling law for the probability of observing next day a large price movement .this probability is inversely proportional to the length of the ongoing low - variability period , a fact which can be used for risk forecasts .finally , the multi - factor model is proposed for time series analysis . in this paper , only the simplest two - factor model is described and applied to stock price and volume data .the low - variability periods of multi - variate time series can be defined in different ways ; for instance , the threshold conditions applied to the single data streams can be combined by logical `` and '' , as well as by logical `` or '' . in the our case of price and volume data , three different definitions of low - variability periodshave been applied ( in order to study the asymmetry between the volume rise and drop , we have also applied sign - dependant threshold conditions ) .this analysis led us to the conclusion that high price variability is typically accompanied by increasing trade volume , independently of the prior events of the market . in the light of this observation , the common thesis of technical analysis, `` increased trading volumes confirms the price trend '' , becomes less useful .indeed , most of the significant price jumps are accompanied by increased trading volumes ; so , almost all the `` price trends '' pretend to be `` confirmed '' .the support of estonian sf grant no .5036 is acknowledged .we would also like to thank prof .jri engelbrecht for fruitful discussions .e. fama , journal of business 38 ( 1965 ) 34 e. fama , journal of finance 25 ( 1970 ) 383 e. fama , journal of finance 46 ( 1991 ) 1575 l. bachelier , theorie de la speculation .( doctoral dissertation in mathematical sciences , faculte des sciences de paris , defended march 19 , 1900 h. markowitz , journal of finance 7 ( 1952 ) 77 w. sharpe , journal of finance 19 ( 1964 ) 425 f. black , m. scholes , journal of political economy 81 ( 1973 ) 637 r.n .mantegna , h.e .stanley , an introduction to econophysics : correlations and complexity in finance , cambridge university press , cambridge , 1999 m. ausloos , ph .bronlet , physica a 324 ( 2003 ) 30 x. gabaix , p. gopikrishnan , v. plerou and h.e .stanley , nature 423 ( 2003 ) 267 xavier gabaix , power laws and the origins of the business cycle , mit , department of economics and nber , august 20 , 2003 y. fujiwara , c. di guilmi , h. aoyama , m. gallegati , w. souma , physica a 335 ( 2004 ) 197 y. fujiwara , physica a 337 ( 2004 ) 219 b.b .mandelbrot , the fractal geometry of nature , w.h.freeman , new york , 1982 n. vandewalle , m. ausloos , eur .j. 
b 4 ( 1998 ) 257 r. benzi , g. paladin and a. vulpiani and m. vergassola , j. phys . a : math 17 ( 1984 ) 3521 p. bernaola - galv ' an , p.ch .ivanov , l.a.n .amaral and h.e .stanley , phys .87 ( 2001 ) 168105 j. kalda , m. skki , m. vainu , m. laan , arxiv : physics/0110075 v1 26 oct 2001 , m.skki , j. kalda , m. vainu , m. laan , physica a 338 ( 2004 ) 255 m. bachmann , j. kalda , j. lass , v. tuulik , m. skki , h. hinrikus , method of nonlinear analysis of electroencephalogram for detection of the effect of low - level electromagnetic field , accepted ( 2004 ) by : med .comput .p. gopikrishnan , v. plerou , x. gabaix , h.e .stanley , phys . rev .e 62 ( 2000 ) 4493 m. ausloos , k. ivanova , in : h. takayasu ( ed . ) , the applications of econophysics , proceedings of the second nikkei econophysics symposium , springer verlag , berlin , 2004 , p. 117s. maslov , phys .74 ( 1995 ) 562
the scaling properties of the time series of asset prices and trading volumes of stock markets are analysed . it is shown that similarly to the asset prices , the trading volume data obey a multi - scaling length - distribution of low - variability periods . in the case of asset prices , such scaling behaviour can be used for risk forecasts : the probability of observing a large price movement the next day is ( super - universally ) inversely proportional to the length of the ongoing low - variability period . finally , a method is devised for a multi - factor scaling analysis . we apply the simplest , two - factor model to equity index and trading volume time series . key words : econophysics , multi - scaling , low - variability periods . pacs : 89.65.gh , 89.75.da , 05.40.fb , 05.45.tp
protein - protein interactions play key roles in many cellular processes such as signal transduction , bioenergetics , and the immune response . moreover , many proteins function in the context of protein complexes of variable sizes and lifetimes. examples of such complexes are ribosomes , polymerases , spliceosomes , nuclear pore complexes , cytoskeletal structures like the mitotic spindle or actin stress fibers , adhesion contacts , the anaphase - promoting complex , and the endocytotic complex . for yeast , 800 different core complexeshave been identified , suggesting the existence of 3000 core complexes for humans .in addition it has been shown for yeast that most protein complexes are assembled just - in - time during the course of the cell cycle .in fact many protein complexes in the cell are highly dynamic , with fast turnover of many components .one can argue that their dynamics , although experimentally very hard to access , is biologically more relevant than their equilibrium properties .therefore a systematic understanding of the dynamics of protein complexes in cells is one of the grand challenges in quantitative biology .the elementary unit of all of these cellular processes is the bimolecular protein - protein interaction .the strength and specificity of protein - protein association are determined by the integrated effect of different interactions , including shape complementarity , van der waals interactions , hydrogen bonding , electrostatic interactions and hydrophobic effects .for example , the importance of electrostatic interactions has been demonstrated by experimental measurement at different ionic strengths . to a first approximation ,bimolecular reactions are characterized by on- and off - rates .the equilibrium association constant ( or affinity ) then follows as the ratio of the two . from a conceptual point of view , on- and off - rates are very different . on - ratesare commonly believed to be controlled by the diffusion properties as well as by long - ranged electrostatic interactions , whereas off - rates are rather controlled by short - ranged interactions like hydrogen bonding and van der waals forces .the main features of the dynamics of protein association can be conceptualized within the framework of the _ encounter complex _ . to this end, the association is divided into two parts .first , mutual entanglement - the encounter complex - is achieved by the proteins due to a transport process including mainly diffusion but also electrostatic steering on small length scales .if diffusion - controlled , classical continuum approaches can be used to describe this part of the process . to form the final complex , the system then has to overcome a free energy barrier due to local effects like dehydration of the binding interface . due to the various molecular contributions involved in this step , here the two binding partners essentially have to be modelled at atomic detail .moreover the solvent may need to be treated explicitly and one might has to account for conformational changes .thus , it appears reasonable to use the encounter complex as a crossover point from a detailed , atomistic treatment to a coarse - grained model and vice versa .thermal fluctuations are an essential element of protein - protein encounter because they allow the two partners to exhaustively search space for access to the binding interface . 
from the viewpoint of stochastic dynamics ,protein - protein association is a first passage time problem which can be addressed mathematically in the framework of langevin equations .the application of langevin equations to association phenomena goes back to early work in the colloidal sciences . in these early approaches ,the reactants were considered to have small spatial extensions and to be uniformly reactive . for large biomolecules like proteins ,the situation is fundamentally different .typically , proteins and other biomolecules have specific sites on their surface , where a particular binding reaction can take place . therefore , such binding events are subject to intrinsic geometric constraints for every particular protein - protein pair or larger assembly . the standard model for ligand - receptor interactionwas introduced by berg and purcell in the context of chemoreception based on the idea of using reactive patches to model anisotropic reactivity . due to anisotropic reactivity ,also rotational diffusion becomes important .shoup et al . showed that the effect of rotational diffusion can strongly increase the association rate between a receptor with a flat reactive patch and uniformly reactive ligands .later , analytic expressions for the association rate between two spherical particles with both carrying a flat axially symmetric and asymmetric reactive patch were derived .similar concepts were also applied by schulten and colleagues .for many important aspects , analytical approaches are not possible and computer simulations are required .this approach has been used early for protein - protein association .the importance of electrostatic interactions for long ranged attraction was also emphasized by brownian dynamics simulations of protein - protein encounter .if atomic structure is taken into account , then successful encounters are defined by simultaneous fulfillment of two to three distance conditions between opposing residues on the two surfaces .brownian dynamics have also been used for the simulation of high density solutions , e.g. by bicout and field who studied a cellular `` soup '' containing ribosomes , proteins and trna molecules , or by elcock and coworkers who simulated a crowded cytosol for 10 . in order to develop a quantitative framework for modelling the dynamics of protein complexes, it is essential to understand the relative importance of generic principles and molecularly determined features of specific systems of interest .only a good understanding of this issues will allow us in the future to develop reasonable coarse - graining strategies to address also large complexes of biological relevance . in this study, we therefore address how general principles guiding the diffusional association of biomolecular pairs are modulated by their particular physicochemical properties . to this endwe have selected three molecular systems of interest with different steric and electrostatic properties .one of the best studied bimolecular complexes is the extracellular ribonuclease barnase and its intracellular inhibitor barstar .both proteins carry a net charge of and , respectively , which leads to a considerable electrostatic steering . 
considering the structure of the two proteins , barnase has a bean - like form , matching well on a large reactive area with the nearly spherical barstar .a classic example of electrostatically - driven protein association is the iso-1-cytochrome _ c _ - cytochrome c peroxidase ( cytc : ccp ) complex , charged with and , respectively , and exhibiting dipoles aligned well with the reactive areas .finally , we selected the medically important complex of a peptide fragment of p53 and its inhibitor mdm2 , which is used for anticancer drug design . in this system , electrostatic attraction plays a minor role . on the other hand ,the steric match of the two surfaces is of particular importance here .it is a perfect example of a key - lock binding interface , where p53 is buried deep into a cleft on the mdm2 surface . in this paperwe systematically explore the effect of various coarse - graining procedures on the rate for protein - protein encounter for the three selected model systems .we revisit early approaches based on langevin equations and combine them with current knowledge on molecular structure .the paper is organized as follows : in sect.[model_and_methods ] , we present our different stochastic models and describe the methods we use to parameterize the three considered bimolecular model systems . sect . [ results ]contains the main findings of our study , which are discussed and summarized in sect.[summary_and_discussion ] .one aim of this work is to determine how important specific details of the model proteins are with respect to the association properties .therefore , we considered three different levels of detail as depicted in fig .[ different_model_scheme ] for the three chosen systems . in the most generic approach ( ) , we only considered the steric interaction between spherical particles covered with reaction patches .as a first refinement ( ) , an effective coulombic interaction was introduced using the dipolar sphere model ( dsm ) .finally , since our langevin equation approach is particularly suited to capture anisotropic transport , we consider a more refined version for protein sterics ( ) . in this approachthe excluded volume of each protein was modeled by 8 - 25 smaller beads . uses the dsm as well . in fig .[ different_model_scheme ] , we also show the full structures in the bottom row as surface representations , including the locations of the binding interfaces . in the following , the general properties of the simulation model and the different techniques used in this work will be explained .the diffusion of the protein model particles is described by an anisotropic diffusion matrix in all versions of our model . in ref . , de la torre and coworkers present a method to calculate this diffusion matrix from the pdb structure of a protein .this method has been implemented in a software called hydropro which is provided online by the same authors ( http://leonardo.fcu.um.es/macromol ) .the basic concept is to put spheres of a certain size at the position of any non - h atom .the volume of these spheres effectively models a fixed hydration shell .this construct is then filled up with smaller , densely packed , but non - overlapping spheres .since the hydrodynamic properties of a rigid body are determined by its outer boundary only , a shell of these small spheres is generated by deleting all spheres which have a maximum number of possible neighbors . 
to this shell a sophisticated technique is applied , which has been developed by de la torre and colleagues over the years , to calculate the diffusion matrix of such a cluster of non - overlapping spheres ( see references in ) .several system properties are implicitely contained in the mobility matrix , such as ambient temperature , as well as the density and dynamic viscosity of the solvent , where we chose the respective parameters of water , and s. for simplicity , hydrodynamic interactions were not introduced in our models , because the corresponding effect on the association rates is expected to be well below 10% . for the integration of the langevin equation , which describes the stochastic motion of the particles , we follow an approach which has been recently developed to model cell adhesion via reactive receptor patches .let be a six - dimensional vector describing position and orientation of a particle at time .since the noise due to brownian motion is additive ( which means that it does not depend on due to a constant mobility matrix ) , the langevin equation is given by : here , is a six - dimensional vector containing the force and torque acting on the particle , and denotes gaussian white noise : as explained in app .c of ref . , the euler algorithm can be used to solve a discretized version of this equation : for proteins , the typical orders of magnitude are and nm .therefore , a reasonable choice for the time step is , as this leads to a mean step length of nm .the mobility matrix of a particle is defined in a particle - fixed coordinate system .thus , the whole step has to be calculated in terms of particle - fixed coordinates and then transformed to the laboratory coordinate space . in particular, this transformation implies a rotation regarding the orientation of the particle .special attention has to be payed to the force , which is typically calculated in the global frame of reference and hence has to be transformed to particle space before eq .[ langevin_equation_discrete ] can be evaluated .this back - transformation is achieved by applying to .since rotation matrices simply consist of a list of orthonormal vectors , their inverse is equal to the transposed matrix .thus eq.[langevin_equation_discrete ] can be rewritten as + { \mathcal o}(\delta t^2)\text { . }\label{langevin_equation_discrete_rotated}\ ] ] as and are six - dimensional and contain information about torque and rotation , eq .[ langevin_equation_discrete_rotated ] is only formally correct , as acts on both the translational and rotational parts of the respective vectors separately . in each step of the simulation , a displacement vector drawn for each particle as described above .if this global displacement leads to any violation of the hardcore repulsion , all suggested displacements are rejected and new are calculated .this procedure continues , until an update of all positions and orientations is found which does not lead to any overlap . in this way , the constraint according to the excluded volume effect is included in the stochastic motion .the spherical reactive patches are not taken into account for the steric interactions , i.e.they may not only overlap pairwise but also with the model particles .one would expect that our procedure leads to errors of order if two particles are in close proximity of order .however , it has been shown for a different system that in practise the deviation from the expected behavior is very small and thus the approach is reasonable . 
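the update described above can be written down compactly. the sketch below shows one euler step of the overdamped langevin equation for a rigid particle with a 6x6 mobility matrix defined in the particle frame: the lab-frame force and torque are rotated into the particle frame, a displacement with noise covariance 2*kT*M*dt is drawn there, and the result is rotated back. the function names, the use of a cholesky factor for the noise and the small-rotation update via the rodrigues formula are implementation choices of this sketch, not necessarily those of the authors' code; the hard-core overlap rejection discussed above is omitted.

```python
import numpy as np

def rotation_from_vector(phi):
    """Rotation matrix for a small rotation vector phi (Rodrigues formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def euler_step(x, R, M, force_torque_lab, kT, dt, rng):
    """One Euler step for position x (lab frame) and orientation R
    (particle-to-lab rotation matrix), with the 6x6 mobility matrix M
    given in the particle frame and the generalized force (force, torque)
    given in the lab frame."""
    R6 = np.zeros((6, 6))
    R6[:3, :3] = R
    R6[3:, 3:] = R
    f_particle = R6.T @ force_torque_lab                 # rotate force/torque into the particle frame
    drift = M @ f_particle * dt
    noise = np.linalg.cholesky(2.0 * kT * dt * M) @ rng.standard_normal(6)
    dX = drift + noise                                   # particle-frame translation and rotation increment
    x_new = x + R @ dX[:3]                               # translate in the lab frame
    R_new = rotation_from_vector(R @ dX[3:]) @ R         # compose the small rotation
    return x_new, R_new

# Example with a diagonal (hypothetical) mobility matrix, arbitrary units.
rng = np.random.default_rng(2)
M = np.diag([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
x, R = np.zeros(3), np.eye(3)
x, R = euler_step(x, R, M, np.zeros(6), kT=1.0, dt=1e-3, rng=rng)
```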
as mentioned before, the mobility matrix represents anisotropic diffusion . for large times ,anisotropic diffusion crosses over into isotropic diffusion because the information about the initial orientation gets lost after a certain relaxation time due to the rotational diffusion .in general , translational and rotational diffusion are coupled so that large time steps can not be used . however , for the particular systems studied here , we found that the diffusive coupling is a very small effect .in particular , the major entries in the diffusion matrix of the proteins used here according to hydropro multiplied with different powers of the stokes radius cm to make the dimensions comparable are , , .therefore , the effect of diffusive coupling is and smaller than rotational and translational diffusion , respectively . finally , the typical time scale at which the cross - over is expected can be calculated to be .time steps of this magnitude were rarely used in the simulations ( see below ) , so that for most of the steps , the anisotropicity is well preserved .therefore we can safely neglect changes in the anisotropicity of the mobility matrix .the simulations were performed in a cubic box with periodic boundary conditions .schreiber and fersht used concentrations between m and m in their experimental studies of the association rate of the barnase : barstar complex . the average volume containing one particle at a concentration is with the avogadro number .hence , the edge length of a cubic boundary box representing concentration can be calculated from {v}=1/\sqrt[3]{c n_a} ] .the probability that , e.g. , the particular pair reaches encounter at a certain time before the three other possible pairs ( , , ) , is therefore : thus , the probability that any of the four possible particle pairs reaches encounter before the respective three other pairs do , is as just calculated , i.e. has again a poisson form like and . in general , for higher numbers of particle pairs , we expect to again find an exponential distribution of the time to first encounter with the encounter frequency .this quadratic behavior is nicely confirmed by the data shown in fig.[randinit_finite_size ] , which suggests that even for small systems with only two particles , no severe finite size effects have to be expected . in particular , this rules out that larger numbers of particles lead to noticeable three - body interactions or hindering of the encounter process .one feature of special interest which we can address with our langevin equation approach is the pathway through which the encounter is formed .we dissected the encounter process into several parts as visualized in fig .[ alignment_states_scheme ] . at the start of each run, the systems were prepared in the unaligned state , as described earlier .a state of close approach which however does not allow for binding is called .the two model proteins will switch between states and a number of times , until they finally reach the encounter complex due to a favorable combination of translational and rotational diffusion . in the following ,each occurance of will be termed a _contact_. thus counts the number of unsuccessful contacts before the encounter is finally formed .a separate set of simulations was performed to measure the distribution of .furthermore we analyzed the distribution of return times .this is the time it takes for two model proteins to get into contact again ( ) after having lost translational alignment ( ) , i.e. after they were in close proximity . 
finally , we determined the distribution of resting times in translational alignment before the two model particles separated again .as an example , fig .[ alig_no_example ] shows the distribution of for the barnase : barstar model system at m in the framework of .surprisingly the distribution of the number of contacts has again a poisson form .note that the number of unsuccessful trials in state can be rather large ( up to ) .we also found that the distribution of is roughly independent of concentration .this is reasonable , as after the two proteins were in contact once , the further encounter process is guided by returns to state and thus should be more or less independent of system size .[ alig_onoff_example ] shows that the return time ( plotted with the plus - symbol ) is not exponentially distributed .instead , it follows a power law and undergoes an exponential cutoff due to the finite size of the boundary box at large .therefore , there is a high probability for very small return times , i.e.situations , where the two model proteins do not really separate , but immediately after loosing translational alignment ( ) get closer again ( ) .the power law behavior of the return time is consistent with the problem of a random walk to an absorber in three dimensions . in principal , these two situations are equivalent since the relative motion of the two proteins while unaligned be approximately understood as an isotropic random walk , and the criterion for going over to translational alignment reflects an absorbing boundary in the configuration space of relative positions .the distribution of resting times ( plotted with the cross - symbol in fig .[ alig_onoff_example ] ) follows the same power law as , but the exponential cutoff occurs much earlier .the reason is that here the cutoff is determined by the region in configuration space where the two model proteins are in state .as this is much smaller than the whole volume of the boundary box , in which they are unaligned and therefore in state , a random walk in state will end earlier .the differences we obtain in the distributions of and when using the variants and compared to are generally very small and unlikely to account for any deviations in the overall encounter rates .also , the distribution of is always well described by a single exponential decay .however , the inverse decay length significantly varies between the different situations .therefore , changes in the overall encounter rate are mainly caused by a different probability for reaching state from state .this is reasonable when considering that the interactions are strongly localized and can thus only act while the system is in the aligned state .so far we have only considered barnase : barstar ( ) to demonstrate how our computational model works .we now use our setup for a more comprehensive investigation .in particular , we also apply our method to two other systems , cytochrome c and its peroxidase ( ) as well as the p53:mdm2 complex ( ) . those represent systems with different interface characteristics and where the role of electrostatics is either much stronger ( ) or much weaker ( ) than for . to this end, all the previously described quantities were measured for 8 different concentrations .furthermore , to find out how the choice of the radius of the reactive patch affects the results , we used patch radii of and in addition to the initially considered value of . 
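since concentration enters these simulations only through the size of the periodic box, the relation quoted earlier, edge length equal to the inverse cube root of the particle number density c*N_A, fixes the box size for each concentration. a minimal numerical sketch follows; the example concentrations are illustrative micromolar values, not necessarily the exact values simulated.

```python
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def box_edge_nm(c_molar):
    """Edge length (nm) of a cubic box containing one particle on average
    at molar concentration c: b = (c * N_A)**(-1/3), with c in mol/dm^3."""
    b_dm = (c_molar * N_A) ** (-1.0 / 3.0)   # edge length in dm
    return b_dm * 1e8                        # 1 dm = 1e8 nm

for c in (2e-6, 20e-6):                      # 2 uM and 20 uM, for illustration
    print(f"c = {c*1e6:4.0f} uM  ->  box edge ~ {box_edge_nm(c):5.0f} nm")
```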
tab .[ main_data_table ] lists the encounter rates as obtained from these simulations .the rates are all roughly of the same order of magnitude . yetseveral interesting qualitative features are readily apparent .first , for decreasing patch sizes , the rates generally decrease .second , this effect is weaker for to , which basically means that the electrostatic attraction and orientation due to the dipole interaction are indeed enhancing the encounter .the strongest effect of the electrostatic interaction is obtained for cytc : ccp , which is the system with the largest monopole and dipole and the best alignment of the directions of the dipoles and the reactive patches . on the other hand p53:mdm2is nearly unaffected by the effective charges , due to its weak monopole charges and , additionally , an unfavorable alignment of the dipolar interaction and the reactive surface area .furthermore , regarding the results with detailed steric structure , the effect on the rate is correlated with the deviations of the protein forms from the spherical excluded volume approach in and .this deviation is smallest for cytc : ccp and largest for p53:mdm2 . the findings for the encounter rate are also reflected in the results for .as expected , an increase in correlates with a decrease in .the only exception is cytc : ccp observed in , which is also special in regard to the effect of patch size . here , the effective coulombic interaction is strongest and the dipole moment is best aligned with the reactive patches . therefore ,having reached state once , the proteins do systematically orient towards , while they are additionally strongly steered back towards when loosing their translational alignment .this behavior is the stronger the closer the model proteins have approached once i.e. for the case of small patch sizes , where state the smallest distance . while this only explains the inversion in the behavior as a function of patch size , is obviously still slightly decreasing with smaller patch sizes. this can be explained by the fact that the time to the first approach of state is larger for smaller patches , as this implies a smaller relative distance .this obviously compensates the fact that afterwards the encounter is formed even quicker , as reflected by the decreasing . the strong correlation between the encounter rate and the mean number of contacts is also evident from the correlation plot in fig.[correlation_plot ] .indeed , seems valid for most of the different systems and models .it is noteworthy that the prefactor is very similar in all cases .basically , this means that one unsuccessful contact takes the same amount of time on average , no matter what the local details of the system are .this gets more obvious recalling the distributions of the resting and return times and in fig .[ alig_onoff_example ] , which shows that . 
as the average time for one contact will be approximately , it is dominated by , which is only marginally influenced by the local details of the system and the chosen model .therefore it can be concluded , that for and the incorporation of a more detailed modeling approach influences and , but not the overall characteristics of the encounter process .the only exceptions for the clear correlation of and are and for the case cytc : ccp ( ) , where is nearly independent of because of the strong electrostatic interaction .this is consistent with the earlier finding , that the behavior of cytc : ccp is qualitatively different , as its electrostatic interactions would facilitate long - lived nonspecific encounters between the proteins that allowed the severe orientational criteria for reaction to be overcome by rotational diffusion .for all three systems studied , in the smallest patch size leads to a somewhat artificial slowing down , because in this case an overlap of the patches is rather hindered by the beads modeling the protein structure .we next address the dependence of the data on the size of the reaction patches in more detail .this behavior is exemplary studied with the barnase : barstar model system . in fig .[ patch_size_plot ] , the encounter frequency has been obtained from simulations for barnase : barstar - like model particles in the framework of at several concentrations and varying patch sizes .all values in the figure have been scaled with the concentration , which leads to data collapse .it is obvious that as gets larger than at around , the reactive patch covers the whole model particle and we therefore cross over to the smoluchowski limit of isotropic reactivity , where .however , at high densities and large , the patches span a large part of the simulation box of edge length , and do immediately encounter for a threshold value of , where the sum of the patch diameters equals the triagonal .thus , the encounter frequency must diverge with , where we suppose , as the volume of configurational space without immediate encounter is decreasing with .this assumption in addition with the smoluchowski behavior would lead to for large , which follows the data in fig .[ patch_size_plot ] well ( black dashed lines ) . as already mentioned it is well known that the electrostatic interaction of proteins can severely increase the association rate . however , under physiological salt conditions , coulombic interactions are screened by counter ions in the solution on a small length scale of approximately 1 nm . 
thus , deviations from case without effective charges will only arise for small .[ patch_size_electro_plot ] shows the results of respective simulations for compared to the results for , as considered before .indeed , for large patch radii , the results are similar , while for smaller , the encounter rates in are clearly higher compared to .however , the crossover to a power law behavior with roughly can be detected for very small , but at a prefactor of about 50 times larger than for .the main goal of this work was to model protein encounter in a generic framework which allows us to include molecular details without making future upscaling to larger complexes impossible .our model approach incorporates steric , electrostatic and thermal interactions of the proteins considered .these interactions are thought to be the major factors governing protein encounter .not included are conformational changes of the proteins upon association , related entropic terms , and the molecular nature of the surrounding solvent that becomes relevant at close distances .the model parameters are extracted from the atomic structures available in the protein data bank by generally applicable protocols as described in sect.[model_and_methods ] . in principle, these methods of data extraction can be fully automatized .the biggest advantage of our coarse - grained model is the possibility to extend the simulations to large scales in terms of particle numbers , time and system size . in many of the earlier studies ,the system was prepared already close to encounter and the overall association rate was then calculated via a sophisticated path - integral like procedure . in contrast , our simulations account for the whole process of diffusional encounter and is thus rather general , allowing for spanning large time scales via our adaptive time step algorithm . in particular ,each set of simulations consists of to runs of lengths up to the order of seconds and could be performed on a standard cpu within hours of computer time .being able to directly obtain the first passage times ( fpt ) of the encounter processes in our model allows to check the validity of several phenomenological assumptions .first of all , the fpt distribution matched very well a poisson process with a single stochastic rate , as seen in fig .[ randinit_histogram ] , which validates the notions of encounter and association .moreover , our approach provides two ways of controlling the particle density and for both cases the results corresponded well to the expected scaling .first , the concentration is inversely correlated with the size of the periodic boundary simulation box .we show that the encounter rate grows linearly with the particle concentration .furthermore , leaving the box size constant , the concentration can also be varied by adding a higher number of particles . considering only the _ first _ encounter of any of the possible complementary pairs of model particles , the mean first passage time to this eventis not only lowered by a factor but we show that the expected behavior is an enhancement of the encounter frequency by , which is nicely matched by the results of the simulations. therefore we can conclude that the computational model studied here satisfies the general requirement of stochastic bimolecular association processes that describe binding by a single rate constant . 
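the statement that the fpt distribution is well described by a single stochastic rate can be checked directly from the recorded first-passage times. in the sketch below the rate is estimated as the inverse mean fpt (the maximum-likelihood estimate for an exponential distribution) and compared with the slope of the empirical log-survival function; the synthetic data and function names are assumptions used purely for illustration.

```python
import numpy as np

def encounter_rate_from_fpts(fpts):
    """Estimate a single encounter rate from first-passage times, assuming a
    Poisson (memoryless) encounter process with exponentially distributed FPTs."""
    fpts = np.sort(np.asarray(fpts, dtype=float))
    rate_ml = 1.0 / fpts.mean()                              # maximum-likelihood estimate
    survival = 1.0 - np.arange(1, len(fpts) + 1) / (len(fpts) + 1.0)
    rate_fit = -np.polyfit(fpts, np.log(survival), 1)[0]     # slope of the log-survival function
    return rate_ml, rate_fit

# Consistency check on synthetic exponential FPTs with rate 2.5 (arbitrary units).
rng = np.random.default_rng(1)
print(encounter_rate_from_fpts(rng.exponential(scale=1.0 / 2.5, size=5000)))
```

agreement between the two estimates (and a straight log-survival plot) is a quick diagnostic that the encounter process is indeed captured by one rate constant.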
to test our model against known resultswe have chosen three well - known bimolecular systems with different characteristics .the barnase : barstar complex is the gold standard for protein - protein association and characterized by relatively strong electrostatic steering .the association of cytochrome c and its peroxidase is even more strongly affected by coulombic attraction . here , both proteins have a rather spherical form .finally , the p53:mdm2 complex has a different characteristic with a very small net charge and a deep cleft perfectly matching the small peptide p53 , whose reactive area is therefore nearly spanning over its whole surface .these model systems were purposely chosen to check whether our effective representations of the protein properties would lead to reasonable and significantly distinguishable results . indeed , this is the case as the discussion of the results in tab .[ main_data_table ] in the respective section shows . when comparing the results for the encounter rates in tab.[main_data_table ] with previous studies from the field of bimolecular protein association , several aspects have to be kept in mind .first , throughout this study , we do only consider the _encounter _ of our model particles .as explained in the beginning , the complete association of the complex still lacks the step over a final free energy barrier , which is due to effects such as the dehydration of the protein surfaces and thus requires more detailed modelling . in the framework of our approach , this final step could be modelled by a stochastic rate criterion , where the rate can be obtained by transition state theory from the energy landscapes characterized in atomistic calculations . in any case , any additional process to be included can only lower the values found in our study . in the work on barnase: barstar by schreiber et al . , the authors reported that the association between barnase and barstar is a diffusion - limited reaction .the argument for this is that the association rates at high ionic concentrations in the solution , i.e. for the limit in which the electrostatic steering gets negligible , are clearly lowered by the addition of glycerol , which will lead to slower diffusion . assuming diffusion control ,the reactive step over the final barrier should be kinetically unimportant , as generally discussed in ref .indeed , we see that our results for the encounter rates lead to values in the correct order of magnitude of , which is similar to the experimental value obtained by schreiber et al . for the association constant of barnase : barstar at physiological salt .however , the basal association rate , i.e. the rate at high ionic strength , is reported as from experiments . given that the association process of brn : brs is diffusion limited, these findings should actually coincide with our values for .but as we already discussed in the results section , in our simulations the influence of the effective electrostatics introduced in do not result in such a drastic change of the encounter behavior . in several earlier approaches ,similar problems have been addressed by computational and analytical studies . 
in work by zhou and coworkers , basal encounter rates for particles with reactive patcheshave been found to be and , that is closer to the basal rates reported by schreiber and coworkers .it has to be noted that , in both cases , the patches were flat areas above the surface of the spherical model particles , which had a smaller angular extension compared to our cases , and especially required a much closer translational approach ( in ) to form the encounter .if we expand the graph in fig.[patch_size_plot ] to smaller patch radii like , we also find basal rates in the order of .also , the deviation between and , i.e. the influence of the effective electrostatics , is more prominent and could enhance the encounter rate by about two orders of magnitude , which is consistent with the findings in the previously cited work .there , the effect of coulombic interaction is reflected with a boltzmann factor due to a pairwise coulomb energy .this approach works well , as shown in ref . , and has been recently used in a more complex model study of the energy landscape of protein - protein association .any model for the reaction patches has to rely on results obtained from more detailed modeling .the surface of a protein is typically densely covered by water molecules due to the hydrophilic nature of its surface .this hydration shell has a thickness of about 3 and will therefore in principal hinder the approach of two proteins to distances below 6 . setting the encounter patches to values below this threshold of 3 would then mean that part of the dehydration would already have happened before the encounter is actually formed , which is probably hardly described by simple diffusion with drift . moreover , all of the considered protein systems feature distinct key - lock binding interfaces regarding the steric structure , apart from some flexibility due to intrinsic thermal motion. therefore , it makes sense to represent the encounter area by a three - dimensional extended object rather than by a flat surface region .indeed , the results of our studies show that our approach is capable of reproducing encounter rates in a reasonable order of magnitude , qualitatively reproducing generally expected features . in an in - depth investigation of the dependency of the encounter rate and the patch radius itis shown , that the choice of the geometry of the reactive area is at least as crucial for the results as definition of the model interactions and its parameters . 
in principal, one could think of the patch radius as a valuable tuning parameter to fit experimental results and the encounter kinetics in the computational model .our approach makes it possible to observe general features of the encounter process .in particular , we dissect the pathway to the encounter complex in several levels of alignment between our model proteins .as we observe the full trajectory to encounter in our simulations , we are able to extract the number of unsuccessful contacts between the proteins until they finally reach a reasonably aligned state to bind .the distribution of is again in all cases well described by a single exponential decay .this behavior is not obvious as the probability of success for one contact is depending on several aspects of diffusion in a complex manner .first , the closer the rotational alignment at the beginning of the contact is to the encounter state , the higher is the probability of success .second , this initial alignment is also coupled to the last contact if the time in between is small .finally , longer contact resting times also increase the probability of encounter .it is interesting , that all these effects still lead to a simple poisson distribution of the number of contacts when averaging over the initial conditions as it is done in this work .furthermore , we find that the distributions of these resting and return times can not be described by a poisson process , but are consistent with the expectations for a spatially constricted random walk in three dimensions .we find that the particular mean fpt to encounter is in most of the cases directly proportional to the number of unsuccessful contacts .this seems to be a very fundamental qualitative feature irrespective of the details of the proteins and the applied model .however , for cytc : ccp the behavior is qualitatively different , which is consistent with earlier studies of this highly electrostatically steered complex . in summary , herewe have presented a langevin equation approach to protein - protein association which in principle allows us to combine long simulation times and large systems with molecular details of the involved proteins .this first study has focused on bimolecular reactions and has proven that this approach is capable of reproducing known association rates with a reasonable dependance on the main parameters involved .one special strength of our approach is that it allows us to address the details of the binding pathway , for example by measuring the statistics of unsuccessful contacts before encounter . in the future, this approach will be extended to large protein complexes of special biological interest .we thank christian korn for many helpful discussions .this work was supported by the volkswagen foundation through grants i/80469 and i/80470 to v.h . and, respectively .j.s . and u.s.s. are supported by the center for modelling and simulation in the biosciences ( bioms ) at heidelberg and by the karlsruhe institute of technology ( kit ) through its concept for the future ..[parameter_table ] protein structures and parameters used in the study .the coordinates of the patches ( ) and the dipole moment ( ) are given relative to the center of mass . [ cols="^,^,^,^,^,^,^,^",options="header " , ]
we study the formation of protein - protein encounter complexes with a langevin equation approach that considers direct , steric and thermal forces . as three model systems with distinctly different properties we consider the pairs barnase : barstar , cytochrome c : cytochrome c peroxidase and p53:mdm2 . in each case , proteins are modeled either as spherical particles , as dipolar spheres or as a collection of several small beads with one dipole . spherical reaction patches are placed on the model proteins according to the known experimental structures of the protein complexes . in the computer simulations , concentration is varied by changing box size . encounter is defined as overlap of the reaction patches and the corresponding first passage times are recorded together with the number of unsuccessful contacts before encounter . we find that encounter frequency scales linearly with protein concentration , thus proving that our microscopic model results in a well - defined macroscopic encounter rate . the number of unsuccessful contacts before encounter decreases with increasing encounter rate and ranges from 20 to 9000 . for all three models , encounter rates are obtained within one order of magnitude of the experimentally measured association rates . electrostatic steering enhances association up to 50-fold . if diffusional encounter is dominant ( p53:mdm2 ) or similarly important as electrostatic steering ( barnase : barstar ) , then encounter rate decreases with decreasing patch radius . more detailed modeling of protein shapes decreases encounter rates by 5 to 95 percent . our study shows how generic principles of protein - protein association are modulated by molecular features of the systems under consideration . moreover it allows us to assess different coarse - graining strategies for the future modelling of the dynamics of large protein complexes .
when a sediment bed is exposed to a fluid flow , particles can be entrained and transported by different mechanisms . the transport regime depends primarily on the inertial characteristics of the particles and the fluid . sufficiently light particles are transported as suspended load , in which their weight is supported by the turbulence of the fluid . in contrast , particles which are sufficiently heavy are transported along the surface . this type of transport incorporates two main transport modes , namely _ saltation _ , which consists of sediment grains jumping downstream close to the ground in nearly ballistic trajectories , and _ creep _ , which consists of particles rolling and sliding along the sediment bed . sediment transport along the surface is responsible for a wide range of geophysical phenomena , including surface erosion , dust aerosol emission , and the formation and migration of dunes . therefore , the quantitative understanding of sediment transport may improve our understanding of the evolution of river beds , the emission of atmospheric dust and the dynamics of planetary sand landscapes . once sediment transport begins , the fluid loses momentum to accelerate the particles as a consequence of newton s second law ( the transport - flow feedback , e.g. ) . therefore , the sediment flux , , which is the average momentum of grains transported per unit soil area , is limited by an equilibrium value , the saturated flux , . although previous studies focused on this equilibrium flux ( e.g. ) , the dynamics of sediment landscapes is controlled by situations _ out - of - equilibrium _ . in particular , the sediment flux needs a spatial lag , the so - called saturation length , to adapt to a change in flow conditions . this saturation length introduces the main relevant length - scale in the dynamics of sediment landscapes under water and on the surface of planetary bodies . for instance , the saturation length controls the minimal size of crescent - shaped ( barchan ) dunes moving on top of bedrock , as well as the wavelength of the smallest dunes ( the so - called `` elementary dunes '' ) emerging on top of a sediment bed . although important insights were gained recently from experimental studies , the physics behind the saturation length , and thus the dependence of on flow and sediment attributes , is still insufficiently understood . one of the most important deficiencies in our understanding of the dependence of on flow and sediment conditions is that it remains uncertain which mechanisms are most important in determining the saturation of the sediment mass flux . on the one hand , it has been suggested that the acceleration of transported particles due to fluid drag is the dominant relaxation mechanism . this model neglects the entrainment of sediment bed particles due to fluid lift , as well as the entrainment of sediment bed particles and the deceleration of transported particles due to collisions of transported particles with the sediment bed ( grain - bed collisions ) . on the other hand , the entrainment of sediment bed particles by fluid lift and grain - bed collisions has also been proposed to provide the dominant relaxation mechanisms . however , these models neglect momentum changes of transported particles , which is exactly the opposite of the situation in the models of refs .
.moreover , to our knowledge , all previous models neglected a further relaxation mechanism of the sediment flux , namely , the relaxation of the fluid speed in the transport layer ( ) due to the saturation of the transport - flow feedback . to address this situation and develop an accurate expression for that can be used in future studies, this paper presents a model for flux saturation in sediment transport which , for the first time , accounts for _ all _ aforementioned mechanisms for the saturation of sediment flux .in particular , our theoretical model accounts for the coupling between the entrainment of sediment bed particles due to fluid lift and grain - bed collisions , the acceleration and deceleration of transported particles due to fluid forces and grain - bed collisions , and the saturation of due to the saturation of the transport - flow feedback . our analytical model allows us to derive a closed expression for which can be applied to different physical environments .our model suggests that grain - bed collisions , which have been neglected in all previous studies , have an important influence on the saturation length , .moreover , our model suggests that the relaxation of plays an important role for sediment transport in dilute fluids ( aeolian transport ) , whereas it plays a negligible role for sediment transport in dense fluids ( subaqueous transport ) . in a recent letter ( see ref . ) , we presented our equation for and showed that it is consistent with measurements of in both subaqueous and aeolian sediment transport regimes over at least five orders of magnitude in the ratio between fluid and particle density .in the present paper , we derive the analytical model presented in ref . in more detail and study the properties of the equations governing the behavior of the saturation length in both transport regimes . since ref . includes a detailed comparison of our model against measurements , no model comparisons against measurements are included here .this paper is organized as follows .sections [ derivation ] and [ obtaining_ls ] discuss the analytical treatment of flux saturation . in the former section , we derive the mass and momentum conservation equations for the layer of sediments in transport , as well as the differential equation of the sediment flux in terms of the mass density and average velocity of the transported particles .these equations allow us to obtain a mathematical expression for the saturation length of sediment transport , which is presented in section [ obtaining_ls ] .this section also discusses how to determine the quantities appearing in the saturation length equation , which encode the attributes of sediment and flow , as well as the characteristics of sediment entrainment and particle - fluid interactions . in section [ discussion ]we use our theoretical expression to perform a study of the saturation length as a function of the relevant physical quantities controlling saturation of sediment flux .conclusions are presented in section [ conclusions ] .the downstream evolution of the sediment flux , , towards its equilibrium value , , can be described by the following equation , which is identical to eq .( 1 ) of ref . , which is valid in the regime where is close to saturation ( ) .the length - scale , the saturation length , characterizes the response of the sediment flux due to a small change in flow conditions around equilibrium . 
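as a simple illustration of this relaxation equation , the sketch below integrates the downstream evolution of the flux numerically and confirms that the saturation length acts as the e - folding distance of the relaxation towards the saturated value ; all numerical values are arbitrary illustrative choices and are not taken from the text .

```python
# Minimal sketch: downstream relaxation of the sediment flux q(x) towards its
# saturated value q_s, dq/dx = -(q - q_s) / L_s, illustrating that L_s is the
# e-folding distance of the relaxation.  All numbers are illustrative only.
import numpy as np

q_s = 1.0      # saturated flux (arbitrary units)
L_s = 0.7      # saturation length (arbitrary units)
q0 = 0.2       # flux at x = 0 after a sudden change in flow conditions

x = np.linspace(0.0, 5.0 * L_s, 501)
dx = x[1] - x[0]

# explicit Euler integration of the linear relaxation equation
q_num = np.empty_like(x)
q_num[0] = q0
for i in range(len(x) - 1):
    q_num[i + 1] = q_num[i] - dx * (q_num[i] - q_s) / L_s

# closed-form solution for comparison
q_exact = q_s + (q0 - q_s) * np.exp(-x / L_s)

print("max |numerical - exact| :", np.max(np.abs(q_num - q_exact)))
print("q(L_s)/q_s after one e-folding:", q_exact[np.argmin(np.abs(x - L_s))] / q_s)
```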
since , can be written as the negative inverse first - order taylor coefficient of , in this section , we derive the equations that describe the downstream evolution of the sediment mass flux , , towards its equilibrium value , , in sediment transport under turbulent boundary layer flow .the mass flux is defined as , where is the average transported mass per unit soil area and is the average particle velocity .therefore , the saturation of is dictated by the mechanisms governing the relaxation of and towards their saturated values , and , respectively .the quantitative description of the saturation processes of and requires incorporation of all relevant forces acting on the sediment particles in transport , namely drag , gravity , buoyancy , collision forces between particles in transport ( `` mid - fluid collisions '' ) and friction due to collisions between particles and the bed . indeed , moraga et al . found experimentally that lift forces due to shear flow acting on a particle surrounded by fluid which have often been assumed to be significant during transport ( e.g. ) are approximately an order of magnitude smaller than the drag force and can be , thus , neglected in our calculations . on the other hand ,the so - called added mass force exerted by accelerated or decelerated particles to dislodge the fluid as they move through it leads to enhanced inertia of the particles in transport .this added mass effect plays a relevant role for the motion of the particles , and thus we also take it into account .our analytical treatment applies to situations where the fluid velocity is not too high such that only transport through saltation or creep ( the main transport modes of particles along the surface ) is considered .transport through suspension or dense transport regimes , such as sheet flow , are , thus , not considered . in section [ definitions ]we first present the definitions and notations used in our study . afterwards in section [ mass_and_momentum ] , we present the local conservation equations , from which we obtain the saturation equations , presented in section [ sec : saturation ] .we use a three - dimensional coordinate system , where denotes the direction of fluid motion , is the lateral direction and is the vertical direction .the top of the sediment bed , which corresponds to the height at which the local particle concentration equals approximately of the particle concentration deep within the bed , is located at the vertical position .here we use the approximation that the slopes of bedforms are usually very small ( ) .moreover , since the time - scale of the relaxation of the sediment flux due to changes in the flow is typically much smaller than the time - scale of the evolution of bedforms ( dunes and ripples ) ( ) , we can adopt the approximation that the transport over the sediment landscape is in the steady - state , i.e. , where denotes time .furthermore , since our description relates to the saturation of the mass flux due to changes in the downstream direction , we consider a laterally invariant sediment bed ( ) .we consider a certain microscopic configuration of particles ( including the limit ) labeled by an upper index whose centers of mass are located at .each particle has a mass , a velocity , and is subjected to a force resulting in an acceleration .these forces include both external body forces ( ) and interparticle contact forces . in general these forcesare non - conservative .the interparticle contact forces occur for all pairs of contacting particles . 
we therefore denote them by , which is the contact force applied by the particle with the number on the particle with the number .we note that if these particles are not in contact , and we define ( no self - interaction ) .hence , the total acceleration of particle can be written as , we define , the density of a certain microscopic configuration of particles at time , as it describes the number of particles , , with positions , velocities , and masses in infinitesimal intervals around , , and , respectively , at time , moreover , determines the mass density , while the mass - weighted average of a quantity is defined through the equation , in eqs .( [ def_rho ] ) and ( [ def_av ] ) denotes the time average , using these definitions , we can calculate the total transported mass per unit soil area ( ) , the total mass flux ( ) , and the average particle velocity ( ) from the expressions , respectively , where the overbar denotes the mass - weighted height average , in this section , the local average mass and momentum conservation equations for our particle system are presented using the notations and definitions introduced in the last section .the derivation of these conservation equations can be found in babic .for our system ( ) , these equations are , where is given by , with . is the contact force contribution to the particle stress tensor since its gradient compensates the contact force density , it describes the momentum flux due to collisions between particles .in fact , even though the total momentum is conserved in collisions , the finite size of the particles and thus lead to a shift of the location of this momentum .we note that this shift of the momentum location in collisions has been neglected in our model derivation in ref . ( dilute approximation ) . as a consequence , eq .( [ mombx ] ) is a generalization of eq .( 2 ) of ref . , such that these two equations are equal if the contributions from in eq .( [ mombx ] ) are neglected .the distribution appearing in eq .( [ pij ] ) is the mathematical expression for a delta line between and .integrating this distribution over an arbitrary domain yields the fraction of the line contained in this domain .the inhomogeneities introduced by this and the other delta distributions indirectly appearing in quantities of the type are smoothed out by the time averaging procedure , which is also incorporated in the definition of .the results of the last section can be now used in order to derive the saturation equations for the average transported mass per unit soil area ( ) and the average particle velocity ( ) , used to define the sediment flux , . to do so, we first integrate eqs .( [ massb])-([mombz ] ) over the height .this calculation is the subject of section [ height_integration ] .thereafter , in section [ momentum_balance ] , we combine the resulting horizontal and vertical momentum balances by means of a coulomb friction law and rewrite each term of the horizontal momentum balance equation in terms of and .we then present the mass and horizontal momentum balance equations in their final form in section [ final_form ] .since our description relates to the saturation of the mass flux due to changes in the downstream direction ( ) , we integrate eqs .( [ massb])-([mombz ] ) over height ( ) . by using eqs .( [ m])-([heightav ] ) and by further taking into account and , this height - integration yields , we note that eq .( [ mombalance3x_new ] ) corresponds to eq .( 2 ) of ref . 
if the contributions from in eq .( [ mombalance3x_new ] ) are neglected ( dilute approximation ) . _ coulomb friction law _ the terms and are the vertical fluxes of horizontal and vertical momentum component per unit volume at the location of the sediment bed , respectively , whereby the velocity terms are the contributions due to particle motion , and and are the contributions due to collisional momentum transfer .in other words , these two terms describe the total amounts of horizontal and vertical momentum , respectively , per unit soil area that enter the transport layer per unit time from the sediment bed .these momentum changes per unit area and time of the transport layer can be seen as being caused by an effective force per unit area ( ) which the sediment bed applies on the transport layer , bagnold was the first who proposed that these force components are related to each other through a coulomb friction law , independent of whether the transport regime is subaqueous or aeolian .that is , where is the coulomb friction coefficient .models for saturated sediment transport using this coulomb friction law have been successfully validated through comparison with experiments , thus giving support to the coulomb friction law adapted to sediment transport ( e.g. ) .additional support comes from numerical simulations of saturated ( ) granular couette flows under gravity .zhang and campbell found for such flows that the interface between the particle bed and the transport layer is characterized by a constant ratio between the and components of the particle stress tensor ( ) , .since both couette flow and sediment transport along the surface are granular shear flows , it seems reasonable that also the interface between the sediment bed and the transport layer for saturated sediment transport along the surface is characterized by such a law .indeed , and become equal to and , respectively , if , which is fulfilled for saturated sediment transport since implies ( cf .( [ massb ] ) ) , which in turn implies due to vanishing sufficiently deep within the sediment bed .finally , it seems reasonable that the coulomb friction law should be also approximately valid in situations weakly out - of - equilibrium , provided the sediment flux is close to its saturated value . assuming the validity of eq .( [ coulombassumption ] ) , we can combine eqs .( [ mombalance3x_new ] ) and ( [ mombalance3z ] ) to , where is a correlation factor given by , _ the correlation factor _ since we are only interested in situations close to equilibrium , and since at equilibrium ( see discussion in the previous paragraph ) , it follows ( is of order unity ) .moreover , for sufficiently dilute granular flows , the momentum transfer in collisions is small and thus . while sediment transport in the aeolian regime is certainly dilute enough to ensure this condition for most of the transport layer , sediment transport in the subaqueous regimemight not fulfill it because a large part of the transport occurs in rather dense regions of the transport layer .however , using the code of durn et al . , we confirmed that also for subaqueous transport .hence , can be approximated as , we confirmed , using the code of durn et al . 
, that for transport in equilibrium ( ) is nearly constant with the fluid shear velocity , , in both sediment transport regimes .hence , it seems reasonable that changes of with during the saturation process of the sediment flux close to equilibrium can be regarded as negligible compared to the corresponding changes of or with . in this manner, we can consider the value of associated with sediment transport in equilibrium , independent of the downstream position and of the fluid shear velocity .this leads to the following approximation for , where is the equilibrium value of .this equilibrium value of can be determined from experiments as we will discuss in section [ cv_aeolian ] and [ cv_subaqueous ] .now we express both terms on the right - hand - side of the momentum conservation equation , i.e. eq . ( [ mombalance3x ] ) , as functions of and in order to obtain a differential equation describing the saturation of and . the first term on the right - hand - side of eq .( [ mombalance3x ] ) can be written as , where is the ratio between sediment and fluid density ; is defined as , which is the difference between the average fluid velocity ( ) and the average horizontal particle velocity ( ) , where is the fluid velocity profile ; is the drag coefficient , which is a function of , and accounts for the added mass force through , the added mass force arises when the particle is accelerated relative to the surrounding fluid , because the fluid layer immediately surrounding the particle will also be accelerated . as denoted by eq .( [ ca ] ) , this `` added mass '' of the fluid layer amounts to approximately one half the weight of the fluid displaced by the particle . while the added mass correction is significant for transport in a dense medium such as water , it is negligibly small for sediment transport in the aeolian regime since for large .thus , this correction is usually disregarded in studies of aeolian sediment transport ( e.g. refs .we note that eq .( [ drag ] ) is not valid for dense transport regimes like sheet flow , in which the drag coefficient displays a strong dependence on the concentration profile of transported particles . in this manner ,( [ drag ] ) can be used in the present study because our analytical treatment considers the two main modes of transport , namely saltation and creep . the second term on the right - hand - side of eq .( [ mombalance3x ] ) , , can be taken as approximately equal to the buoyancy - reduced gravity force corrected by the added mass force .it can be written as , where is the buoyancy - reduced value of the gravity constant , . by substituting eqs .( [ drag ] ) and ( [ friction ] ) into eq .( [ mombalance3x ] ) using ( cf .( [ cvdef2 ] ) ) , we obtain the momentum conservation equation in terms of and , whereas eq .( [ massb2 ] ) gives the mass balance .therefore , the mass and momentum conservation equations in their final form read , we note that eq .( [ momconserv ] ) is identical to eq .( 4 ) of ref . if the definition of ( eq .( [ ca ] ) ) is inserted .we further note that eq .( [ momconserv ] ) can be used to obtain the saturated value of the velocity difference . 
by using ( saturated sediment transport ) ,we obtain , which can be numerically solved for .in this section , we use the results presented in last section in order to derive a closed expression for the saturation length as a function of the attributes of sediment and flow , both for aeolian and subaqueous regimes .the derivation of the saturation length equation is the subject of section [ derivation_of_ls ] . in section [ equations_for_ls ]we present and discuss the resulting equation for the saturation length . in sections [ ls_aeolian ] and [ ls_subaqueous ]we show how the saturation length equation can be applied to compute in the aeolian and subaqueous regimes , respectively .close to equilibrium , and saturate simultaneously following a certain function , where .this function is linked to the characteristics of the erosion and deposition of bed material and thus to the unknown shape of as a function of and in eq .( [ massconserv ] ) . moreover , also the mean fluid velocity will saturate following a certain function close to the saturated regime , since is influenced by the feedback of the sediment transport on the fluid flow .therefore , eq . ( [ vr ] ) becomes , by taking into account that both and are functions of , and by using eq .( [ vrs ] ) , we can rewrite the momentum balance eq .( [ momconserv ] ) in such a way to obtain the following expression for , , \label{omegadef}\ ] ] where the functions and are given by the equations , furthermore , since , we obtain , in this manner , using eq .( [ omegadef ] ) , can be written as , using eqs .( [ dv_dq ] ) and ( [ gamma_v ] ) , we can write eq .( [ lsdef2 ] ) for the saturation length as , where we further used that . calculating through eq .( [ lv ] ) requires obtaining an expression for , where is defined in eq .( [ omegadef ] ) .however , incorporates , through the function defined in eq .( [ b_v ] ) , a dependence on the equilibrium value of the relative velocity , i.e. . in order to obtain an expression for , we solve eq .( [ vrs ] ) for using the drag law of julien for natural sediment , which writes , whereas we find that the specific choice of the drag law has only a small effect on the value of obtained from our calculations . by substituting the expression for , obtained with eq .( [ dragjulien ] ) , into eq .( [ vrs ] ) , and solving this equation for , yields , this equation is , then , used to compute through eq .( [ b_v ] ) , whereupon can be obtained using eqs .( [ vrv ] ) and ( [ omegadef ] ) . the resulting expression for the saturation length , computed with eq .( [ lv ] ) , reads , where the quantity , describes the relative change of with close to the saturated regime , while is given by , and thus encodes information about the drag law . in order to obtain our final expression for ,we need to express .we note that , for the saturated state , the mean flow velocity is dominantly a function of the shear velocity and the shear velocity at the bed , that is , the shear velocity at the bed , , is reduced due to the feedback of the sediment transport on the fluid flow , where is the fluid shear stress profile. we can express using the inner turbulent boundary layer approximation of the navier - stokes equations .these equations approximate the navier - stokes equations for heights much smaller than the height of the boundary layer , which is the region in which we are interested .george derived the inner boundary layer approximation of the navier - stokes equations in the absence of an external body force . 
in the presence of an external body force , these equations must be slightly modified by adding the body force term in the momentum equations . the horizontal momentum equation thus writes , where is the horizontal body force per unit volume acting on the flow at each height . is the drag force per unit volume which the particles apply on the fluid . in other words , is the reaction force per unit volume of the horizontal force per unit volume which the fluid applies on the particles . that is , we then substitute eq . ( [ fxbody ] ) into eq . ( [ boundarylayer ] ) and integrate this equation from to , where is a height which incorporates the entire transport layer , thereby using , and . this leads to , by substituting this equation into eq . ( [ ub ] ) and using eq . ( [ drag ] ) , we obtain the following equation for , since does not depend on , we can now express as , where is the value of in equilibrium , and the quantity is given by the equation , where we used , . we note that describes the relative change of with close to the saturated regime . moreover , the derivative can be calculated using eq . ( [ ubv ] ) with , , , and , which follows from . we thus obtain , . this expression is substituted into eq . ( [ uvs ] ) , whereas the resulting equation is then solved for . this is the term involving which we need to compute in eq . ( [ lvfinal ] ) . in this manner , we finally obtain a closed expression for the saturation length , which we present and discuss in the next subsection . the equation for the saturation length , , which is identical to eq . ( 5 ) of ref . if the definition of ( eq . ( [ ca ] ) ) is inserted , reads , where and are calculated using eqs . ( [ vrs ] ) and ( [ cd_fx ] ) , respectively . in addition , the last factor on the right - hand - side of eq . ( [ lvfinal2 ] ) is given by the equation , \[ \dfrac{1+\left[\dfrac{c_u\cdot(v_s+v_{rs})}{2fv_{rs}}\right]\cdot\left(\dfrac{u_{\ast}^2}{u_{bs}^2}-1\right)}{1+\left[\dfrac{c_uc_m\cdot(v_s+v_{rs})}{2v_s}\right]\cdot\left(\dfrac{u_{\ast}^2}{u_{bs}^2}-1\right)}\approxeq\dfrac{1+\left[\dfrac{c_u\cdot(v_s+v_{rs})}{2fv_{rs}}\right]\cdot\left(\dfrac{u_{\ast}^2}{u_{{\mathrm{t}}}^2}-1\right)}{1+\dfrac{c_uc_m\cdot(v_s+v_{rs})}{2v_s}\cdot\left(\dfrac{u_{\ast}^2}{u_{{\mathrm{t}}}^2}-1\right)} , \label{lsfluid}\] and thus encodes information about the saturation of the transport - flow feedback . in fact , if the saturation of the transport - flow feedback is neglected ( ) , it follows and thus . we note that eq . ( [ lsfluid ] ) is identical to eq . ( 9 ) of ref . , which we obtained for aeolian transport , for ( see section [ ls_aeolian ] ) . moreover , for transport in the subaqueous regime , as shown in section [ ls_subaqueous ] . therefore , in this regime , eq . ( [ lsfluid ] ) gives , which is the result we obtained for subaqueous transport in ref . in fact , using the corresponding values for and and inserting eq . ( [ ca ] ) , eq . ( [ lvfinal2 ] ) becomes equal to , for subaqueous transport and , for aeolian transport , where we further used for aeolian transport . eqs . ( [ lvfinal2subq ] ) and ( [ lvfinal2aeol ] ) are identical to eqs . ( 8 ) and ( 10 ) of ref . , respectively . in eq . ( [ lsfluid ] ) , we assumed that the saturated shear velocity at the bed ( ) and the bed shear stress in equilibrium ( ) approximately equal and , respectively , i.e. the minimal shear velocity and the minimal shear stress at which sediment transport can be sustained , . in the following , we present arguments which justify this assumption .
for aeolian sediment transport , eqs .( [ tauftaut ] ) and ( [ ubut ] ) are known as `` owen s hypothesis '' .these equations are known to be approximately valid when is close to the threshold ( e.g. figure 2.10 in ref .however , as increases , actually decreases away from .nonetheless , the approximation which we use in eq .( [ lsfluid ] ) is reasonable even for large shear velocities , since , when is significantly larger than ( which means for earth conditions with ) , we have that , which is nearly independent of . using this approximation with , eq .( [ lvfinal2aeol ] ) becomes , which is identical to eq .( 39 ) of the supplementary material of ref . . for subaqueous sediment transport , eqs .( [ tauftaut ] ) and ( [ ubut ] ) are known as `` bagnold s hypothesis '' .this hypothesis is widely used in the literature ( e.g. ) , although some studies have questioned it ( e.g. ) .however , there is evidence from recent studies that this hypothesis is approximately fulfilled . in order to review this evidence , we use eqs .( [ drag ] ) , ( [ vrs ] ) , and ( [ tautauf ] ) to express as , . \label{ms}\end{aligned}\ ] ] to our knowledge , the only study in which has been measured as a function of is the recent study of lajeunesse et al . , who obtained , using video - imaging techniques , that , .\label{msshape}\end{aligned}\ ] ] therefore , if we assume as in eq .( [ tauftaut ] ) , then , by comparing eqs .( [ ms ] ) and ( [ msshape ] ) with valid for subaqueous sediment transport ( cf .( [ ca ] ) with ) , we obtain and thus .indeed , values within the range between and and thus consistent with the value of estimated above have been reported from measurements of particle trajectories in the subaqueous sediment transport .further evidence that bagnold s hypothesis is approximately correct was provided by the recent numerical study of durn et al .these authors simulated the dynamics of both the sediment bed and of transported particles at the single particle scale .durn et al . found that , which is similar to eq .( [ msshape ] ) and can satisfactorily explain all simulated data with a single proportionality constant. moreover , the authors also found that reduces to at a height very close to the top of the bed , .given these separate lines of evidence , we believe that bagnold s hypothesis is a reasonable approximation .moreover , we emphasize that our analysis for subaqueous sediment transport is not affected by this approximation , since we estimate in section [ cu_subaqueous ] that and thus regardless of the value of . in summary ,the saturation length of sediment flux , , can be calculated using eq .( [ lvfinal2 ] ) , where and are given by eqs .( [ vrs2 ] ) and ( [ cd_fx ] ) , respectively , while eq .( ) is used to compute the term , , which appears on the right - hand - side of eq .( [ lvfinal2 ] ) .these equations include certain quantities which depend on the characteristics of the sediment transport and thus on the transport regime .these quantities are , , , , the saturated particle velocity , and the threshold shear velocity , .we estimate these quantities for the aeolian regime of sediment transport in section [ ls_aeolian ] and for the subaqueous regime in section [ ls_subaqueous ] . in this section, we estimate the parameters , , , , and express the saturated particle velocity and the threshold shear velocity for aeolian sediment transport . 
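before estimating the individual parameters , a brief numerical sketch of one step of the computation summarized above — solving the saturated force balance for the relative velocity with a reynolds - number dependent drag law — may be helpful . the balance written in the code is a schematic stand - in ( it omits the added - mass correction discussed in the text ) and the drag law is a commonly quoted form of julien s law for natural sediment , so the parameter values and results are purely illustrative and not those of eq . ( [ vrs2 ] ) .

```python
# Sketch: saturated relative velocity from a schematic drag-friction balance
#     (3/4) * C_d(Re) * v_r**2 / (s * gt * d) = mu
# with a Julien-type drag law C_d = 24/Re + 1.5 (assumed stand-in).
import numpy as np
from scipy.optimize import brentq

# illustrative aeolian-like parameters (not taken from the text)
d = 250e-6        # grain diameter [m]
s = 2208.0        # sediment-to-fluid density ratio (quartz in air)
g = 9.81          # gravity [m/s^2]
gt = (1.0 - 1.0 / s) * g   # buoyancy-reduced gravity
nu = 1.5e-5       # kinematic viscosity of air [m^2/s]
mu = 0.6          # Coulomb friction coefficient (assumed value)

def drag_coefficient(v_r):
    re = max(v_r * d / nu, 1e-12)      # particle Reynolds number
    return 24.0 / re + 1.5             # Julien-type drag law (assumed)

def force_balance(v_r):
    # drag acceleration minus friction deceleration (schematic form)
    return 0.75 * drag_coefficient(v_r) * v_r**2 / (s * gt * d) - mu

v_rs = brentq(force_balance, 1e-6, 20.0)
print(f"saturated relative velocity v_rs ~ {v_rs:.3f} m/s (illustrative)")
```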
note that we estimate these parameters only roughly , which is sufficient in the light of the large scatter ( factor ) of the experimental data . in this section ,we reiterate some of the results we obtained in section a1 of the supplementary material of ref .the parameter ( eq .( [ cvdef ] ) ) occurs as a prefactor in eq .( [ lvfinal2 ] ) , and thus determines the magnitude of . since , conclude that must be larger than unity , that is , however , experiments on aeolian sediment transport show that the change of with is small close to , where most of the transport takes place .consequently , the value of must be close to unity .we estimate from experiments on aeolian sediment transport .creyssels et al . measured an exponentially decaying particle concentration profile , and a linearly increasing particle velocity profile , where mm , m/s , and were not varying much with . using these measurements , we obtain , in order to obtain , it remains to estimate . in order to do so , we use measurements of greeley et al .13 of ref . ) , who reported a histogram of the horizontal particle velocity of the particles located at a height , from which we obtain , since the shape of the distribution of the horizontal particle velocity does not vary much with the height , we thus estimate as , in contrast to , the parameter ( eq . ( [ cudef ] ) ) significantly influences the functional shape of as a function of . characterizes the significance of the transport - flow feedback for the saturation of the sediment flux .for instance , means that the transport - flow feedback does not affect the saturation process since the flow is already saturated ( from eq .( [ cudef ] ) ) . in order to estimate , we need to know how the mean fluid speed behaves as a function of the feedback - reduced bed shear velocity ( see eq .( [ cudef ] ) ) . for aeolian sediment transport ,the fluid speed is strongly suppressed by the reaction drag forces which the transported grains apply on the wind .the feedback is , in fact , so strong that the mean fluid speed in the transport layer changes only weakly with . in leading - order approximation ,the mean fluid speed is thus proportional to , we thus obtain , from eq .( [ cudef ] ) , we note that a value of close to unity is obtained even if the more complicated dependence of on , obtained from modeling saturated sediment flux , is taken into account .( [ eq : cu_value ] ) is approximately valid for . beyond this range, turbulence - induced fluctuations of the shear velocity , neglected in the present work , should affect the value of .the parameter , given by eq .( [ cmdef ] ) ) , occurs as a prefactor in eq .( [ lvfinal2 ] ) and it further affects the functional shape of since itself affects the feedback term , . encodes the relative importance of the respective relaxation processes and for the saturation of the sediment flux .there are two extreme cases : and .the case means that the saturation of towards is much faster than the saturation of towards , while the opposite situation corresponds to the case . in order to estimate for the aeolian regime of sediment transport ,we first estimate how the function behaves close to the saturated regime . for this purpose , we make use of the fact that , for aeolian sediment transport , the dominant mechanism which brings grains of the sediment bed into motion is the ejection of bed grains due to impacts of already transported grains , a mechanism known as `` splash '' ( see e.g. ) . 
it is known that ejection of new grains is mainly due to the impacts of the fastest transported particles , whereas the impacts of slow particles have a negligible effect on the splash process .indeed , the speed of a fast impacting grain mainly determines the number of ejected grains , but not their ejection velocities , as found in experiments .the ejected particles are typically slow compared to the rebound speed of the impacting particle . in other words , the impact of a fast grain naturally results in two species of particles : a single ( fast ) rebounding particle and many ejected ( slow ) particles . using numerical simulations of splash and particle trajectories, andreotti could observe these two distinct species in the characteristics of transported particles .the author noted that the slow species ( `` reptons '' ) accounts for the majority of transported mass per unit soil area ( ) .furthermore , the author s analysis suggested that the impact flux of reptons , and thus , in good approximation , the total transported mass , adjusts to changes of the impact flux of the fast species ( `` saltons '' ) within a distance much shorter than .therefore , it seems reasonable to treat as locally equilibrated with respect to the impact flux of saltons .the locally equilibrated value of is proportional to the number of ejected particles per impact , which in turn is proportional to the impact speed of saltons and thus approximately proportional to .a rough estimate of the function is therefore , which yields , the coulomb friction coefficient , , occurs as a prefactor in eq .( [ lvfinal2 ] ) , and also changes the functional shape of through eq .( [ vrs2 ] ) . can be determined indirectly from measurements of the saturated mass of transported particles as a function of the shear velocity , which fulfills the equation , , \label{mudef}\end{aligned}\ ] ] note that this equation is eq .( [ ms ] ) with ( eq .( [ tauftaut ] ) ) . 
the value, was found in a previous work through determining indirectly both from experiments as mentioned above and from numerical simulations of aeolian sediment transport in equilibrium .the saturated particle velocity is dominantly controlling the dependence of on in eq .( [ lvfinal2 ] ) .since the dependence of on is rather weak for aeolian sediment transport , the saturation length will not change much with .here we use an expression for which has been obtained in a recent work , since the values of saturated sediment flux obtained using this equation produced excellent quantitative agreement with measurements .the expression for reads , where and are given by the equations , in these equations , is the exponential integral function , is the von krmn constant , and is a dimensional parameter encoding the influence of cohesion , while and are empirically determined parameters .the saturated particle velocity for transport in the aeolian regime can be obtained by iteratively solving eq .( [ vsaeolian ] ) for and using the expressions for and given by eqs .( [ eq : v_t ] ) and ( [ f ] ) , respectively .we calculate the threshold shear velocity by using the following equation , which has been obtained from an analytical model for aeolian sediment transport in equilibrium , where is given by the following equation , in the equation above , is an empirically determined parameter , while , which is the surface roughness of the quiescent sediment bed , is given by the equation , , \nonumber\end{aligned}\ ] ] where .we note that the ratio between ( which is the threshold for sustained transport ) and the fluid threshold required to initiate transport in the aeolian regime depends strongly on the environmental conditions .( [ ut ] ) yields for earth conditions , which is in agreement with measurements . however , the ratio under martian conditions can be as small as , as also found from numerical simulations .indeed , eq . ( [ ut ] ) , which was obtained from the same theoretical work leading to eq .( [ vsaeolian ] ) , has been validated by comparing its prediction with outcomes of numerical simulations under a wide range of fluid - to - sediment density ratio and particle diameter , thereby leading to excellent quantitative agreement ( see fig .13b of ref . ) . in this section ,we provide expressions for the parameters , and , as well as for the coulomb friction coefficient , , the saturated particle velocity , , and the threshold shear velocity , , for transport in the subaqueous regime .we remark that we estimate these quantities only in a rough manner , consistent with the large scatter ( factor ) of the experimental data . in this section ,we reiterate some of the results we obtained in section a2 of the supplementary material of ref .we can estimate for transport in the subaqueous regime from measurements of the distribution of horizontal velocities in subaqueous sediment transport in equilibrium .such measurements were undertaken by lajeunesse et al . in experiments of sediment transport under water using particles of average diameter mm and relative shear velocity . in these experiments , particleswere considered as being transported if they had a velocity larger than a certain cut - off value , .the distribution of horizontal velocities for these transported particles was fitted using an exponential distribution , },\ ] ] where . 
by using this distribution, we can compute as , lajeunesse et al .did not report specific values of corresponding to specific measurements .instead they mentioned that lies within the range between and , depending on the water flow rate .since mm and ( which are the values reported for the measurement of ) correspond to intermediate values for and investigated in the experiments , we use the intermediate value as an approximate estimate for the average cut - off velocity . using this estimate for , eq.([estcv ] )yields , for transport in the subaqueous regime .in contrast to the aeolian regime , the suppression of the fluid flow due to the sediment transport in the subaqueous sediment transport is weak .the mean fluid speed is thus mainly a function of the shear velocity and the dependence of on and thus on is negligible . by neglecting this dependence , we obtain , which is consequence of eq .( [ cudef ] ) with . in order to estimate for subaqueous sediment transport, we use evidence provided by the recent numerical study of durn et al . . as mentioned before, these authors simulated the dynamics of both the transported particles and the sediment bed at the single particle scale .durn et al . found that , during flux saturation in subaqueous sediment transport , changes within a time scale which is more than one order of magnitude larger than the time scale in which changes .this observation can be mathematically expressed as , and thus , eq . ( [ dmvsdv ] ) further implies that , and thus where we used the definition of , which is given by eq .( [ cmdef ] ) .hence , we estimate as , however , we note that our model predictions are consistent with experiments even if we assume a coupling of to which is as strong as in the aeolian regime that is , even by assuming and thus increasing by a factor of as compared to the value obtained with .this means that the saturation length in the subaqueous regime is not very sensitive to the value of within the range between and ( whereas the latter value corresponds to sediment transport in the aeolian regime ) . in this section, we reiterate some of the results we obtained in section b of the supplementary material of ref .as obtained in experiments on subaqueous sediment transport in equilibrium , the average mass flux approximately follows the expression , .\end{aligned}\ ] ] by comparing this equation with eq .( [ ms ] ) with and ( see section [ equations_for_ls ] ) , we obtain , for sediment transport in the subaqueous regime . it has been verified in a large number of experimental studies , that the equilibrium particle velocity in the subaqueous regime of transport approximately follows the expression , where is a dimensionless number .we note that the above expression is consequence of the equation , , where is taken proportional to . in order to obtain for sediment transport in the subaqueous regime using eq .( [ vsvrs ] ) , we calculate using eq .( [ vrs2 ] ) and use the value , which we have obtained by comparing the prediction of eq .( [ vsvrs ] ) with measurements of as a function of from experiments on subaqueous sediment transport in equilibrium ( see fig . [ aplot ] ) ., as a function of the dimensionless shear velocity . for the symbols , the average dimensionless particle speed was obtained from measurements , while we computed using eq .( [ vrs2 ] ) with .the black solid line corresponds to the best fit to the experimental data using eq .( [ vsvrs ] ) , which yields . 
] the threshold velocity for sustained sediment transport , , in the subaqueous regime is computed by using the equation , where the threshold shields parameter is obtained through an empirical fit to the shields diagram . the resulting expression for reads , where .
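as an illustration of this threshold computation , the sketch below converts a threshold shields parameter into a threshold shear velocity . the soulsby - whitehouse parameterization of the shields curve is used as a stand - in for the empirical fit adopted here , so the resulting numbers are indicative only .

```python
# Sketch: threshold shear velocity u_t from an empirical Shields-curve fit.
# The Soulsby-Whitehouse fit below is one widely used parameterization and
# may differ from the specific fit adopted in the text.
import numpy as np

def threshold_shear_velocity(d, s=2.65, g=9.81, nu=1e-6):
    """Threshold shear velocity u_t [m/s] for grain diameter d [m] under water."""
    gt = (s - 1.0) * g                          # equals s * g~ in the notation of the text
    d_star = d * (gt / nu**2) ** (1.0 / 3.0)    # dimensionless grain size
    # Soulsby-Whitehouse fit to the Shields curve (assumed stand-in)
    theta_t = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - np.exp(-0.020 * d_star))
    return np.sqrt(theta_t * gt * d)

for d in (1e-4, 2.5e-4, 5e-4, 1e-3):            # grain diameters in metres
    print(f"d = {d*1e3:5.2f} mm  ->  u_t ~ {threshold_shear_velocity(d):.4f} m/s")
```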
the transport of sediment by a fluid along the surface is responsible for dune formation , dust entrainment and for a rich diversity of patterns on the bottom of oceans , rivers , and planetary surfaces . most previous models of sediment transport have focused on the equilibrium ( or saturated ) particle flux . however , the morphodynamics of sediment landscapes emerging due to surface transport of sediment is controlled by situations out - of - equilibrium . in particular , it is controlled by the saturation length characterizing the distance it takes for the particle flux to reach a new equilibrium after a change in flow conditions . the saturation of mass density of particles entrained into transport and the relaxation of particle and fluid velocities constitute the main relevant relaxation mechanisms leading to saturation of the sediment flux . here we present a theoretical model for sediment transport which , for the first time , accounts for both these relaxation mechanisms and for the different types of sediment entrainment prevailing under different environmental conditions . our analytical treatment allows us to derive a closed expression for the saturation length of sediment flux , which is general and can thus be applied under different physical conditions .
the pricing of contingent claims contracts in financial economics is often based on very restrictive assumptions about the time evolution of the underlying instrument . in recent years researchers have endeavored to remove some of these restrictions by proposing more realistic models which would incorporate features found in real markets . there is however a trade - off between analytic tractability and adherence to stylized facts observed in empirical financial time series . the most attractive feature of the usual black - scholes type of models is the possibility of obtaining closed , exact formulas for the premium of derivative securities and of building a risk - free replicating strategy . such qualities are amply used in financial institutions , which require fast calculation and tools to hedge risky assets . however , the geometric brownian motion assumption of the black - scholes models ignores several empirically observed features of real markets such as volatility clusters , fat tails , scaling , occurrence of crashes , etc . it is as yet unknown which stochastic process is responsible for the motion of risky assets , but physicists have taken some important steps in the right direction . in this work we implement a microscopic agent - based model of market dynamics which gives rise to a quite complex and rich behaviour and whose output consists of macroscopic quantities such as price returns . by varying its parameters the model exhibits market crashes , gaussian statistics and short ranged correlation , fat tailed returns and long range correlation . this model , the grand canonical minority game , retains the nontrivial opinion formation structure of the standard minority game and incorporates two new features . the first one is to allow two categories of agents , producers ( who use the market for exchanging goods ) and speculators ( whose aim is to profit from price fluctuations ) . the second feature is that speculators might choose not to trade , and in this sense the model is similar to the grand canonical ensemble of statistical physics , since the number of active traders is not constant . we perform an analysis of the time series generated by this model in order to classify its dynamical behaviour . we first test the data for unit roots and remove a simple kind of nonstationarity by taking differences wherever necessary . the bds statistic , which originated in the chaos literature , uses the correlation integral as the basis of a test for the hypothesis that the data is independent and identically distributed ( _ i.i.d . _ ) . we use this statistic as a model specification test by applying it to the residuals of an arima ( autoregressive integrated moving average ) process , although this might not remove all kinds of linearity . if the null hypothesis is rejected then this is an indication that the data is nonlinear . another test based on surrogate data is used to confirm the nonlinearity of the model . since the alternative hypothesis for the bds procedure is not specified , other tests have to be applied in order to determine whether the nonlinearity comes from a stochastic or a deterministic mechanism . this distinction is subtle , and here we approach the question by computing the correlation dimension and by applying two other procedures which do not require embeddings : recurrence plots and the lempel - ziv complexity ( lzc ) .
in complement to the above procedures we implement a bayesian approach called cluster weighted modelling ( cwm ) in order to find further indication of determinism .the paper is organized as follows . the market model and its trajectories are analysed in section [ sec:2 ] .the time series analysis , including all statistical tests is discussed in section [ sec:3 ] .a succinct presentation of the cwm and its results are found in section [ sec:4 ] .analysis of the results obtained and a classification of the model evolution are given in section [ sec:5 ] while in the last section we summarize the main results and comment on future work .inspired by the _el farol problem _ proposed by arthur , the so called minority game ( mg ) model introduced by challet and zhang represents a fascinating toy - model for financial market .now it is becoming a paradigm for complex adaptive systems , in which individual members or traders repeatedly compete to be in the winning group .the game consists of agents that participate in the market buying or selling some kind of asset , e.g. stocks . atany given time agent can take two possible actions , meaning buy or sell . those players whose bets fall in the minority group are the winners , i.e. , the sellers win if there is an excess of buyers , and vice versa . to determine the minority group we just consider the sign of the global action , so that if is positive the minority is the group of sellers ; in this case the majority of players expect asset prices to go up .in other words , this dynamics follows the law of demand and supply . in the end of each turn, the output is a single label , or , denoting the winning group at each time step .this output is made available to all traders , and it is the only information they can use to make decisions in subsequent turns .indeed , they store the most recent output of winners set . in this way a limited memory of length is assigned to the traders corresponding to the most recent history bit - string that traders use to make decisions for the next step . in order to decide what action to take, agents use strategies .a strategy is an object that processes the outcomes of the winning sets in the last bets and from this information it determines whether a given agent should buy or sell for the next turn .when a tie results from the strategies , buying or selling is decided by coin tossing .the memory defines possible past histories so the strategy of one agent can be viewed as a d - dimensional vector , whose elements can be 1 or -1 .the space of strategies is an hypercube of dimension , and the total number of strategies in this space is . at the beginning of the gameeach agent draws randomly a number of strategies from the space and keeps them forever . after each turn, the traders assigns one ( virtual ) point to each of the strategies which would have predicted the correct outcome . along the gamethe traders always choose the strategy with the highest score .the mg is a very simple model , capable of exhibiting complex behaviour , like phase transitions between an information - efficient phase and information - inefficient phase , by just varying a control parameter , e.g. , the ratio between information complexity and number of strategies present in the game . , , and and ( b ) sp500 data ] more realistic features are reached with the grand canonical minority game . 
in this model we define the price process in terms of excess demand , and introduce two kinds of agents . * the first kind is called producers , who go to the market only for the purpose of exchanging goods ; they have only one strategy and in this sense they behave in a deterministic way with respect to . the number of producers is . * the other kind of agents are the speculators , who go to the market to make a profit from price fluctuations . since they are endowed with at least two strategies , during the game they need to use the best one ; in this sense speculators are adaptive agents with bounded rationality . the number of speculators is . in this version of the game , the number of speculators can change at any time , because an agent may decide not to trade , in which case . since the strategy chosen by speculators is the one with the highest score , it is very important to update the scores of each strategy at each time step . the updating of the score of each strategy belonging to agent is given by the following equations , where is a threshold parameter . a sample of the trajectory generated by this model is shown in fig . [ fig : return](a ) . an important piece of information to understand the generating mechanism of the system can be obtained from the periods where no coin is tossed . the game is necessarily deterministic during this mode of evolution and we selected a rather long period of 1114 points to perform a time series analysis in this regime . this work is based upon the grand canonical mg model , henceforth called the mg model . in this section we discuss the tools employed in the time series analysis , starting with the bds statistic . we generate points for each of the benchmark data sets described in the appendix . another benchmark is the set comprising closing prices for the sp500 index from january 01 , 1965 to january 01 , 1995 , shown in fig . [ fig : return](b ) . all time series used here are unit root stationary with the exception of the financial index : this nonstationarity is removed by taking first differences . the bds test uses the correlation integral as the basis of a statistic to test whether a series of data is _ i.i.d . _ . in the chaos literature the correlation integral is part of an efficient tool to compute the fractal dimension of objects called attractors ( for a formal definition of attractor see , e.g. , ref . ) . given a sample of empirical data , the theory of state - space reconstruction requires that the -histories of the data be constructed , where is called the embedding dimension . under certain conditions it is possible to reproduce in this space the dynamics of the system for a correct choice of . the correlation integral is a function defined on the trajectories in this space and from it one can compute the correlation dimension . a simple test for determinism consists of increasing the embedding dimension and observing the occurrence of a corresponding increase in the correlation dimension . some conditions have to be met in order to apply this method , mainly stationarity and a sufficient number of data points . our results show that the increase of the correlation dimension for the mg model with respect to the embedding dimension is practically identical to that of the stochastic benchmark series , including the sp500 index . in this work we resort to better ways of analysing the occurrence of stochastic behaviour in complex time series evolution . the bds statistic will be applied to the residuals of arima processes in order to detect nonlinearity in the data .
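the correlation integral that underlies both the correlation - dimension estimate and the bds statistic can be sketched as follows ; the ar(1) series used as input is an illustrative stochastic benchmark , not one of the data sets of this work , and the embedding parameters are arbitrary choices .

```python
# Sketch of the correlation integral C(eps, m): embed the series in dimension m
# via delay coordinates and count pairs of m-histories closer than eps.
import numpy as np

def correlation_integral(x, m, eps, delay=1):
    """C(eps, m): fraction of pairs of m-histories within eps (maximum norm)."""
    n = len(x) - (m - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(m)])
    dists = np.zeros((n, n))
    for k in range(m):              # max-norm distance built component by component
        dists = np.maximum(dists, np.abs(emb[:, k, None] - emb[None, :, k]))
    iu = np.triu_indices(n, k=1)
    return np.mean(dists[iu] < eps)

rng = np.random.default_rng(1)
x = np.zeros(1000)
for t in range(1, len(x)):          # AR(1) benchmark series
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

eps = 0.5 * np.std(x)
for m in (1, 2, 3, 4, 5):
    print(f"m = {m}:  C(eps, m) = {correlation_integral(x, m, eps):.4f}")
```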
used in this way , the bds statistic serves as a specification test . the asymptotic distribution of the statistic under the null of pure whiteness is the standard normal distribution . the alternative hypothesis is not specified . the code implemented here is taken from ref . . from table [ table1 ] , the null hypothesis for the mg model and for the sp500 index is rejected more strongly than for the nonlinear stochastic model . in the surrogate data analysis , a null hypothesis is tested under a measure , usually some nonlinear statistic . surrogates are copies of the original time series preserving all its linear stochastic structure . the test measure is computed for the original data set and for each of the surrogate data sets . the null hypothesis is rejected only if the value obtained for the original data lies below , or above , the values obtained for all surrogates . some care should be taken to test the null hypothesis , otherwise a false - positive rejection of the null hypothesis could result . a detailed analysis on this is found in . here surrogate data sets are used in complement to the bds statistic to test for nonlinearity . the null hypothesis is that the data is described by a stationary linear stochastic process with gaussian inputs . the test statistic is a simple measure of predictability as in , and the procedure is to generate an ensemble of rescaled surrogates and to compare them with the mg model . if the test statistic falls outside the interval defined by the surrogates , then the null is rejected at the corresponding significance level . we found that , for embedding dimensions from 2 to 5 , the mg is nonlinear at this level of significance , corroborating the previous result using the bds . in the remainder of this section we discuss methods which do not require embeddings . another measure of randomness that provides further insight into time series dynamics is the lempel - ziv complexity . no embedding is necessary and the data is interpreted as a binary signal generated by some kind of source . this idea is ever present in communication theory , where one wishes to determine the minimum alphabet required to code a source whose signal is to be sent through a noisy channel . let us consider the length of the minimal program that reproduces a sequence with $n$ symbols . the lempel - ziv algorithm is constructed by a special parsing which splits the sequence into words , each being the shortest word that has not appeared previously . for a random sequence one can show that the number $c(n)$ of distinct words in the parsing approaches $n/\log_2 n$ , where $n$ is the size of the sequence , so the normalized quantity $c(n)\log_2 n / n$ contains a measure of randomness : a source that produces a greater number of new words is more random than a source producing a more repetitive pattern . in analogy with dynamical evolution , those systems that are composed of well defined cycles are predictable , while chaotic motion and stochastic processes are always producing new kinds of trajectories that never repeat themselves . a comparison between chaos generated by differential equations and stochasticity can now be obtained . in the former case the lempel - ziv complexity is well below 1 , while in the latter it is close to this value . more specifically , if we consider an oscillatory system such as the well known van der pol oscillator , the complexity is very small , while a somewhat larger value is obtained for the lorenz attractor with 2000 points .
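a compact sketch of the lempel - ziv construction just described : the series is binarized at its median ( an assumption of ours ) , parsed into shortest new words , and the word count is normalized by $n/\log_2 n$ , so that a coin toss gives a value close to 1 and a periodic signal a value close to 0 .

```python
import numpy as np

def lz76_word_count(bits):
    """Number of distinct words in the Lempel-Ziv (1976) parsing of a 0/1 string."""
    s = "".join(str(b) for b in bits)
    words, i = 0, 0
    while i < len(s):
        k = 1
        # grow the current word until it has not appeared earlier in the sequence
        while i + k <= len(s) and s[i:i + k] in s[:i + k - 1]:
            k += 1
        words += 1
        i += k
    return words

def lz_complexity(x):
    """Normalized complexity c(n) * log2(n) / n of a series binarized at its median."""
    x = np.asarray(x, dtype=float)
    bits = (x > np.median(x)).astype(int)
    n = len(bits)
    return lz76_word_count(bits) * np.log2(n) / n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print("coin toss:", round(lz_complexity(rng.standard_normal(10000)), 3))
    print("periodic :", round(lz_complexity(np.sin(0.05 * np.arange(10000))), 3))
```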
in section [ sec:5 ] we comment on the discrepancy between the lorenz value just quoted and the one reported in table [ table2 ] , and discuss the use of this complexity measure in dynamical systems generated by maps . the complexity found for the mg dynamics is somewhat lower than that of the financial index ( see table [ table2 ] ) . the implementation of recurrence plots used here is taken from , since it provides a clear distinction of the systems we intend to classify . the idea is to detect regions of `` close returns '' in a data set . the construction of the plots is simple : just compute the absolute differences $|x_i - x_j|$ between all pairs of points in the data base . if the horizontal axis is designated by $i$ , corresponding to $x_i$ , and the vertical axis by $j$ , corresponding to $x_j$ , then plot a black dot at the site $(i,j)$ whenever the absolute difference is lower than a prescribed threshold ; otherwise plot it white . actually the time series is first normalized , and the threshold is then taken as a fixed percentage of the average distance between successive points . the black / white pattern can be used to detect determinism in the data . there is a clear difference amongst plots generated by differential equations , maps , random data and stochastic processes , as shown in fig . [ fig : rp ] . patterns of horizontal segments in the recurrence plot indicate the presence of unstable periodic orbits in maps or differential equations . in this sense , these plots detect low dimensional chaos in relatively small and even noisy data sets . an interesting probability density estimation approach to characterize and forecast time series , developed by gershenfeld , schoner and metois , is the so called cluster - weighted modelling . this seems to be a powerful technique as it characterizes extremely well the time series of nonlinear , nonstationary , non - gaussian and discontinuous systems using probabilistic dependence of local models . the cluster - weighted modelling technique estimates the functional dependence of time series in terms of delay coordinates . the main task of this approach is to find the conditional forecast by estimating the joint probability density . [ fig : cwm : ( a ) a component of the lorenz system fitted with delay coordinates , and ( b ) the minority game return fitted in a similar fashion ] let $\{ y_n , \vec{x}_n \}_{n=1}^{N}$ be the observations , in which $\vec{x}_n$ are known inputs and $y_n$ are the corresponding outputs . by knowing the joint probability density $p(y,\vec{x})$ , we can derive the conditional forecast $\langle y \,|\, \vec{x} \rangle$ , the expectation value of $y$ given $\vec{x}$ . we can also deduce other quantities such as the variance of the above estimation . actually , the joint density is expanded in terms of $M$ clusters which describe the local models . each cluster contributes three factors , namely the weight $p(c_m)$ , the domain of influence in the input space $p(\vec{x}\,|\,c_m)$ , and finally the dependence in the output space $p(y\,|\,\vec{x},c_m)$ . thus the joint density can be written as $p(y,\vec{x}) = \sum_{m=1}^{M} p(y\,|\,\vec{x},c_m)\, p(\vec{x}\,|\,c_m)\, p(c_m)$ . once the joint density is known , the other quantities can be derived from it . for example , the conditional forecast is given by $\langle y \,|\, \vec{x} \rangle = \sum_{m=1}^{M} f(\vec{x},\beta_m)\, p(\vec{x}\,|\,c_m)\, p(c_m) \big/ \sum_{m=1}^{M} p(\vec{x}\,|\,c_m)\, p(c_m)$ , where $f(\vec{x},\beta_m)$ describes the local relationship between $\vec{x}$ and $y$ . the parameters are found by maximizing the cluster - weighted log - likelihood . the simplest approximation for the local model is a linear one , with coefficients of the form $f(\vec{x},\beta_m) = \beta_{m,0} + \vec{\beta}_m \cdot \vec{x}$ . the method just described is capable of modelling a wide range of deterministic time series . here we use cluster weighted modelling to distinguish between deterministic and stochastic time series . in deterministic systems we observe that the variances of the different clusters converge to values lower than the variance of the original time series , and one can verify that this property is robust under changes of the number of clusters . on the other hand , stochastic systems do not have this property .
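the full cluster - weighted model is not reproduced here ; as a simplified stand - in , the sketch below fits an ordinary gaussian mixture ( scikit - learn ) to ( delay vector , next value ) pairs and compares the per - cluster variance of the output coordinate with the variance of the raw series , which is the determinism criterion stated above . the delay dimension , number of clusters and test series are our own choices .

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_output_variances(x, delay_dim=3, n_clusters=8, seed=0):
    """Fit a Gaussian mixture to (delay vector, next value) pairs and return the
    per-cluster variance of the output coordinate plus the variance of the series."""
    x = np.asarray(x, dtype=float)
    inputs = np.column_stack([x[i:len(x) - delay_dim + i] for i in range(delay_dim)])
    outputs = x[delay_dim:]
    joint = np.column_stack([inputs, outputs])
    gm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                         random_state=seed).fit(joint)
    return gm.covariances_[:, -1, -1], x.var()   # output variance of each cluster

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(4000)             # stochastic benchmark
    logistic = np.empty(4000)                     # a simple deterministic (chaotic) map
    logistic[0] = 0.3
    for i in range(1, 4000):
        logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])
    for name, series in [("noise", noise), ("logistic map", logistic)]:
        cvar, dvar = cluster_output_variances(series)
        print(name, "median cluster variance / data variance =",
              round(float(np.median(cvar) / dvar), 3))
```

for the deterministic map the ratio comes out well below one , while for the noise series it stays close to one , mirroring the criterion used above .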
fig . [ fig : cwm ] illustrates the comparison between the fitting of a deterministic system ( lorenz ) and the minority game data using cluster weighted modelling . the main objective is to understand the minority game mode of evolution and other similar time series behaviour . although this system is not generated by any kind of differentiable dynamics or even stochastic differential equations , we use in its analysis methods from dynamical systems , stochastic processes and complex systems theory . tables [ table1 ] & [ table2 ] and fig . [ fig : rp ] summarize the main results . we will make frequent reference to the sp500 index , since the minority game model is supposed to reproduce the dynamical evolution of financial markets and this index is used as a benchmark for comparison . [ table 1 : bds statistic ] the lempel - ziv complexity is an important parameter that can be used in the analysis of complex systems . its advantage is that it does not require embeddings and can be easily employed in conjunction with other methods . the results in table [ table1 ] support the idea that there is a stochastic mechanism in operation driving the mg model . there is a higher degree of indeterminacy in the sp500 , and this is perhaps due to the fact that in this index there is a certain amount of measurement noise . the surprisingly low complexity of the lorenz system is comparable to that of limit cycles , e.g. the van der pol oscillator . the explanation for this comes about when we compute its complexity for shorter time series . for example , at 2000 points the complexity is considerably higher than the complexity for a 10000 length series as reported in table [ table2 ] . this phenomenon does not occur for a discrete system like the henon attractor . due to the fact that the lorenz attractor contains a dense set of unstable periodic orbits , long time evolution affects the computation of the complexity and reveals some resemblance with periodic systems . effects of this magnitude did not appear in the other time series analysed . in the deterministic intervals of the mg model , the complexity has a value comparable to that of the chaotic henon map . the lempel - ziv complexity , like any other test or statistic , should always be used in conjunction with other diagnostic tools . in all simulations performed so far we have never found a stochastic process with complexity less than 0.8 . however , there are chaotic systems with complexity beyond this value , for example a family of maps indexed by a prime number . another result is that known chaotic systems described by differential equations do not have high complexity . to conclude that a system is stochastic we employ recurrence plots and the cluster weighted modelling approach . the complexity is then used as a confirmation and , more importantly , to associate a level of stochasticity . this is done in the same way as using lyapunov exponents to quantify chaos once determinism has been found . the recurrence plot is a visual method which helps in the identification of similarities and differences amongst diverse modes of evolution . several tests with differential equations and maps , represented here by the lorenz and henon systems , show that it is unlikely that low dimensional chaos can describe the kind of evolution found in the markets and in the mg model . in particular , recurrences are clearly identified in the lorenz system and nothing of the kind will ever appear in stochastic models .
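a sketch of the recurrence - plot construction described earlier in this section ; the rescaling to the unit interval , the 10% threshold and the two test signals are illustrative assumptions rather than the settings of the implementation used in the paper .

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def recurrence_plot(x, frac=0.1):
    """Boolean recurrence matrix: entry (i, j) is 'black' when |x_i - x_j| < eps.

    The series is rescaled to [0, 1]; eps is a fraction (10% here, an illustrative
    choice) of the average distance between successive points."""
    x = np.asarray(x, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())
    eps = frac * np.mean(np.abs(np.diff(x)))
    return np.abs(x[:, None] - x[None, :]) < eps

if __name__ == "__main__":
    t = np.arange(0, 60, 0.05)
    signals = {"periodic": np.sin(t),                                   # deterministic
               "noise": np.random.default_rng(0).standard_normal(len(t))}
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    for ax, (name, series) in zip(axes, signals.items()):
        ax.imshow(recurrence_plot(series), cmap="binary", origin="lower")
        ax.set_title(name)
    fig.tight_layout()
    fig.savefig("recurrence_plots.png")
```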
in this sense stochastic processes are better suited to describe the financial index and the mg model . however , in the deterministic regime one can identify close return patterns similar to those of low dimensional chaotic systems . cluster weighted analysis reveals another aspect of the minority game behaviour . its use in the modelling of general deterministic systems produces clusters whose variances are always smaller than the variance of the data . in contrast to this , when applied to a stochastic system the variances of the clusters are comparable to the variances of the data . such distinction is preserved when we vary the number of clusters . the issue of nonstationarity is a subtle one . we limited ourselves in this study to unit root stationarity , but more sophisticated methods need to be used for the several brands of mg models and financial indices . using the complexity parameter we found that long time evolution reveals some intrinsic features of chaotic attractors described by differential equations . also , this parameter associated a higher complexity to the sp500 index as compared to the mg model , and a possible explanation for this was given above . the recurrence plots confirm that , in general , the mg model can not be described by low dimensional chaotic systems . the nonlinear character of the model and the index is clearly indicated by the bds test and surrogate data analysis . the mg model parameters employed in our simulations were chosen in the information efficient region . in the inefficient case our simulations show that the system is a non random process with complexity close to zero . the recurrence plot in this case provides strong evidence of recurring orbits , with several traces of horizontal segments . thus , randomness is directly related to efficiency . an important finding in our analysis of the mg model is that low dimensional chaotic regimes are possible during the evolution . this occurs for intervals of about 1000 iterations , while shorter intervals of deterministic evolution are difficult to classify . a complete study of the probabilistic structure of the deterministic phases , and their statistical significance , requires a more extensive investigation . for the full motion , comprised of transitions between deterministic modes induced by random perturbations , the time series analysis has indicated the operation of a nonlinear stochastic process which is similar , but not identical , to the sp500 index . in real markets it is possible to find periods of deterministic behaviour in exchange rates for certain instruments and specific periods of time . in the sp500 such an occurrence is unlikely , but an extensive search in the data base for several lengths and starting points is feasible . even if deterministic modes can be found in historical stock prices , indices or rates , turning this into potential profit is a remote , but not entirely discarded , possibility worth pursuing . another interest in this investigation is that it provides a test amongst several models . in this sense one selects the models whose time series properties better reproduce real market evolution . in derivatives pricing a widely used test , and parameter estimation procedure , is to minimize the hedging error . this can be implemented only if a model is available , e.g. black - scholes .
in a non - gaussian context there is no consensus on which model is appropriate . an extension of this work , which will be pursued in the future , is to use time series methods in the several brands of agent - based models and to compare them with market data . in addition to the stylized facts we should include nonlinearity , at least for the sp500 , and this is an additional reason to discard simple models based on the geometric brownian motion . further investigations in this study are to model the trajectories of the minority game model having in view the pricing of derivative instruments , and to use a more extensive set of statistics to examine other financial series in order to confront the similarities and differences of this model with real market data . the time series used as benchmarks were chosen to represent the kind of behaviour we intend to identify in the evolution of the minority game model . the lorenz system and the henon mapping are prototypes of deterministic behaviour generated by differential equations and differentiable mappings . arma models and the nlma are examples of linear and nonlinear stochastic processes . in the following we describe briefly the models used in the present study . the lorenz system is defined by the nonlinear differential equations $\dot{x} = \sigma ( y - x )$ , $\dot{y} = x ( \rho - z ) - y$ , $\dot{z} = x y - \beta z$ , referred to as eqs . ( [ eq : lorenz ] ) . the parameter values are chosen so that the system evolves in its chaotic regime . the data series is obtained by solving eqs . ( [ eq : lorenz ] ) numerically using the fourth order runge - kutta method . f. takens , detecting strange attractors in turbulence , in : d. rand , l.-s . young ( eds . ) , dynamical systems and turbulence , vol . 898 of lecture notes in mathematics , springer - verlag , berlin , 1981 , pp . 366-381 . p. e. rapp , c. j. cellucci , t. a. a. watanabe , a. m. albano , t. i. schmah , surrogate data pathologies and the false - positive rejection of the null hypothesis , int . j . bifurcation chaos 11 ( 2001 ) 983-997 .
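as a sketch of how the lorenz benchmark described in the appendix can be generated , the following integrates eqs . ( [ eq : lorenz ] ) with a fourth - order runge - kutta step ; the step size , transient length and the standard chaotic parameters $\sigma=10$ , $\rho=28$ , $\beta=8/3$ are our own assumptions , since the values used in the paper are not legible here .

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def lorenz_series(n_points=10000, dt=0.01, transient=1000):
    """Benchmark series: the x-component after discarding an initial transient."""
    state = np.array([1.0, 1.0, 1.0])
    xs = []
    for i in range(n_points + transient):
        state = rk4_step(lorenz_rhs, state, dt)
        if i >= transient:
            xs.append(state[0])
    return np.array(xs)

if __name__ == "__main__":
    series = lorenz_series()
    print(len(series), round(series.min(), 2), round(series.max(), 2))
```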
the minority game ( mg ) model introduced recently provides promising insights into the understanding of the evolution of prices , indices and rates in the financial markets . in this paper we perform a time series analysis of the model employing tools from statistics , dynamical systems theory and stochastic processes . using benchmark systems and a financial index for comparison , several conclusions are obtained about the generating mechanism for this kind of evolution . the motion is deterministic , driven by occasional random external perturbation . when the interval between two successive perturbations is sufficiently large , one can find low dimensional chaos in this regime . however , the full motion of the mg model is found to be similar to that of the first differences of the sp500 index : stochastic , nonlinear and ( unit root ) stationary .
the evolution of the cellular network generations is influenced primarily by continuous growth in wireless user devices , data usage , and the need for a better quality of experience ( qoe ) . more than 50 billion connected devices are expected to utilize the cellular network services by the end of the year 2020 , which would result in a tremendous increase in data traffic as compared to the year 2014 . however , state - of - the - art solutions are not sufficient for the challenges mentioned above . in short , the increase of 3d ( device , data , and data transfer rate ) encourages the development of 5 g networks . specifically , the fifth generation ( 5 g ) of the cellular networks will highlight and address the following three broad views : ( _ i _ ) user - centric ( by providing round - the - clock device connectivity , uninterrupted communication services , and a smooth consumer experience ) , ( _ ii _ ) service - provider - centric ( by providing connected intelligent transportation systems , road - side service units , sensors , and mission critical monitoring / tracking services ) , and ( _ iii _ ) network - operator - centric ( by providing an energy - efficient , scalable , low - cost , uniformly - monitored , programmable , and secure communication infrastructure ) . therefore , 5 g networks are perceived to realize the three main features as below : * _ ubiquitous connectivity _ : in future , many types of devices will connect ubiquitously and provide an uninterrupted user experience . in fact , the user - centric view will be realized by ubiquitous connectivity . * _ zero latency _ : the 5 g networks will support life - critical systems , real - time applications , and services with zero delay tolerance . hence , it is envisioned that 5 g networks will realize zero latency , _ i_._e_. , extremely low latency of the order of 1 millisecond . in fact , the service - provider - centric view will be realized by the zero latency . * _ high - speed gigabit connection _ : the zero latency property could be achieved using a high - speed connection for fast data transmission and reception , which will be of the order of gigabits per second to users and machines . a few more _ key features of 5 g networks _ are enlisted and compared to the fourth generation ( 4 g ) of the cellular networks , as below : ( _ i _ ) a 10-100 times higher number of connected devices , ( _ ii _ ) 1000 times higher mobile data volume per area , ( _ iii _ ) a 10-100 times higher data rate , ( _ iv _ ) 1 millisecond latency , ( _ v _ ) 99.99% availability , ( _ vi _ ) 100% coverage , ( _ vii _ ) reduced energy consumption as compared to the year 2010 , ( _ viii _ ) real - time information processing and transmission , ( _ ix _ ) reduced network management operation expenses , and ( _ x _ ) seamless integration of the current wireless technologies . [ fig : circle ] the revolutionary scope and the consequent advantages of the envisioned 5 g networks , therefore , demand new architectures , methodologies , and technologies ( see figure [ fig : circle ] ) , _ e_._g_.
, energy - efficient heterogeneous frameworks , cloud - based communication ( software - defined networks ( sdn ) and network function virtualization ( nfv ) ) , full duplex radio , self - interference cancellation ( sic ) , device - to - device ( d2d ) communications , machine - to - machine ( m2m ) communications , access protocols , cheap devices , cognitive networks ( for accessing licensed , unlicensed , and shared frequency bands ) , dense deployment , security - privacy protocols for communication and data transfer , backhaul connections , massive multiple - input and multiple - output ( mmimo ) , multi - radio access technology ( rat ) architectures , and technologies for working on millimeter wave ( mmwave ) bands of 30-300 ghz . interestingly , the 5 g networks will not be a mere enhancement of 4 g networks in terms of additional capacity ; they will encompass a system architecture visualization , conceptualization , and redesigning at every communication layer . several industries , _ e_._g_. , alcatel - lucent , docomo , gsma intelligence , huawei , nokia siemens networks , qualcomm , samsung , vodafone , the european commission supported 5 g infrastructure public private partnership ( 5gppp ) , and mobile and wireless communications enablers for the twenty - twenty information society ( metis ) , are brainstorming on the development of 5 g networks . currently , the industry standards for the expected designs and architectures of 5 g networks are yet to evolve . * scope of the paper . * in this paper , we will review the vision of the 5 g networks , advantages , applications , proposed architectures , implementation issues , real demonstrations , and testbeds . the outline of the paper is provided in figure [ fig : outline of the paper ] . in section [ section : promises of 5 g ] , we will elaborate the vision of 5 g networks . section [ section : challenges in the development of 5 g ] presents challenges in the development of 5 g networks . section [ sec : architectures of the future 5 g mobile cellular networks ] addresses the currently proposed architectures for 5 g networks , _ e_._g_. , multi - tier , cognitive radio based , cloud - based , device proximity based , and energy - efficient architectures . section [ sec : management issues in 5 g networks ] presents issues regarding interference , handoff , quality of services , load balancing , channel access , and security - privacy of the network . sections [ sec : methodology and technology for 5 g networks ] , [ sec : applications of 5 g networks ] , and [ sec : real demonstrations and test - beds for 5 g networks ] present several methodologies and technologies involved in 5 g networks , applications of 5 g networks , and real demonstrations and testbeds of 5 g networks , respectively . we would like to emphasize that , to the best of our knowledge , there do exist some review works on 5 g networks by andrews et al . , chávez - santiago et al . , and gavrilovska et al .
however , our perspective about 5 g networks is different , as we deal with a variety of architectures and discuss several implementation issues and technologies in 5 g networks , along with applications and real - testbed demonstrations . in addition , we intentionally avoid an mmwave oriented discussion in this paper , unlike the current work . we encourage our readers to see an overview of the generations of the cellular networks ( see table [ table : the generation of cellular networks ] ) and the crucial limitations of current cellular networks in the next section . [ table : the generations of the cellular networks ] wang et al . suggested a way for separating indoor and outdoor users and using a _ mobile small - cell _ on a train or a bus . for separating indoor and outdoor users , a mbs holds large antenna arrays with some antenna elements distributed around the macrocell and connected to the mbs using optical fibers . a sbs and large antenna arrays are deployed in each building for communicating with the mbs . all ues inside a building can have a connection to another ue either through the sbs or by using wifi , mmwave , or vlc . thus , the separation of users results in less load on a mbs . wang et al . also suggested to use a mobile small - cell that is located inside a vehicle to allow communication among internal ues , while large antenna arrays are located outside the vehicle to communicate with a mbs . thus , all the ues inside a vehicle ( or a building ) appear to be a single unit with respect to the corresponding mbs , and clearly , the sbs appears as a mbs to all these ues . in , a two - tier architecture is deployed as a process of _ network densification _ that is a combination of _ spatial densification _ ( increasing the number of antennas per ue and mbs , and increasing the density of bss ) and _ spectral aggregation _ ( using higher frequency bands , above 3 ghz ) . a tradeoff between the transmission power of a macrocell and the coverage area of small - cells is presented in ; _ i_._e_. , on the one hand , if the transmission power of a macrocell is high , then many ues adjacent to a small - cell may find themselves in the service area of the macrocell , and hence , it will decrease the coverage area of that small - cell . on the other hand , if the transmission power of a macrocell is low , then the coverage area of the small - cell will increase . therefore , _ cell range expansion _ ( _ i_._e_. , a biased handoff in favor of small - cells ) is carried out to serve more ues by the small - cells to which they are closer . moreover , sbss deployed in offices or homes can be used to serve outdoor users , _ e_._g_. , pedestrians and low - mobility vehicles , in their neighborhoods , and the approach is called _ indoor - to - outdoor user service _ . hossain et al . presented a multi - tier architecture consisting of several types of small - cells , relays , and d2d communication for serving users with different qos requirements in spectrum - efficient and energy - efficient manners . interestingly , all of these architectures consider that ues spontaneously discover a sbs . zhang et al . proposed a centralized system in which a mbs assists ues to have connections to particular sbss , thereby interference between ues and sbss is reduced . however , this approach overburdens the mbs . * advantages of the deployment of small - cells .
* * _ high data rate and efficient spectrum use _ : the small physical separation between a sbs and ues ( served by the same sbs ) leads to a higher data rate and a better indoor coverage . also , the spectrum efficiency increases due to fewer ues in direct communication with a mbs . *_ energy saving _ :the use of small - cells reduces the energy consumption of the network ( by not involving mbss ) and of ues ( by allowing ues to communicate at a shorter range with low signaling overhead ) . *_ money saving _ : it is more economical to install a sbs without any cumbersome planning as compared to a mbs , and also the operational - management cost is much lower than the cost associated with a mbs .* the plug - and - play utility of small - cells boosts the on - demand network capacity . * _ less congestion to a mbs _ : sbss offload ues from a mbs so that the mbs is lightly loaded and less congested , and hence , improve the system capacity . * _ easy handoff _ : mobile small - cells also follow the advantages of small - cells . moreover , they provide an attractive solution to highly mobile ues by reducing handoff time overheads , since a mobile small - cell is capable to do the handoff on behalf of all related ues .* disadvantage of small - cells . * despite numerous prominent benefits as mentioned above , there are a few realistic issues such as implementation cost and operational reliability .the small - cells indeed impose an initial cost to the infrastructure , but less than the cost associated with a mbs .moreover , a frequent authentication is mandatory due to frequent handoff operations .further , an active or passive ( on / off ) state update of any small - cell would definitely result in frequent topological updates . * open issues in the deployment of 2-tier architectures using small - cells . * * _ interference management _ : the deployment of small - cells results in several types of interferences , as : _ inter - tier interference _ ( _ i_._e_. , interference from a mbs to a sbs , interference from a mbs to a sbs s ues , and interference from a sbs to a mbs s ues ) , and _ intra - tier interference _ ( _ i_._e_. , interference from a sbs to other sbss ues ) .hence , it is also required to develop models and algorithms to handle these interferences . *_ backhaul data transfer _ : though we have models to transfer data from a sbs to the core network , which we will discuss next in section [ subsubsec : backhaul data transfer from small - cells ] , an extremely dense - deployment of small - cells requires a huge amount of data transfer , and certainly , requires cost efficient architectures .data transfer from a sbs to the core network is a challenging task , and in general , there may be three approaches to transfer ( backhaul ) data to the core network , as follows : * _ wired optical fiber _ : by establishing a wired optical fiber link from each sbs to a mbs ; however , it is time - consuming and expensive .* _ wireless point - to - multipoint _ ( ptmp ) : by deploying a ptmp - bs at a mbs that communicates with sbss and transfers data to the core network. * _ wireless point - to - point _ ( ptp ) : by using directional antennas in line - of - sight ( los ) environments ; hence , it provides high capacity ptp links ( same as with wired optical fibers ) , at a significantly lower cost .ge et al . presented two architectures based on the wireless ptmp approach . 
in the first ( centralized ) architecture , all sbss send data using mmwave to a mbs that eventually aggregates the received data and forwards it to the core network using fiber . in the second ( distributed ) architecture , all small - cells cooperatively forward data using mmwave to a specified sbs that transfers the data to the core network using fiber , without the explicit involvement of a mbs . ni et al . proposed an _ adaptive backhaul architecture _ based on the wireless ptp approach and frequency division duplex for ul and dl channels . a tree structure is used , where the root node is connected to the core network using fiber , the leaf nodes represent ues , and the other nodes represent sbss . the data is transferred from the leaf nodes to the root node , which transfers it to the core network . the bandwidth is selected dynamically for backhaul links , as per the current bandwidth requirements , interference conditions , and the network topology . a similar approach is also presented in . an automatic detection and recovery of a failed cell is an important issue in densely deployed multi - tier architectures . wang and zhang provided three approaches for designing a self - healing architecture , as below : 1 . _ centralized approach _ : a dedicated server is responsible for detecting a failed cell by measuring and analyzing abnormal behavior of users , _ e_._g_. , received signal strengths ( rsss ) at users and handoff by several users at any time from a particular cell . the server collects global information and reconfigures the failed cell . however , the approach suffers from a high communication overhead and a high computational cost . 2 . _ distributed approach _ : each sbs detects failed small - cells in its neighborhood by measuring and analyzing users' handoff behavior and the neighboring small - cells' signals . consequently , on detecting a failed cell , a sbs might increase its transmission power in order to incorporate users of the failed cell . however , the approach might not work efficiently in case users are sparsely scattered . 3 . _ local cooperative or hybrid approach _ : this approach combines the benefits of both the previous approaches , and therefore minimizes their drawbacks . essentially , two steps are utilized , namely distributed trigger and cooperative detection . in the distributed trigger , each sbs collects information about users' behavior . subsequently , a trigger message is sent to a dedicated server in case the received information falls below a certain threshold . hence , it does not require communication among small - cells . in the cooperative detection , the dedicated server takes the final decision based on the information received from several small - cells , resulting in a high accuracy and lower latency .
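the hybrid self - healing scheme just described ( a distributed trigger at each sbs followed by a cooperative decision at a dedicated server ) can be caricatured as follows ; the rss threshold , the agreement count and the report values are purely illustrative assumptions .

```python
from statistics import mean

def distributed_trigger(neighbour_rss_dbm, rss_threshold_dbm=-110.0):
    """A sbs raises a trigger when the average rss its users report from a
    neighbouring cell falls below a threshold (illustrative value)."""
    return mean(neighbour_rss_dbm) < rss_threshold_dbm

def cooperative_detection(triggers, min_agreeing=2):
    """The dedicated server declares the cell failed only when enough
    neighbouring sbss agree, which reduces false alarms from a single report."""
    return sum(triggers) >= min_agreeing

if __name__ == "__main__":
    # rss reports (dBm) about one suspect cell, collected by three neighbouring sbss
    reports = [[-118, -121, -116], [-119, -117, -122], [-95, -99, -101]]
    triggers = [distributed_trigger(r) for r in reports]
    print("triggers:", triggers)
    print("cell declared failed:", cooperative_detection(triggers))
```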
a cognitive radio network ( crn ) is a collection of cognitive radio nodes ( or processors ) , called secondary users ( sus ) that exploit the existing spectrum opportunistically .the sus have the _ leira _ ( learning , efficiency , intelligence , reliability , and adaptively ) property for scanning and operating on multiple heterogeneous channels ( or frequency bands ) in the absence of the licensed user(s ) , termed as primary user(s ) ( pus ) , of the respective bands .each pu has a fixed bandwidth , high transmit power , and high reliability ; however , the sus work on a broad range of bandwidth with low transmit power and low reliability .a crn in 5 g networks is used for designing multi - tier architectures , removing interference among cells , and minimizing energy consumption in the network .a crn creates a 2-tier architecture , similar to architectures discussed in section [ subsec : two - tier architectures ] ; however , it is assumed that either a mbs or a sbs has cognitive properties for working on different channels .hong et al . presented two types of crn - based architectures for 5 g networks , as : ( _ i _ ) non - cooperative and ( _ ii _ ) cooperative crns .the _ non - cooperative crn _ establishes a multi - rats system , having two separate radio interfaces that operate at the licensed and temporary unoccupied channels by pus , called cognitive channels .the sus work only on cognitive channels and form a crn , which overlays on the existing licensed cellular network .the two networks can be integrated in the upper layers while must be separated in the physical layer .this architecture can be used in different manners , as : ( _ i _ ) the cognitive and licensed channels are used by users near a mbs and users far away from the mbs , respectively , ( _ ii _ ) the cognitive and licensed channels are used for relaxed qos and strict qos , respectively .the _ cooperative crn _ uses only a licensed channel , where sus access the channel in an opportunistic fashion when the pu of the channel is absent .this architecture can be used in different manners , as : ( _ i _ ) a sbs communicates with a mbs using the licensed channel and provides service to its ues via an opportunistic access to the licensed frequency band , ( _ ii _ ) a licensed channel is used to serve ues by a sbs and the opportunistic access to the licensed channel is used to transfer backhaul data to the mbs . in short , the cooperative crn provides a real intuition of incorporating crns in 5 g networks , where a sbs works as a su , which scans activities of a macrocell and works on temporarily unoccupied frequency bands ( known as _ spectrum holes _ ) by a pu to provide services to their ues with minimally disrupting macrocell activities .a dynamic pricing model based on a game theoretic framework for cognitive small - cells is suggested in . since in realitysbss operators and mbss operators may not be identical and small - cells ues may achieve a higher data rate as compared to macrocells ues , the pricing model for both ues must be different .huang et al . 
provided an approach for avoiding inter - tier interference by integrating a _ cognitive technique _ at a sbs .the cognitive technique consists of three components , as : ( _ i _ ) a cognitive module , which senses the environment and collects information about spectrum holes , collision probability , qos requirements , macrocell activities , and channel gains , ( _ ii _ ) a cognitive engine , which analyzes and stores the collected information for estimating available resources , and ( _ iii _ ) a self - configuration module , which uses the stored information for dynamically optimizing several parameters for efficient handoff , interference , and power management .further , the channel allocation to a small - cell is done in a manner to avoid inter - tier and intra - tier interferences , based on gale - shapley spectrum sharing scheme , which avoids collisions by not assigning an identical channel to neighboring small - cells .wang et al . suggested an approach for mitigating inter - tier interference based on spectrum sensing , spectrum sharing , and cognitive relay , where links between a mbs and its ues are considered as pus and links between a sbs and its ues are considered as sus .cognitive techniques are used for detecting interference from a mbs to a sbs and vice versa , and a path loss estimation algorithm is provided for detecting interference from a small - cell s ues to a macrocell s ues . after detecting inter - tier interference , a small - cell shares spectrum with a macrocell using either _ overlay spectrum sharing scheme _ ( _ i_._e_. , sus utilize unoccupied channels , and it is applicable when a mbs and a sbs s ues are very close or no interference is required by a macrocell s ues ) or _ underlay spectrum sharing scheme _( _ i_._e_. , sus and pus transmit on an identical channel while restricting transmit power of sus , and hence , resulting in a higher spectrum utilization ) . notethat a crn can be used to support d2d communication and mitigate interferences caused by d2d communication , which we will see in section [ subsec : device - to - device communication architectures ] . * advantages of crns in 5 g networks .* * _ minimizing interference _ : by implementing a crn at small - cells , cognitive small - cells can avoid interference very efficiently by not selecting identical channels as the channels of neighboring small - cells . * _ increase network capacity _ : the spectrum holes can be exploited for supporting a higher data transfer rate and enhancing bandwidth utilization .* open issues . *usually , cellular networks are not energy - efficient as they consume energy in circuits , cooling systems , and also radiate in air .hence , an energy - efficient deployment of a crn in a cellular network is of utmost importance .further , there is a tradeoff between the spatial frequency reuse and the outage probability , which requires the selection of an efficient spectrum sensing algorithm .device - to - device ( d2d ) communication allows close proximity ues to communicate with each other on a licensed cellular bandwidth without involving a mbs or with a very controlled involvement of a mbs .the standards and frameworks for d2d communication are in an early stage of research . in this section, we will review d2d communication networks in short . fora detailed review of d2d communication , interested readers may refer to . 
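before turning to the details of d2d communication , the underlay sharing constraint described above ( the secondary transmitter caps its power so that the interference received at the primary user stays below a prescribed level ) can be illustrated with a toy calculation ; the log - distance path - loss model and all numerical values are assumptions of ours , and the same kind of cap applies when d - ues are treated as secondary users , as mentioned below .

```python
import math

def path_gain_db(distance_m, path_loss_exp=3.0):
    """Toy log-distance channel gain in dB (illustrative model only)."""
    return -10.0 * path_loss_exp * math.log10(max(distance_m, 1.0))

def max_underlay_tx_dbm(dist_su_to_pu_m, interference_cap_dbm=-100.0,
                        device_power_limit_dbm=20.0):
    """Largest secondary transmit power keeping the interference received by the
    primary user below the cap: tx + gain <= cap, hence tx <= cap - gain."""
    allowed = interference_cap_dbm - path_gain_db(dist_su_to_pu_m)
    return min(allowed, device_power_limit_dbm)

if __name__ == "__main__":
    for d in (20, 100, 500):
        print(d, "m ->", round(max_underlay_tx_dbm(d), 1), "dBm allowed")
```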
* challenges in d2d communication .* * _ interference management _ : ues involved in d2d communication , say d - ues , face ( or create ) interference from ( or to ) other ues , or from ( or to ) a bs , based on the selection of a dl or ul channel , respectively .the following types of interferences are investigated in : * * when using a dl channel : ( _ i _ ) interference from bss in the same cell , ( _ ii _ ) interference from other co - channel d - ues in the same cell , and ( _ iii _ ) interference from bss and co - channel d - ues from other cells . * * when using a ul channel : ( _ i _ ) interference from all co - channel c - ues in the same cell and other cells , and ( _ ii _ ) interference from all co - channel d - ues in the same cell and other cells .+ _ proposed solutions _ : a simple solution may exist by implementing crns in d2d communication , as : d - ues are considered as sus and c - ues are considered as pus that should not be interfered .consequently , any mechanism of crns can be implemented in d2d communication for interference removal . * _ resource allocation _ : when ues involved in d2d communication , it is required to allocate a sufficient amount of resources , particularly bandwidth and channels .however , the allocation of optimum resources to d - ues must be carried out in a fashion that c - ues do not have interference from d - ues , and d - ues can also communicate and exchange data efficiently . + _ proposed solutions _: sara , frame - by - frame and slot - by - slot channel allocation methods , and a social - aware channel allocation . *_ delay - sensitive processing _ : audio , video streaming , and online gaming , which are natural in close proximity ues , require real - time and delay - sensitive processing .hence , it is required to consider delay - sensitive and real - time processing in d2d communication .+ _ proposed solutions _ : solutions based on channel state information ( csi ) and qos are provided in . *_ pricing _ : sometimes a d - ue uses resources ( _ e_._g_. , battery and data storage ) of other ues for relaying information , where the other ue may charge for providing its resources .hence , the design of a pricing model is needed , thereby a d - ue is not charged more money than that involved to communicate through a mbs . + _ proposed solutions _ : some solutions based on game theory , auction theory , and bargaining are suggested in .* d2d communication types .* d2d communication can be done in the following four ways , as follows : 1 . _ device relaying with operator controlled link establishment _( dr - oc ) : a ue at the edge of a cell or in a poor coverage area can communicate with a mbs by relaying its information via other ues , which are within the stronger coverage area and not at the edge ; see figure [ fig : two - tier architecture for 5 g networks with small - cells ] .2 . _ direct d2d communication with operator controlled link establishment _ ( dc - oc ) : source and destination uescommunicate directly with each other without involving a mbs , but they are _ assisted _ by the mbs for link establishment ; see figure [ fig : two - tier architecture for 5 g networks with small - cells ] .device relaying with device controlled link establishment _ ( dr - dc ) : source and destination ues communicate _ through a relay _ without involving a mbs , and they are also responsible for link establishment ; see figure [ fig : two - tier architecture for 5 g networks with small - cells ] .4 . 
_ direct d2d communication with device controlled link establishment _ ( dc - dc ) : source and destination ues communicate _ directly _ with each other without involving a mbs , and they are also responsible for link establishment ; see figure [ fig : two - tier architecture for 5 g networks with small - cells ] . note that dr - oc and dc - oc involve a mbs for resource allocation and call setup , and hence , prevent interference among devices to some extent . [ fig : d2d communication architecture based on a social networking ] two types of coding schemes ( or communication types ) are described in : ( _ i _ ) two - way relay channel ( trc ) , where a source and a destination communicate through a relay , and ( _ ii _ ) multiple - access relay channel ( mrc ) , where multiple sources communicate to a destination through a relay with direct links . note that the workings of dr - oc and mrc , and of dr - dc and trc , are identical . two types of node discovery and d2d communication methods are also studied in , namely the network - controlled approach and the ad hoc network approach , which work in a similar manner as dc - oc and dc - dc , respectively . * resource allocation methods . * now , we will review an architecture and some methods for resource allocation in d2d communication . _ social - aware d2d architecture _ : as d2d communication is very efficient for close proximity ues , keeping this fact in view , li et al . suggested a social - aware d2d communication architecture based on social networking . the architecture , see figure [ fig : d2d communication architecture based on a social networking ] , has four major components , as follows : * _ ties _ : they are similar to friend relations in social media , and hence , may be used as a trust measurement between two ues . allocating more spectrum and energy resources to ues with strong ties can increase the peer discovery ratio , avoid congestion , and improve spectral efficiency . * _ community _ : it is similar to a group on facebook and helps in allocating more resources to all the ues in a community to decrease content duplication and increase the network throughput . * _ centrality _ : it is similar to a node that has more communication links / friends in a social network . the concept of centrality in d2d communication reduces congestion and increases the network throughput by allocating more resources to a central node . * _ bridges _ : they are similar to a connection between two communities . hence , two devices forming a bridge can be allocated more resources as compared to other devices . _ channel allocation methods _ : two cooperative channel allocation methods , _ frame - by - frame _ and _ slot - by - slot _ , are given in . consider three zones $a$ , $b$ , and $c$ with some ues , such that $a$ and $b$ intersect , and $b$ and $c$ intersect , but $a$ and $c$ do not intersect , and $b$ holds a ue that communicates with ues of $a$ and $c$ . in the frame - by - frame channel allocation method , ues of $a$ and $c$ do intra - zone communication at different frames , and the ues of $b$ also communicate at different frames . however , in the slot - by - slot channel allocation method , ues of $a$ and $c$ do intra - zone communication at an identical time , and of course , ues of $b$ communicate at a different time . both methods improve the efficiency of frequency division multiplexing and increase the network throughput . hoang et al .
provided an iterative algorithm for subcarrier and power allocation such that minimum individual link data rates and proportional fairness among d2d links are obtained . a 2-phase service - aware resource allocation scheme , called _ sara _ , is proposed . in the first phase of sara , resources are allocated on - demand to meet different service requirements of d - ues , and in the second phase of sara , the remaining resources are allocated to d - ues such that the system throughput increases . wang et al . provided a delay - aware and dynamic power control mechanism that adapts the transmit power of d - ues based on instantaneous values of csi , and hence captures the urgency of the data flow . the dynamic power control selects a power control policy so that the long - term average delay and the long - term average power cost of all the flows are minimized . * advantages of d2d communication . * d2d communication results in link reliability among d - ues , a higher data rate to d - ues , instant communication , an easy way for peer - to - peer file sharing , local voice services , local video streaming , local online gaming , an improved spectral efficiency , decreased power consumption of d - ues , and traffic offload from a mbs . * open issues . * * _ security and privacy _ : in d2d communication , d - ues may take help from other ues as relay nodes ; hence , it is required to communicate and transfer data in secure and privacy - preserving manners . consequently , the design of energy - efficient and trust - establishing protocols is an open issue . * _ network coding scheme _ : when d2d communication uses relay nodes , an efficient network coding scheme may be utilized for improving the throughput . * _ multi - mode selection _ : in the current design of d2d communication , ues can do either d2d communication or communication to a bs ; however , this is not efficient . hence , there is a need to design a system that allows a ue to engage in the two types of communication modes ( _ i_._e_. , d2d communication and communication to a bs ) simultaneously . [ fig : c - ran - whole - cloud ] cloud computing infrastructure provides on - demand , easy , and scalable access to a shared pool of configurable resources , without worrying about the management of those resources . the inclusion of the cloud in mobile cellular communication can bring its benefits to the communication system . in this section , we will see cloud - based architectures , or cloud - based radio access networks ( c - rans ) , for 5 g networks . a detailed review of c - rans is given in . * the main idea of a c - ran . * the first c - ran was proposed by the china mobile research institute . the basic idea behind any c - ran is to execute most of the functions of a mbs in the cloud , and hence , divide the functionality of a mbs into a _ control layer _ and a _ data layer _ ; see figure [ fig : c - ran - whole - cloud ] . the functions of the control and the data layers are executed in a cloud and in a mbs , respectively . thus , a c - ran provides a _ dynamic _ service allocation scheme for scaling the network without installing costly network devices . specifically , a mbs has two main components , as : ( _ i _ ) a baseband unit ( bbu , for implementing baseband processing using baseband processors ) , and ( _ ii _ ) a remote radio head ( rrh , for performing radio functions ) .
in most of the c - rans ,bbus are placed in the cloud and rrhs stay in mbss .thus , a c - ran provides an easily scalable and flexible architecture .we will see advantages of c - rans at the end of this section . * challenges in the deployment of a c - ran .* * _ an efficient fronthaul data transfer technique _ : a flexible cloudification of the functions of a mbs comes at the cost of efficient fronthaul data transfer from rrhs to bbus .the fast and efficient data transfer to the cloud has a proportionate impact on the performance of a c - ran . *_ real - time performance _ : since c - rans will be used instead of a mbs that provides all the services to users , it is required to transfer and process all the data in the cloud as fast as a mbs can do ; otherwise , it is hard to find solutions to real - time problems using a c - ran . *_ reliability _ : the cloud provider does not ensure any guarantee of failure - free executions of their hardware and software .thus , it is hard to simulate an error - free mbs using a c - ran . *_ security _ : the resources of the cloud are shared among several users and never be under the control of a single authority .hence , a malicious user may easily access the control layer of a c - ran , resulting in a more severe problem . *_ manageability _ : it is clear that a non - secure c - ran may be accessed by any cloud user , which poses an additional challenge in manageability of c - rans .further , the dynamic allocation of the cloud resources at a specific time interval is a critical issue ; otherwise , a c - ran may face additional latency .now , we will see some c - ran architectures in brief .* 2-layered c - ran architectures . * the authors provided two c - ran architectures based on the division of functionalities of a mbs , as : ( _ i _ ) full centralized c - ran , where a bbu and all the other higher level functionalities of a mbs are located in the cloud while a rrh is only located in the mbs , and ( _ ii _ ) partially centralized c - ran , where a rrh and some of the functionalities of a bbu are located in the mbs while all the remaining functions of the bbu and higher level functionalities of the mbs are located in the cloud . thus , the authors proposed the use of only two layers , namely a control layer and a data layer for implementing c - rans , as follows : 1 ._ data layer _ : it contains heterogeneous physical resources ( _ e_._g_. , radio interface equipment ) and performs signal processing tasks ( _ e_._g_. , channel decoding , demultiplexing , and fast fourier transformation ) .2 . _ control layer _ : it performs baseband processing and resource management ( application delivery , qos , real - time communication , seamless mobility , security , network management , regulation , and power control ) ; see figure [ fig : c - ran - whole - cloud ] .rost et al . introduced ran - as - a - service ( ranaas ) concept , having the control and the data layers .however , in ranaas , a cloud provides flexible and on - demand ran functionalities ( such as network management , congestion control , radio resource management , medium access control , and physical layer management ) , according to the network requirements and characteristics , unlike .hence , there is no need to split functionalities in advance to the control and the data layers , as a result ranaas provides more elasticity . 
till now , it is clear that how a c - ran will work .however , in order to achieve real - time performance , a rrh executing latency - critical applications may connect to a nearby small cloud while other rrhs that are not adhered to real - time applications may connect to a far larger cloud .softair is also a two - layered c - ran that performs mobility - management , resource - efficient network virtualization , and distributed and collaborative traffic balancing in the cloud .* 3-layered c - ran architectures . *the full - centralized c - ran architecture has some disadvantages , as : continuous exchange of raw baseband samples between the data and the control layers , and the control layer is usually far away from the data layer resulting in a processing delay . in order to remove these disadvantages , liu et al . proposed _ convergence of cloud and cellular systems _ ( concert ) . in this architecture , one more layer ,called a _ software - defined service layer _ , is introduced at the top of the control layer .the functioning of the layers in concert is as follows : 1 ._ data layer _ : is identical to the full centralized c - ran s data layer , having rrhs with less powerful _ computational resources _ for application level computations .control layer _ : works just as a logically centralized entity .the control layer coordinates with the data layer resources and presents them as virtual resources to the software - defined service layer .the control layer provides a few services as : radio interfacing management , wired networking management , and location - aware computing management to the data layer .software - defined services layer _ : works as a virtual bs and provides services ( _ e_._g_. , application delivery , qos , real - time communication , seamless mobility , security , network management , regulation , and power control ) to the data layer .wu et al . enhanced c - ran architecture and ranaas , by moving the whole ran to a cloud .the proposed architecture also has three layers , where the data layer and the control layer are same to the respective layers of c - ran . the third layer , called a _ service layer _ , executes in the cloud and provides some more functionalities than the software - defined services layer of , _e_._g_. , traffic management , the cell configuration , interference control , allocation of functional components to the physical elements , and video streaming services .the authors proposed an all - software - defined network using three types of hierarchical network controllers , namely mbs controller , ran controller , and network controller , where except the mbs controller all the others can be executed in the cloud , as follows : 1 ._ mbs controller _ : usually stays nearby ues , and performs wireless resource management and packet creation .2 . _ ran controller _ : stays at the top of mbs controllers , and performs connectivity , rat selection , handoff , qos , policies , mobility management .network controller _ : stays at the top of ran controllers , ensures end - to - end qos , and establishes application - aware routes .* advantages of c - rans in 5 g networks .* c - rans provide a variety of services as a software , power efficient , flexible , and scalable architecture for the future cellular communication . 
here , we enlist some advantages of c - rans , as follows : * _ an easy network management _ : c - rans facilitate on - demand installation of virtual resources and execute cloud - based resources that dynamically manage interference , traffic , load balance , mobility , and do coordinated signal processing . * _ reduce cost _ : it is very costly and time - consuming to deploy and install a mbs to increase the network capacity .however , the deployment of c - rans involves less cost , while it provides usual services like a mbs . as a result, operators are required to only deploy , install , and operate rrhs in mbss . * _ save energy of ues and a mbs _ : c - rans offload data - intensive computations from a mbs and may store data of ues and mbss .consequently , c - rans allow ues and mbss to offload their energy - consuming tasks to a nearby cloud , which saves energy of ues and mbss . *_ improved spectrum utilization _ : a c - ran enables sharing of csi , traffic data , and control information of mobile services among participating mbss , and hence , results in increased cooperation among mbss and reduced interference .* open issues . * transferring data from rrhs to bbus , _i_._e_. , from the data layer to the control layer , is a crucial step based on the selection of the functions of a mbs that has to be sent to a cloud , resulting in the minimal data movement in the network .however , the selection of functions to be executed in a cloud and a mbs is a non - trivial affair .the security and privacy issues involved in the cloud computing effect c - rans , and hence , the development of a c - ran has to deal with inherent challenges associated with the cloud and the wireless cellular communication simultaneously .energy - efficient infrastructures are a vital goal of 5 g networks .researchers have proposed a few ways of reducing energy in the infrastructure .rowell et al . considered a joint optimization of energy - efficiency and spectral - efficiency .a user - centric 5 g network is suggested in so that ues are allowed to select ul and dl channels from different bss depending on the load , channel conditions , services and application requirements . in a similar manner , decoupling of signaling anddata is useful for energy saving ; for example , a mbs may become a signaling bs while sbss may serve all data requests .thus , when there is no data traffic in a sbs , it can be turned off . a similar approach for decoupling of signaling and data is presented in .however , in , a ue gets connected to a sbs according to instructions by a mbs , and hence , it results in less energy consumption at ues side due to less interference , faster small - cells discovery , and mbs - assisted handover .hu and qian provided an energy - efficient c - ran architecture in a manner that rrhs serve almost a same number of ues .they also present an interference management approach so that the power consumption of sbss and mbss can be decreased . like rowell et al . , hu and qian also suggested that the association of a ue can not be done based on entirely a dl channel or a ul channel , and a ue must consider both the channels at the time of association with a bs .lin et al . 
suggested to include an energy harvesting device ( to collect energy ) and a spectrum harvesting controller ( to collect spectrum ) at sbss .in this section , we will see issues regarding the interference , handoff , qos , load balancing , and channel access management in the context of 5 g networks .we have already seen challenges in interference management ( section [ section : challenges in the development of 5 g ] ) . in this section, we will review some techniques / methods for interference management in 5 g networks .nam et al . handled ue - side interference by using a new type of receiver equipment , called an advanced receiver , which detects , decodes , and removes interference from receiving signals .in addition , the network - side interference is managed by a joint scheduling , which selects each ue according to the resources needed ( _ e_._g_. , time , frequency , transmission rate , and schemes of multiple cells ) for its association with a bs .hence , the joint scheduling , which can be implemented in a centralized or distributed manner , requires a coordination mechanism among the neighboring cells .hossain et al . proposed distributed cell access and power control ( capc ) schemes for handling interference in multi - tier architectures .capc involves : ( _ i _ ) prioritized power control ( ppc ) , which assumes that ues working under a sbs have a low - priority than ues working under a mbs , and hence , low - priority ues set their power so that the resulting interference must not exceed a predefined threshold ; ( _ ii _ ) cell association ( ca ) , which regards dynamic values of resources , traffic , distance to a mbs , and available channels at a mbs for selecting a mbs with the optimum values of the parameters ; and ( _ iii _ ) resource - aware ca and ppc ( rca - ppc ) , which is a combination of the first two approaches and allows a ue to connect simultaneously with multiple bss for a ul channel and a dl channel based on criteria of ppc and ca .hong et al . suggested to use self - interference cancellation ( sic ) in small - cells networks .as we have seen that sbss require methods to transfer backhaul data to a mbs ( section [ subsubsec : backhaul data transfer from small - cells ] ) , the use of sic can eliminate the need of such methods and result in _ self - backhauled small - cells _ ( sscs ) .sscs use sic for providing services and backhaul data transfer , and more importantly , they gain almost the same performance as having a small - cell connected with a wired optical fiber .it works as : in the dl channel , a sbs may receive from a mbs and simultaneously transmit to ues . 
in the ul channel , a sbs may receive from ues and simultaneously transmit data to the mbs . therefore , a small - cell can completely remove the need for a separate backhaul data transfer method , resulting in reduced cost . the authors suggested that the measurement of the inter - user interference channel , and then the allocation of ul and dl channels by a mbs , can mitigate _ inter - user ul - to - dl interference _ in a _ single - cell _ full duplex radio network . however , in the case of a _ multi - cell _ full duplex radio network , interference mitigation becomes more complex , because of the existence of interference in ul and dl channels between the ues of multiple cells that work on identical frequency and time resources . * open issues . * interference cancellation in a full duplex radio network is still an open problem in the multi - cell case . the design of algorithms for inter - bs dl - to - ul interference and inter - user ul - to - dl interference cancellation in a full duplex radio network is still open and to be explored . handoff provides a way for ues connected to a bs to move to another bs without disconnecting their sessions . * challenges in the handoff process in 5 g networks . * handoff management in 5 g networks has the inherent challenges associated with the current cellular networks , _ e_._g_. , minimum latency , improved routing , security , and a lower risk of service unavailability . network densification , very high mobility , the zero latency , and accessing multi - rats make handoff management in 5 g networks harder . also , the current cellular networks do not provide efficient load balancing for a bs at the time of handoff . for example , movement of ues from houses to offices in the morning creates a load imbalance at the bss of the respective areas . * types of handoff in 5 g networks . * three types of handoffs are presented in the context of 5 g networks , as follows : 1 . _ intra - macrocell handoff _ : refers to handoff between small - cells that are working under a single mbs . 2 . _ inter - macrocell handoff _ : refers to handoff between macrocells . it may also lead to handoff between two small - cells that are working under different mbss . note that if the handover between small - cells of two different mbss is not done properly , then the inter - macrocell handoff also fails . 3 . _ multi - rats handoff _ : refers to handoff of a ue from one rat to another rat . song et al . provided a handoff mechanism for highly mobile users , where a ue sends some parameters ( _ e_._g_. , qos , signal - to - interference ratio ( sir ) , and time to handoff ) in a measurement report to the current mbs . sir is considered the primary factor for determining when to initiate the handoff . the grey system model predicts the next measurement report from the current one . the predicted value is used for the final decision on the handover process . zhang et al . proposed a handoff mechanism assisted by a mbs . the mbs collects several parameters from ues , and if the mbs finds the values of the received parameters below a threshold , then it finds a new sbs or mbs for handoff and informs the ues . for handoff over different rats , orsino et al . proposed a handoff procedure so that a ue can select the most suitable rat without any performance loss . a ue collects the received signal strength ( _ i_._e_. , rsrp ) or quality ( _ i_._e_. , rsrq ) from the current mbs , and then it initiates handoff if rsrq is below a threshold . the ue collects several parameters ( _ e_._g_.
, transmitted power , the cell s traffic load , and ue requested spectral efficiency ) from adjacent bss , and then , selects the most suitable bs .duan et al . provided an authenticated handoff procedure for c - rans and multi - tier 5 g networks .the control layer holds an authentication handover module ( ahm ) for monitoring and predicting the future location of ues ( based on the current location ) and preparing relevant cells before ues arrival in that .the ahm holds a master public - private key pair with each rrh , which are authenticated by the ahm in off - peak hours , and ues are verified before accessing the network services by rrhs .ues sends i d , the physical layer s attributes , location , speed , and direction to the control layer in a secure manner for the handoff process .the proposed approach reduces the risk of impersonation and man - in - the - middle attacks .giust et al . provided three distributed mobility management protocols , the first is based on the existing proxy mobile ipv6 ( pmipv6 ) , the second is based on sdn , and the third is based on routing protocols .* open issues .* handoff mechanisms are yet to be explored. it will be interesting to find solutions to extremely dense hetnets .furthermore , the handoff process may also create interference to other ues ; hence , it is required to develop algorithms while considering different types of interferences in 5 g networks and a tradeoff between the number of handoffs and the level of interference in the network .we have already seen challenges in qos management in section [ section : challenges in the development of 5 g ] . in this section, we will review some techniques / methods for qos management in 5 g networks .zhang et al . provided a mechanism for different delay - bounded qos for various types of services , applications , and users having diverse requirements , called heterogeneous statistical delay - bounded qos provisioning ( hsp ) .hsp maximizes the aggregate effective capacity of different types of 5 g architectures , reviewed in sections [ subsec : two - tier architectures ] , [ subsec : cognitive radio network based architectures ] , and [ subsec : device - to - device communication architectures ] .hsp algorithm claims better performance over other approaches ; however , it imposes new challenges in terms of the assignment of different resources for different links under the cover of delay - bounded qos requirements .the authors proposed the deployment of a quality management element ( qme ) in the cloud for monitoring inter - ues and inter - layer ( the control and the data layers in c - rans ) qos .rrus send wireless information ( _ e_._g_. , csi , reference signal received quality , and resource block utilization ) to the qme .consequently , the qme executes service control algorithms ( to manage qos and some other activities like traffic offloading and customized scheduling ) , and then , sends scheduling strategies to the rrus to achieve a desired level of qos .kim et al . provided a routing algorithm for multi - hop d2d communication .the algorithm takes into account different qos for each link so that it can achieve better performance than max - min routing algorithms .the algorithm increases flow until it provides the desired qos or reaches the maximum capacity of the link .since the algorithm considers individual links , there is a high probability that some of the channels will serve multiple - links with the desired qos .zhou et al . 
provided a qos - aware and energy - efficient resource allocation algorithm for dl channels , where ues are allocated an identical power in one case and non - identical power in the second case . the algorithm maximizes energy - efficiency while minimizing transmit power . hu and qian suggested that a ue must consider the source data rate , delay bound , and delay - violation probability before connecting to a mbs . * open issues . * the 5 g networks are supposed to satisfy the highest level of qos . the tactile internet requires the best qos , especially a latency of the order of 1 millisecond for senses such as touching , seeing , and hearing objects far away , as precise as human perception . however , the currently proposed architectures do not support efficient tactile internet services . in the future , encoding senses , exchanging data with zero latency , and enabling the user to receive the sensation would be a promising research area . load balancing means the allocation of resources to a cell such that all the users meet their demands . it is an important issue in cellular wireless networks . in a 2-tier architecture , discussed in section [ subsec : two - tier architectures ] , user offloading to a small - cell is useless if there is no resource partitioning . singh and andrews provided an analytical and tractable framework for modeling and analyzing joint resource partitioning and offloading in a 2-tier architecture . hossain et al . provided a technique for cell association based on dynamic resources and traffic in a cell , as we have already discussed in section [ subsec : interference management in 5 g networks ] . for a fast moving vehicle , _ e_._g_. , a train , it is very hard to allocate resources without any service interruption . a distributed load balancing algorithm for fast moving vehicles is presented in . load balancing architectures for d2d communication are given in , which we discussed in section [ subsec : device - to - device communication architectures ] . interested readers may refer to for further details of load balancing in 5 g networks . channel access protocols allow several ues to share a transmission channel without any collision while utilizing the maximum channel capacity . * challenges in channel access control management in 5 g networks . * channel access control management in 5 g networks faces the inherent challenges associated with the current cellular networks , _ e_._g_. , synchronization , fairness , adaptive rate control , resource reservation , real - time traffic support , scalability , throughput , and delay . in addition , providing the currently available best channel in 5 g networks is prone to additional challenges , such as : high mobility of ues , working at higher frequencies ( above 3 ghz ) , different rats , dense networks , high qos , high link reliability , and the zero latency for applications and services . the authors proposed a frame - based medium access control ( fd - mac ) protocol for mmwave - based small - cells . fd - mac consists of two phases : ( _ i _ ) the scheduling phase , in which a sbs collects the traffic demands from the supported ues and computes a schedule for data transmission , and ( _ ii _ ) the transmission phase , in which the ues start concurrent transmissions following the schedule . a schedule , which is computed using a graph - edge coloring algorithm , consists of a sequence of topologies and a sequence of time intervals indicating how long each topology should be sustained . liu et al .
provided mac protocols for ues of small - cells and macrocells .two types of mac protocols for sbs s ues are suggested , as : contention - based random channel access ( crca ) , where ues randomly access the channel and send messages if the channel is available , and reservation - based channel access ( rca ) , which uses time division multiple access . for macrocells ues , they provided a mac protocol for crn - based 5 g networks , where sus sense a licensed channel until it is free or the residual energy of sus exceeds a predetermined threshold , for saving their battery .further , they evaluated a tradeoff between the network throughput and sensing overhead .the authors suggested two crn - based channel access techniques for cognitive sbss .the first channel access scheme is termed as contention - resolution - based channel access ( cca ) , which is similar to crca , is based on carrier sense multiple access .the other channel access scheme is termed as uncoordinated aggressive channel access ( uca ) , which works identically as rca , for aggressively using channels in a small - cell for increasing opportunistic spectrum access performance .nikopour et al . provided a multi - user sparse code multiple access ( mu - scma ) for increasing dl s spectral efficiency .mu - scma does not require the complete csi , and hence , provides high data rate and the robustness to mobility .nikopour et al . also provided an uplink contention based scma for massive connectivity , data transmission with low signaling overhead , low delay , and diverse traffic connectivity requirements .* open issues .* the current channel access protocols do not regard qos and latency challenges in 5 g networks .hence , there is a scope of designing algorithms for finding multiple reliable links with the desired qos and the zero latency . in this section ,we present security and privacy related challenges and a discussion of security and privacy protocols in the context of 5 g networks .* challenges in security and privacy in 5 g networks . *_ authentication _ is a vital issue in any network . due to the zero latency guarantee of 5 g networks , authentication of ues and network devicesis very challenging , since the current authentication mechanisms use an authentication server that takes hundreds of milliseconds delay in a preliminary authentication phase .a _ fast and frequent handover _ of ues over small - cells requires for a robust , efficient , and secure handoff process for transferring context information .security to _ multi - rats _selection is also challenging , since each rat has its own challenges and certain methods to provide security ; clearly , there is a need to provide overlapped security solutions across the various types of rats . _c - rans _ also inherit all the challenges associated with the cloud computing and wireless networks .in addition , several other challenges ( _ e_._g_. , authorization and access control of ues , availability of the network , confidentiality of communication and data transfer , integrity of communication and data transmission , accounting and auditing of a task , low computation complexity , and communication cost ) require sophisticated solutions to make a secure 5 g network . 
security and latency are correlated , as a higher level of security and privacy results in increased latencies . therefore , achieving zero - latency communication is cumbersome when combined with secure and privacy - preserving 5 g networks . monitoring is suggested for securing the network and detecting intruders . however , monitoring a large number of ues ( by a trusted authority ) is not a trivial task ; hence , we do not see monitoring as a preferred way to secure networks . yang et al . focused on physical layer security , which is independent of computational complexity and easily handles any number of devices . the physical layer security protocol considers the locations of ues and provides the best way for ues to securely select a mbs or a sbs without overloading the network . tehrani et al . provided a method for secure and private d2d communications , called close access , where d - ues have a list of other trusted d - ue devices , and all such ues can communicate directly using an encryption scheme while the remaining ues not in the list utilize mbs - assisted communication . an encryption - based video sharing scheme is also presented in . kantola et al . proposed a policy - based scheme that can prevent dos and spoofing attacks . a solution to secure handoff is given in and discussed in section [ subsec : handoff management in 5 g networks ] . * open issues . * the current security and privacy solutions for 5 g networks are not impressive and are likely unable to handle massive connections . we can clearly visualize a potential scope for developing _ latency - aware protocols _ along with security awareness that must consider secure data transmission , end - to - end security , secure and private storage , threat - resistant ues , and valid network and software access . it is already mentioned in section [ section : introduction ] that the development of 5 g networks requires the design and implementation of new methodologies , techniques , and architectures . we have reviewed some of the methodologies and technologies in the previous sections , such as : ( _ i _ ) full duplex radios in sections [ section : challenges in the development of 5 g ] and [ subsec : interference management in 5 g networks ] , ( _ ii _ ) crns in section [ subsec : cognitive radio network based architectures ] , ( _ iii _ ) d2d communication in section [ subsec : device - to - device communication architectures ] , ( _ iv _ ) multi - tier heterogeneous deployment or dense - deployment techniques in sections [ subsec : two - tier architectures ] , [ subsec : cognitive radio network based architectures ] , and [ subsec : device - to - device communication architectures ] , ( _ v _ ) c - rans in section [ subsec : cloud - based architectures ] , ( _ vi _ ) ` green ' communication systems in section [ subsec : energy - efficient architectures for 5 g networks ] , and ( _ vii _ ) techniques related to interference , qos , handoff , channel access , and load balancing in section [ sec : management issues in 5 g networks ] . in this section , we will briefly describe some techniques that are mentioned but not explained in earlier sections . * self - interference cancellation ( sic ) . * when a full duplex radio receives signals from another radio , it also receives interference from its own transmission , resulting in self - interference . hence , a full duplex radio has to implement techniques to cancel self - interference . sic techniques are classified into passive and active cancellations ; see .
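as a toy illustration of the active ( digital ) cancellation stage , the following python sketch subtracts an estimate of the self - interference obtained with a least - mean - squares ( lms ) adaptive filter driven by the known transmitted samples . the channel taps , filter length , and step size are illustrative assumptions and do not come from any of the cited works .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
tx = rng.standard_normal(n)                  # known transmitted baseband samples
h_si = np.array([0.8, 0.3, 0.1])             # assumed self-interference channel (unknown to the receiver)
si = np.convolve(tx, h_si)[:n]               # self-interference leaking into the receive chain
soi = 0.05 * rng.standard_normal(n)          # weak signal of interest plus noise
rx = si + soi                                # what the full duplex radio actually receives

# lms adaptive filter: estimate the self-interference from the known tx samples and subtract it
taps, mu = 4, 0.01
w = np.zeros(taps)
residual = np.zeros(n)
for i in range(taps, n):
    x = tx[i - taps + 1:i + 1][::-1]         # most recent tx samples, newest first
    err = rx[i] - w @ x                      # residual after subtracting the current estimate
    w += 2.0 * mu * err * x                  # lms weight update
    residual[i] = err

print("power before cancellation:", np.mean(rx[taps:] ** 2))
print("power after cancellation :", np.mean(residual[taps:] ** 2))
```

in practice , such digital cancellation is only the last stage ; passive isolation and analog cancellation are applied first so that the receiver front end is not saturated by its own transmission .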
as advantages ,the implementation of sic enables seamless global roaming , high - throughput services , and low - latency applications in a cost effective manner .* downlink and uplink decoupling ( dud ) .* in the current cellular networks , a ue is associated with a bs based on the received signal power in its dl channel , and then , uses the same bs for ul channel transmission .dud allows a ue to select a dl channel and a ul channel from two different bss , based on the link quality , the cell load , and the cell backhaul capacity .therefore , a ue may have the dl channel connected through a bs and the ul channel connected through a different bs , resulting in a user - centric 5 g architecture and improving the capacity of ul channels , which is a prime concern .* network function virtualization ( nfv ) . *nfv implements network functions such as network address translation , firewalls , intrusion detection , domain name service , the traffic load management , and caching through software running on commodity servers .however , the conventional networks implement these functions on dedicated and application specific servers .hence , nfv decreases the burden on network operators by not updating dedicated servers / hardware , thereby saves cost .* software - defined networking ( sdn ) . * sdn architectures partition network control functions and data forwarding functions , thereby the network control functions are programmable , and the network infrastructure handles applications and network services .sdn architectures can be divided into three parts , as : ( _ i _ ) _ the software controller _ : holds network control functions such as the network manager , apis , network operating system , and maintaining the global view of the network ; ( _ ii _ ) _ the southbound part _ : provides an interface and a protocol between the controller and sdn - enable infrastructure , where openflow is the most famous protocol that provides communication between the controller and the southbound part ; ( _ iii _ ) _ the northbound part _ : provides an interface between sdn applications and the controller .interested readers may refer to to find details of sdn , challenges in sdn , and applications of sdn .note that sdn , nfv , and c - rans offload functionalities to software running on commodity servers .however , sdn separates network control functions from data forwarding functions , while nfv implements network functions in software . besides that c - rans integrate both sdn and nfv to meet the scalability and flexibility requirements in the future mobile networks . * millimeter waves ( mmwave ) . *the current wireless bandwidth is not able to support a huge number of ues in 5 g networks .hence , researchers are looking at 30 ghz to 300 ghz frequency bands , where mmwave communication is proposed for achieving high - speed data transfer .the current research focuses on 28 ghz band , 38 ghz band , 60 ghz band , and the e - band ( 7176 ghz and 8186 ghz ) .however , mmwave has several challenges at the physical , mac , and network layers .there are a number of papers about mmwave , and hence , we are not discussing mmwave in details .interested readers may refer to . * machine - to - machine ( m2 m ) communication . *m2 m communication refers to the communication between ( network ) devices ( _ e_._g_. 
, sensors / monitoring - devices with a cloud ) without human intervention .some examples of m2 m communication are intelligent transport systems , health measurement , monitoring of buildings , gas and oil pipelines , intelligent retail systems , and security and safety systems .however , the development of m2 m communication involves several challenges to be handled in the future , as follows : connectivity of massive devices , bursty data , the zero latency , scalability in terms of supporting devices , technologies and diverse applications , fast and reliable delivery of messages , and cost of devices .in addition , the development of efficient algorithms for location , time , group , priority , and multi - hop data transmission management for m2 m communication is needed .interested readers may refer to .* massive mimo ( mmimo ) .* mmimo systems are also known as large - scale antenna systems , very large mimo , hyper - mimo , and full - dimension mimo .mmimo systems use antenna arrays with hundreds of antennas at mbs for simultaneously serving many ues with a single antenna in identical time and frequency .hence , expensive equipment are mounted on a mbs .a mmimo system relies on spatial multiplexing , which in turn relies on the channel knowledge at mbs , on both ul and dl channels .a mmimo system reduces latency and energy , simplifies the mac layer , shows robustness against intentional jamming , and increases the capacity due to spatial multiplexing . * visual light communication ( vlc ) .* vlc is a high - speed data transfer medium for short range los optical links in the future cellular networks .the light - emitting diodes ( leds ) provide vlc using amplitude modulation at higher frequencies and achieve higher data rates while keeping the led s primary illumination function unaffected .vlc can be used for outdoor applications , where high power laser - based equipment provide transmission links , and for indoor application , where leds provide short distance transmission links .vlc is an energy - efficient technology , works on a wider range of unregulated frequency bands , shows high spatial reuse , and inherits security due to los .however , vlc is sensitive to sunlight and not able to work for a long range without los , and hence , confined coverage .the implementation of vlc has still some unanswered questions , such as : how vlc will work in a long range without los and will it work for backhaul data transfer in multihop ? * fast caching .* caching is a way for storing temporary data for reducing data access from slow memory or the network . in a network ,content caching is popular and answers the request while responding in place of the application servers as a proxy , thereby reducing the amount of hits that are directly sent to the ultimate backend server .in caching , three decisions are prominent as : what to cache , where to cache , and how to cache ?the authors suggested that ues will have enough memory in the future , and they can work as a cache for any other ue , since a small amount of popular data requires to be cached .wang et al . 
provided a caching mechanism based content centric networking ( ccn ) , assuming 5 g networks will include ccn - capable gateways , routers , and mbss .ccn provides in - network data storage , also known as _universal caching _ , at every node in the network .the cached data is uniquely identified at each node .accordingly , a user can request for a particular content from the content cache of any device within the network , or the request is forwarded to the actual source of content storage .interested readers may refer to for finding details about some of the above mentioned methodologies and technologies .the zero latency , high speed data transfer , and ubiquitous connectivity are the salient features of 5 g networks that are expected to serve a wide range of applications and services . in this section, we enumerate the most prominent applications of 5 g networks , as follows : * personal usages . *this domain of 5 g networks would be capable of supporting a wide range of ues , from scalable to heterogeneous devices . also the data demands ( _ e_._g_. , multimedia data , voice communication , and web surfing ) , would be satisfied while keeping the qos requirements. * virtualized homes . * due to c - ran architectures , users may have only low cost ues ( _ e_._g_. , set - top box for tvs and residential gateways for accessing the internet ) with services of the physical and data link layers .all the other higher layers applications may move to the cloud for universal access and outsourced computation services .* smart societies .* it is an abstract term for connected virtualized homes , offices , and stores .accordingly , every digital and electronic services / appliances , _e_._g_. , temperature maintenance , warning alarms , printers , lcds , air conditioners , physical workout equipment , and door locks , would be interconnected in a way that the collaborative actions would enhance the user experience .similarly , smart stores would assist in filtering out irrelevant product details , sale advertisements , and item suggestions on the go .* smart grids .* the smart grids would decentralize the energy distribution and better analyze the energy consumption .this would allow the smart grids to improve efficiency and economic benefits .the 5 g networks would allow a rapid and frequent statistical data observation , analysis , and fetching from remote sensors and would adjust the energy distribution accordingly . * the tactile internet . *the tactile internet improves the user experience in a virtual environment to an extent of only milliseconds of interaction latency .the futuristic applications such as automated vehicle platooning , self - organizing transportation , the ability to acquire a virtual sense for physically challenged patients , synchronized remote smart - grids , remote robotics , and image processing with customized / panoramic view would use the tactile internet protocols . * automation . *self - driving vehicles would take place in the near future , and as a requirement , vehicles would communicate with each other in real - time .moreover , they would communicate with other devices on the roads , homes , and offices with a requirement of almost zero latency .hence , an interconnected vehicular environment would provide a safe and efficient integration with other information systems . * healthcare systems .* a reliable , secure , and fast mobile communication can strengthen medical services , _e_._g_. 
, frequent data transfer from patients body to the cloud or health care centers .therefore , the relevant and urgent medical services could be predicted and delivered to the patients very fast .* logistics and tracking . * the future mobile communication would also assist in inventory or package tracking using location based information systems .the most popular way would be to embed a radio frequency identification ( rfid ) tag and to provide a continuous connectivity irrespective of the geographic locations .* industrial usages . *the zero latency property of 5 g networks would help robots , sensors , drones , mobile devices , users , and data collector devices to have real - time data without any delay , which would help to manage and operate industrial functions quickly while preserving energy .in this section , we present some real demonstrations and testbeds for 5 g networks .docomo is developing a real - time simulator for evaluating and simulating mmimo , small - cell , and mmwave .the simulator has shown 1000-times increase in the system capacity , and 90% users achieved 1gbps data rate .docomo also performed a real experiment in the year 2012 , where data was uploaded at the speed of 10gbps .samsung performed data transmission experiment using mmwave at 28 ghz frequency band and achieved the world s first highest data rate of 1.2gbps on a vehicle running at the speed of 100 km/h .further , when the vehicle was nearby a stop , the data transmission speed was achieved up to 7.5gbps . in the experiment ,the peak data rate was more than 30-times faster as compared to the state - of - the - art 4 g technology .samsung is also developing array antennas that have nearly zero - footprint and reconfigurable antenna modes .ericsson has achieved the speed of 5gbps in a demonstration .huawei at university of surrey in guildford is developing a testbed , which would be used for developing methodologies , validating them , and verifying a c - ran for an ultra - dense network .the european commission funded metis has developed more than 140 technical components , including : air interface technologies , new waveforms , multiple access and mac schemes , multi - antenna and mmimo technologies , multi - hop communications , interference management , resource allocation schemes , mobility management , robustness enhancements , context aware approaches , d2d communication , dynamic reconfiguration enablers , and spectrum management technologies .they have also implemented some testbeds , as : three d2d communication related testbeds , one massive machine communications related testbed , and one related to waveform design .there are other projects funded by the european commission working on the development of 5 g networks , as : 5gnow ( http://www.5gnow.eu/ ) , tropic ( www.ict-tropic.eu ) , mcn ( www.mobile-cloud-networking.eu ) , combo ( www.ict-combo.eu ) , moto ( www.fp7-moto.eu ) , and phylaws ( www.phylaws-ict.org ) . 
_ small - cells _ :sprint , verizon , and at&t in the united states , vodafone in europe , and softbank in japan are developing femtocells .alcatel - lucent , huawei , and nokia siemens networks has been involved in the development of plug - and - play sbss ._ mmimo _ : titanmimo by nutaq is a testbed for 5 g mmimo .the titanmimo-4 testbed provides a realistic throughput by aggregating the entire mmimo channel into a central baseband processing engine .lund university and national instruments , austin are also involved in the development of testbeds for mmimo .docomo is developing a real - time simulator for mmimo ._ c - rans _ : china mobile research institute has developed c - rans .the european commission supported ijoin ( http://www.ict-ijoin.eu/ ) is also involved in developing c - ran architectures , especially , ranaas .ibm , intel , huawei , and zte are developing c - rans .openairinterface ( oai ) is an open experimentation and prototyping platform created by the mobile communications department at eurecom .oai used for two purposes as : c - rans and m2 m communication ._ full duplex radio _ :groups at stanford and rice universities are focusing on the development of full duplex radios ._ crns _ : vodafone chair mobile communications systems at the tu dresden has created a testbed for studying crns in the future cellular networks . _mmwave _ : docomo , huawei , the european commission funded projects , and new york university are developing methodologies and testbeds for mmwave and have successfully performed various experiments .in this survey , we discussed salient features , requirements , applications , and challenges involved in the development of the fifth generation ( 5 g ) of cellular mobile communication that is expected to provide very high speed data transfer and ubiquitous connectivity among various types of devices .we reviewed some architectures for 5 g networks based on the inclusion of small - cells , cognitive radio networks , device - to - device communication , and cloud - based radio access networks .we find out that energy consumption by the infrastructure is going to be a major concern in 5 g networks , and hence , reviewed energy - efficient architectures .we figured out several open issues , which may drive the future inventions and research , in all the architectures .the development of new architectures is not only a concern in 5 g networks ; there will be a need for handling other implementation issues in the context of users , _e_._g_. , interference removal , handoff management , qos guarantee , channel accessing , and in the context of infrastructures , _e_._g_. , load balancing . during our illustration, we included several new techniques , _e_._g_. , full duplex radios , dense - deployment techniques , sic , dud , mmwave , mmimo , and vlc .we also discussed the current trends in research industries and academia in the context of 5 g networks based on real - testbeds and experiments for 5 g networks .we conclude our discussion with a resonating notion that the design of 5 g infrastructure is still under progress .the most prominent issues are enlisted below , and to provide elegant solutions to these issues would contribute in early deployment as well as in long run growth of 5 g networks . * the security and privacy of devices , infrastructures , communication , and data transfer is yet to be explored .we believe that the current solutions based on encryption would not suffice in the future due to a huge number of devices. 
intuitively , a solution that would use an authenticated certificate may be feasible . * the development of network devices , infrastructures , and algorithms must be self - healing , self - configuring , and self - optimizing to perform dynamic operations as per the need , for example , dynamic load balancing , qos guarantee , traffic management , and pooling of residual resources . * cloud computing is an attractive technology in the current trend for various applications . we have reviewed c - rans ; however , the current solutions do not consider the impact of virtualization on backhaul data transfer , the trust of the cloud , inter - cloud communication , ubiquitous service guarantee , and real - time performance guarantee with zero latency . thus , the development of c - rans must address the big question of how much virtualization is good . * multi - rats are attractive solutions for accessing different rats . however , would it be possible for devices to use more than one rat at an identical time for uplink and downlink channels ? further , network densification studies must quantify how much network density is good . * the design , development , and usage of user devices , service - application models , and , especially , the network devices must be affordable to cater to the needs of an overwhelming number of users , service providers , and network providers . * the zero latency is a primary concern in most real - time applications and services , especially in the tactile internet . however , all the existing architectures and implementations of 5 g networks are far from achieving the zero latency . therefore , it is highly desirable that the latest real - time and ultra - reliable network configurations be improved towards a latency - free environment . _ more than 50 billion connected devices _ , ericsson , white paper , 2011 . available at : http://www.akos-rs.si/files/telekomunikacije/digitalna_agenda/internetni_protokol_ipv6/more-than-50-billion-connected-devices.pdf . _ the next generation of communication networks and services _ , the 5 g infrastructure public private partnership ( 5gppp ) , european commission , 2015 . available at : http://5g-ppp.eu/wp-content/uploads/2015/02/5g-vision-brochure-v1.pdf . available at : http://research.microsoft.com/en-us/projects/spectrum/fcc_dynamic_spectrum_access_noi_comments.pdf . _ requirements and vision for ng - wireless _ , outlook : visions and research directions for the wireless world , wireless world research forum , 2011 . available at : http://www.wwrf.ch/files/wwrf/content/files/publications/outlook/outlook7.pdf . n. bhushan , j. li , d. malladi , r. gilmore , d. brenner , a. damnjanovic , r. sukhavasi , c. patel , and s. geirhofer . network densification : the dominant theme for wireless evolution into 5 g . , 52(2):82 - 89 , 2014 . h. elshaer , f. boccardi , m. dohler , and r. irmer . load & backhaul aware decoupled downlink / uplink access in 5 g systems . in _ 2015 ieee international conference on communications , icc 2015 , london , united kingdom , june 8 - 12 , 2015 _ , pages 5380 - 5385 , 2015 . c. x. mavromoustakis , a. bourdena , g. mastorakis , e. pallis , and g. kormentzas . an energy - aware scheme for efficient spectrum utilization in a 5 g mobile cognitive radio network architecture . , 59(1):63 - 75 , 2015 . a. orsino , g. araniti , a. molinaro , and a. iera . effective rat selection approach for 5 g dense wireless networks .
in _ ieee 81st vehicular technology conference , vtc spring 2015 , glasgow , united kingdom , 11 - 14 may , 2015 _ , pages 1 - 5 , 2015 . a. osseiran , f. boccardi , v. braun , k. kusume , p. marsch , m. maternia , o. queseth , m. schellmann , h. d. schotten , t. hidekazu , h. m. tullberg , m. a. uusitalo , b. timus , and m. fallgren . scenarios for 5 g mobile and wireless communications : the vision of the metis project . , 52(5):26 - 35 , 2014 . t. s. rappaport , s. sun , r. mayzus , h. zhao , y. azar , k. wang , g. n. wong , j. k. schulz , m. samimi , and f. g. jr . millimeter wave mobile communications for 5 g cellular : it will work ! , 1:335 - 349 , 2013 . j. vieira , s. malkowsky , k. nieman , z. miers , n. kundargi , l. liu , i. wong , v. wall , o. edfors , and f. tufvesson . a flexible 100-antenna testbed for massive mimo . in _ ieee globecom workshops _ , pages 287 - 293 , 2014 . c. wang , f. haider , x. gao , x. you , y. yang , d. yuan , h. m. aggoune , h. haas , s. fletcher , and e. hepsaydir . cellular architecture and key technologies for 5 g wireless communication networks . , 52(2):122 - 130 , 2014 . y. zhou , d. li , h. wang , a. yang , and s. guo . qos - aware energy - efficient optimization for massive mimo systems in 5 g . in _ sixth international conference on wireless communications and signal processing , wcsp _ , pages 1 - 5 , 2014 .
* abstract . * the rapidly increasing number of mobile devices , voluminous data , and higher data rates are pushing a rethinking of the current generation of cellular mobile communication . the next or fifth generation ( 5 g ) cellular networks are expected to meet high - end requirements . the 5 g networks are broadly characterized by three unique features : ubiquitous connectivity , extremely low latency , and very high - speed data transfer . the 5 g networks would provide novel architectures and technologies beyond the state of the art . in this paper , our intent is to find an answer to the question : `` _ what will be done by 5 g and how ? _ '' we investigate and discuss serious limitations of the fourth generation ( 4 g ) cellular networks and the corresponding new features of 5 g networks . we identify challenges in 5 g networks and new technologies for 5 g networks , and present a comparative study of the proposed architectures , which can be categorized on the basis of energy - efficiency , network hierarchy , and network types . interestingly , the implementation issues , _ e_._g_. , interference , qos , handoff , security - privacy , channel access , and load balancing , hugely affect the realization of 5 g networks . furthermore , our illustrations highlight the feasibility of these models through an evaluation of existing real experiments and testbeds . _ keywords _ : cloud radio access networks ; cognitive radio networks ; d2d communication ; dense deployment ; multi - tier heterogeneous network ; privacy ; security ; tactile internet .
pattern formation in reaction - diffusion systems has emerged as a mathematical paradigm to understand the connection between pattern and process in natural and socio - technical systems . the basic mechanisms of pattern formation by local self - activation and lateral inhibition , or short - range positive feedback and long - range negative feedback , are ubiquitous in ecological and biological spatial systems , from morphogenesis and developmental biology to adaptive strategies in living organisms and spatial heterogeneity in predator - prey systems . heterogeneity and patchiness in vegetation dynamics , associated with turing patterns , have been proposed as a connection between pattern and process in ecosystems , suggesting a link between spatial vegetation patterns and vulnerability to catastrophic shifts in water - stressed ecosystems . the theory of non - equilibrium self - organization and turing patterns has recently been extended to network - organized natural and socio - technical systems , including complex topological structures such as multiplex , directed and cartesian product networks . self - organization is rapidly emerging as a central paradigm to understand neural computation . the dynamics of neuron activation , and the emergence of collective processing and activation in the brain , are often conceptualized as dynamical processes in network theory . self - organized activation has been shown to emerge spontaneously from the heterogeneous interaction among neurons , and is often described as pattern formation in two - population networks . localization of neural activation patterns is a conceptually challenging feature in neuroscience . cell assemblies , or small subsets of neurons that fire synchronously , are the functional unit of the cerebral cortex in the hebbian theory of mental representation and learning . associative learning forms the basis of our current understanding of the structure and function of neural systems . it is also the modeling paradigm for information - processing artificial neural networks . the emergence of cell assemblies in complex neural networks is a fascinating example of pattern formation arising from the collective dynamics of interconnected units . understanding the mechanisms leading to pattern localization remains a long - standing problem in neuroscience . here we show that simple mechanisms of nodal interaction in heterogeneous networks allow for the emergence of robust local activation patterns through self - organization . the simplicity and robustness of the proposed single - species pattern - forming mechanisms suggest that analogous dynamics may explain localized patterns of activity emerging in many network - organized natural and socio - technical systems . we demonstrate that robust local , quantized activation structures emerge in the dynamics of network - organized systems , even for relatively simple dynamics .
we propose a minimal - ingredients , phenomenological model of nodal excitation and interaction within a network with heterogeneous connectivity .our goal is to demonstrate that a simple combination of local excitation of individual units , combined with generic excitatory / inhibitory interactions between connected units , leads to self - organization , and can explain the spontaneous formation of cell assemblies without the need for synaptic plasticity or reinforcement .our model can be understood as a network analogue of the swift - hohenberg continuum model , and is able to produce a complex suite of localized patterns .the requirements are minimal and general : simple local dynamics based on canonical activation potentials , and interactions between nodes that induce short - range anti correlation and long - range correlation in activation . because of their robustness and localization , self - organized structures may provide an encoding mechanism for information processing and computation in neural networks .we restrict our analysis to the simplified case of symmetric networks , but our main results can be generalized to other network topologies , including directed and multiplex networks .a node s state of activation , measured through a potential - like variable , is driven by local excitation dynamics and by the interaction with other nodes in the network via exchanges through the links connecting them . in dimensionless quantities , the proposed excitation - inhibition model for the evolution of potential , , in each node , is given by the model where is a dynamic forcing term , representing a double well potential , and is a bifurcation parameter that will be used to establish the conditions for stability and localization of the response patterns ( fig .the currents , , represent the excitatory / inhibitory interactions among nodes in the network .the structure of these nodal interactions is one of the key pattern forming mechanisms in the present model .we consider short - range anti - correlation , and higher - order , longer - range dissipative interactions .this two - level interaction structure , which induces anti correlation in the short range ( nearest - neighbors , or first - order connectivity ) , and long - range correlation ( second - nearest neighbors , or second - order connectivity ) is represented in fig .mathematically , we express the integration of synaptic contributions as the structure of the above nodal interactions turns the dynamics into a network anologue of the swift - hohenberg continuum model , which is a paradigm for pattern - forming systems . 
the simplest form for the interaction matrices representing these correlation / anti - correlation effects (while ensuring that the interaction fluxes conserve mass or charge ) is based on network representation of laplacian and bi - laplacian operators , and , respectively .the network laplacian , , is a real , symmetric and negative semi - definite matrix , whose elements are given by , where is the adjacency matrix of the network , is the degree ( connectivity ) of node and is the kronecker delta .a diffusive , fickian - type flux of the activation potential to node is expressed as ( see figure [ fig : fig1]a top ) .plain waves and wavenumbers on a network topology are represented by the eigenvectors and the eigenvalues and of the laplacian matrix , which are determined by the equation , with .all eigenvalues are real and non - positive and the eigenvectors are orthonormalized as , where .the elements of the bi - laplacian matrix of a network can be expressed as where the matrix has information about second order nodal connectivity and takes nonzero values if node is two jumps away from node .the operation models negative diffusion ( inhibition ) from the first neighbors of node and at the same time diffusion from its two - jump neighborhood ( see figure [ fig : fig1]a bottom ) .the bi - laplacian , , has the same eigenvectors as ( i.e. ) and its eigenvalues are the square of those of , . to understand the properties and pattern - forming mechanisms in our model, we first investigate the stability of flat states of the dynamical system : flat , stationary solutions of eq .( 4 ) satisfy , where the nodal state of activation is equal for all nodes in the network , . for ,there are three uniform solution branches given by and /2 $ ] .it is well known in one and two dimensional continuum spaces that these uniform states can become unstable and a wealth of self - organized patterns can arise . in a linear stability analysis ,the stability of flat stationary solutions to small perturbations is determined by the eigenvalues of the laplacian and bi - laplacian matrices . introducing small perturbations , , to the uniform state , , the linearized version of eq .( 4 ) takes the form , where . after expanding the perturbation over the set of the laplacian eigenvectors , , where is the expansion coefficient ,the linearized equation is transformed into a set of independent linear equations for the different normal modes : where are the eigenvalues of the laplacian matrix .the -mode is unstable when re is positive .instability occurs when one of the modes ( the critical mode ) begins to grow . at the instability threshold , re for some and re for all other modes . 
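to make the operators above concrete , the following python sketch builds the laplacian and bi - laplacian of a barabási - albert graph , checks that the bi - laplacian eigenvalues are the squares of the laplacian ones , and evaluates a growth rate for each mode . the linearization value fprime_u0 and the swift - hohenberg - like form of the dispersion relation are illustrative assumptions , since the exact coefficients of the model are not reproduced here .

```python
import numpy as np
import networkx as nx

# build a scale-free graph and its laplacian / bi-laplacian
g = nx.barabasi_albert_graph(n=200, m=2, seed=1)
a = nx.to_numpy_array(g)
k = a.sum(axis=1)
lap = a - np.diag(k)              # L_ij = A_ij - k_i delta_ij (negative semi-definite)
bilap = lap @ lap                 # bi-laplacian: same eigenvectors, squared eigenvalues

lam, phi = np.linalg.eigh(lap)    # laplacian eigenvalues (<= 0) and orthonormal eigenvectors
assert np.allclose(np.sort(np.linalg.eigvalsh(bilap)), np.sort(lam ** 2))

# growth rate of mode alpha around a flat state u0 (illustrative linearization):
# lambda_alpha = fprime_u0 - (lam_alpha + 1)**2, a network analogue of the
# swift-hohenberg dispersion relation; fprime_u0 is an assumed value, not the paper's.
def growth_rate(lam_alpha, fprime_u0=-0.05):
    return fprime_u0 - (lam_alpha + 1.0) ** 2

rates = growth_rate(lam)
print("most unstable mode eigenvalue:", lam[np.argmax(rates)], "max growth rate:", rates.max())
```

a mode is unstable whenever its growth rate is positive , so scanning the printed rates as a function of the bifurcation parameter reproduces the kind of stability diagrams described in figure 1 .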
in figure 1b and c we summarize the linear stability analysis of the flat states of our model on a scale - free network constructed using the barabsi - albert model ( ba ) of network growth and preferential attachment .we find that , indeed , there is a large parameter range for which the resting potential is stable .as we demonstrate below , in the stable regime , input stimuli may trigger localized patterns of activation .localized activation patterns are possible due to the particular structure of the model , with short- and long - range nodal interactions .mathematically , the localized states are homoclinic orbits in the network space around the base resting state , .the existence of these homoclinic orbits can be studied using the technology developed for the linear stability analysis .since homoclinic orbits leave the flat state as we approach a small neighborhood ( cluster ) of the network , the fixed point must have both stable and unstable eigenvalues .we linearize eq .( 4 ) around and write , , arriving to the relation . since the laplacian eigenvalues are real and non - positive values , we can write them in the form .if the topological eigenvalues of form a complex quartet . for collide pairwise on the imaginary axis , and for they split and remain on the imaginary axis . for two of the topological eigenvaluescollide at the origin and for they move onto the real axis .these results are summarized in figure [ fig : fig2]a .the topological eigenvalues in the neighborhood of are characteristics of the reversible resonance bifurcation .theory shows that under certain conditions the hyperbolic regime contains a large variety of topologically localized states . to understand the onset of localized patterns for different model parameters and input stimuli, we construct the bifurcation diagram of the resting state , as a function of the total potential energy of the stimulus and bifurcation parameter , in the vicinity of .a single bifurcation branch constructed using a pseudo - arclength continuation method has a characteristic snaking " structure of localized states with varying activation energy ( fig .[ fig : fig2]b ) . as the system jumps from one steady state branch to the next one , a new neighborhood in the network is being activated .figure [ fig : fig2]c visualizes the different steady localized states of the six different branches as they are spotted in the diagram of figure [ fig : fig2]b .the response of the system is _ quantized _ : the transition from one pattern of activation to another one is discontinuous as we vary the activation energy , or the parameter ( fig .[ fig : fig2]b ) .these jumps in activation energy correspond to the addition of neighbor nodes to the cluster ( fig .[ fig : fig2]c ) . the discontinuous quantized nature of the network response leads to _ robustness _ in the local , final equilibrium patterns with respect to the input signal amplitude . to gain insight into the robustness of the localized patterns of activation, we performed a synthetic test in which we initially stimulate a specific neighborhood in the network , where we set ( i.e. a step - like function signal in network topology ) and let the system evolve to equilibrium without decay .we gradually increase the amplitude of the initial signal , and record the final energy values of the equilibrium , localized states . 
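a minimal numerical sketch of this stimulation experiment is given below . the local forcing and the coupling term are assumptions chosen to mimic a network swift - hohenberg model ( a quadratic - cubic nonlinearity together with laplacian plus bi - laplacian coupling ) rather than the exact equations of the original formulation , and all parameter values are purely illustrative .

```python
import numpy as np
import networkx as nx

g = nx.barabasi_albert_graph(n=200, m=2, seed=1)
a = nx.to_numpy_array(g)
lap = a - np.diag(a.sum(axis=1))                              # network laplacian
eye = np.eye(len(a))
coupling = (eye + lap) @ (eye + lap)                          # (I + L)^2: laplacian + bi-laplacian terms

def rhs(u, r=-0.3, b=1.8):
    # assumed local forcing (quadratic-cubic, supports bistability) plus swift-hohenberg-like coupling
    return r * u + b * u ** 2 - u ** 3 - coupling @ u

def relax(u0, dt=1e-4, steps=50_000):
    u = u0.copy()
    for _ in range(steps):                                    # explicit euler towards an approximate steady state
        u += dt * rhs(u)
    return u

# step-like stimulus on a hub and its first neighbours, as in the synthetic test described above
hub = int(np.argmax(a.sum(axis=1)))
cluster = [hub] + list(g.neighbors(hub))
for amp in (0.05, 0.5, 1.0):
    stim = np.zeros(len(a))
    stim[cluster] = amp
    u_eq = relax(stim)
    print(f"stimulus amplitude {amp:4.2f} -> activation energy {np.sum(u_eq ** 2):.4f}")
```

the activation energy printed as a function of the stimulus amplitude is the kind of diagnostic used to detect the activation threshold and the quantized plateaus discussed next .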
for small amplitudes the perturbation relaxes back to the resting state , and no activation pattern is elicited . there is a threshold in the energy of the input stimulus beyond which robust quantized states are formed . the states are robust in the sense that further increments in the input signal amplitude do not change the final equilibrium pattern ( fig . [ fig : fig3]a ) . the self - organized local structures are also robust with respect to random noise in the initial stimulus . we perform monte carlo simulations that probe the impact of the noise - to - signal ratio on the energy of the emerging quantized state . we have confirmed that the presence of small - amplitude noise has no effect on the equilibrium states of nodal activity . as can be expected , we do observe a departure from the energy of the base equilibrium state when the noise - to - signal ratio is sufficiently large , thereby masking the base stimulus altogether ( fig . [ fig : fig3]b ) . our model predicts a range of parameter values where localized states disappear , and are replaced by _ global activation patterns _ . mathematically , global patterns are possible when the non - active stationary solution is perturbed outside the parameter region of localized patterns ( ) . these global turing patterns can be understood and modeled using the mean - field approximation ( mfa ) , a method that segregates nodes according to their degree and has been successfully used to approximate a wide variety of dynamical processes in heterogeneous networks , like epidemic spreading , activator - inhibitor models and voter models . this theory allows us to reduce the problem to a single equation for the membrane potential for all the nodes in the system . since in our model both the degree and the two - jump degree play an important role in the formation of patterns , we use a mfa where we assume that all the nodes with the same degree and two - jump degree behave in the same way . we start by writing eq . ( 3 ) in the form where the local fields felt by each node , , and are introduced . these local fields are then approximated as , and , where is the degree and is the number of secondary connections of node ( two - jump degree ) . the global mean fields are defined by where and , where . here , is the number of nodes with degree , is the number of nodes with number of two - jump neighbors and is the size of the network . in the above expressions , with we denote the sum over the nodes with degree and with the sum over the nodes with two - jump nodal connectivity . with this approximation , the individual model equation on each node interacts only with the global mean fields and , and its dynamics is described by eq . ( 7 ) . since all nodes obey the same equation , we have dropped the index and introduced the parameters and . the activation potential depends now on the global fields and as well as on the parameter combination ( ) , i.e. . if the global mean fields and are given , the combination ( ) plays the role of a bifurcation parameter that controls the dynamics of each node in the system . the time - independent version of the above mean - field equation can be written as a third - degree algebraic equation that we solve times , once for each of the nodes in the system . for each node , we get three solutions , which can be stable or unstable depending on the sign ( negative or positive , respectively ) of the linearized operator .
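the mean - field construction can be mimicked in a few lines of code . the cubic below is an assumed stand - in for the stationary nodal balance ( the exact coefficients and the precise definition of the global mean fields are not reproduced here ) , and the degree - dependent coupling beta is purely illustrative .

```python
import numpy as np
import networkx as nx

g = nx.barabasi_albert_graph(n=200, m=2, seed=1)
a = nx.to_numpy_array(g)
deg = a.sum(axis=1)                       # degree k_i
two_jump = (a @ a > 0).sum(axis=1)        # size of the two-jump neighbourhood (rough proxy for q_i)

r, b, h = -0.3, 1.8, 0.1                  # illustrative parameters; h plays the role of a global mean field

def nodal_branches(beta):
    # assumed stationary mean-field balance: 0 = r*u + b*u**2 - u**3 - beta*(u - h)
    coeffs = [-1.0, b, r - beta, beta * h]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    # stability from the sign of d(rhs)/du at each root (negative -> stable)
    stable = (r - beta + 2 * b * real - 3 * real ** 2) < 0
    return real, stable

for i in [int(np.argmax(deg)), int(np.argmin(deg))]:
    beta = 0.1 * deg[i] + 0.01 * two_jump[i]      # assumed degree-dependent coupling strength
    u_star, stable = nodal_branches(beta)
    print(f"node {i}: degree {deg[i]:.0f}, branches {np.round(u_star, 3)}, stable {stable}")
```

sweeping beta over the values found in the network and collecting the stable branches gives the single - node bifurcation diagrams that are projected onto the turing pattern in the next paragraph .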
after tuning the bifurcation parameter to a negative value, we can compute the global turing pattern from direct numerical simulations and determine the global mean fields. each node in the network is characterized by its degree and second nodal connectivity, so that it possesses a certain parameter combination. substituting these computed global mean fields as well as the values of the degree and two - jump connectivity into equation ( 7 ), bifurcation diagrams of a single node can be obtained and projected onto the turing pattern. in figure ( 4a ) we show for our toy network model that the stable branches of the nodal bifurcation diagrams calculated using the mfa fit very well the computed turing pattern. we further assess how the global pattern formation depends on the network topology and find that when the degree distribution is narrower than that of a scale - free network, the distribution of the nodal parameter combination is more homogeneous and therefore the stationary turing patterns look smoother ( figure 4b and c ). therefore, global network turing patterns are essentially explained by the bifurcation diagrams of individual nodes coupled to the global mean fields, with the coupling strength determined by their degree and two - jump connectivity.
our results suggest a new mechanism for the formation of localized nodal assemblies in networks, arising from long - range second - neighbor interactions. rather than relying on reinforcing mechanisms such as synaptic plasticity, we show that localized, robust nodal assemblies are possible due to self - organization. the emergence of localized activation patterns derives from the simple and general functional structure of our proposed conceptual model : local dynamics based on activation potentials, and interactions between nodes that induce short - range anticorrelation and long - range correlation in node - to - node exchanges. the proposed system is a network analogue of the swift - hohenberg continuum model, and is able to produce a complex suite of robust, localized patterns. hence, the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self - organization, even in the absence of synaptic reinforcements, and these self - organized, local structures can provide robust functional units to understand natural and technical network - organized processes.
funding for this work was provided by an mit vergottis graduate fellowship and a mcdonnell postdoctoral fellowship in studying complex systems ( to cn ), the us department of energy through a doe career award ( grant de - sc0003907 ) and a doe mathematical multifaceted integrated capability center ( grant de - sc0009286 ) ( to rj ), and a ramón y cajal fellowship from the spanish ministry of economy and competitiveness ( to lcf ).
cn, rj and lcf designed the research, performed the analysis and wrote the manuscript. the authors declare no competing financial interests.
figure 1 : the inset in panel ( a ) shows the potential landscape ( minus the integral of the local nonlinearity with respect to u ), which exhibits a single well with an inflection point, a necessary condition for localized patterns to exist. nodes interact in the network through diffusion - like exchanges via the links connecting them : the network laplacian operator represents short - range diffusion of the species in the system ( top ), while the network bi - laplacian operator induces short - range anti - correlation with the nearest neighbors and long - range correlation with the second - nearest neighbors ( bottom ). ( b, c ) linear stability analysis of the flat stationary solutions of the model on a barabási - albert network of size n = 2000. ( b ) the maximum value of the growth rate as a function of the bifurcation parameter for the two flat stationary states ( brown and blue ); when the maximum growth rate is negative, the state is stable with respect to small non - uniform perturbations. the inset shows the growth rate as a function of the laplacian eigenvalue ( eq. ( 5 ) ) for three values of the bifurcation parameter indicated in the main diagram. ( c ) the flat stationary solutions as a function of the bifurcation parameter on the same network; solid ( dotted ) lines represent stability ( instability ) with respect to small non - uniform perturbations, and the labelled points mark the bifurcations. the pink shaded region is where we observe localized self - organization patterns with respect to the trivial solution; for values outside that region we get either global activation patterns or any perturbation relaxes back to the flat stationary solution.
figure 2 : ( a ) for positive values of the bifurcation parameter the trivial stationary solution is stable with respect to small uniform random perturbations ( solid line ), while for negative values this state becomes unstable ( dotted line ). the insets show the topological eigenvalues of the trivial state as the bifurcation parameter is tuned; their behavior near the origin indicates the possibility of localized patterns for small positive values of the parameter ( pink shaded region ). ( b ) a single branch of the bifurcation diagram on a barabási - albert network of size n = 200 with minimum degree 1; solid ( dotted ) lines represent stable ( unstable ) localized solutions. ( c ) visualization of the localized patterns corresponding to the states indicated on the bifurcation diagram in ( b ); gray nodes are non - active, red and blue nodes are active with opposite signs of the potential, and node size is proportional to eigenvector centrality.
figure 3 : ( a ) response to a step - like input signal applied at the nearest and next - nearest neighbors of the best connected node in the system. when the amplitude is very small, the initial perturbation relaxes back to the trivial solution and no quantized state is formed ( i ); as the amplitude of the input signal is increased, fragile quantized states are formed ( ii ); when the amplitude is larger than a threshold value, a very robust quantized state is formed ( iii ), and further increases in the input signal amplitude lead to the same quantized state. the insets visualize the input signal on the network topology ( amplitude increasing from left to right ) as well as the resulting equilibrium state. ( b ) the energy of the resulting quantized state as a function of the ratio between the signal amplitude and the noise amplitude : starting from the step - like input signal that gives the robust quantized state, random noise is added at the already perturbed neighborhood and the energy of the resulting quantized state is computed over 100 realizations. a barabási - albert scale - free network of size n = 200 and mean degree 4 is used.
figure 4 : formation of global stationary turing patterns once the initial exponential growth of the perturbation is saturated by the nonlinearity. ( a ) left : the activation profile as a function of node index for a global stationary turing pattern from direct simulation ( blue crosses ), compared with the mean - field bifurcation diagram of a single activator - inhibitor system coupled to the computed global mean fields ( black curves : stable branches; grey curves : unstable branches ). nodes are sorted by increasing connectivity, and nodes with the same degree are sorted by increasing two - jump connectivity ( see inset ). the same barabási - albert network as in fig. 2 is used, with a negative value of the bifurcation parameter; we have confirmed that similar results hold for larger network sizes. right : visualization of the global activity pattern on the network topology. ( b ) the activation profile of global stationary turing patterns from direct simulation on an erdős - rényi random network, together with the stable branch of the mean - field approximation ( black curve ). ( c ) the same for a barabási - albert scale - free network with the same mean degree and the same number of nodes as the erdős - rényi network of ( b ); nodes are sorted by increasing connectivity and two - jump connectivity.
self - organization and pattern formation in network - organized systems emerge from the collective activation and interaction of many interconnected units . a striking feature of these non - equilibrium structures is that they are often localized and robust : only a small subset of the nodes , or cell assembly , is activated . understanding the role of cell assemblies as basic functional units in neural networks and socio - technical systems emerges as a fundamental challenge in network theory . a key open question is how these elementary building blocks emerge , and how they operate , linking structure and function in complex networks . here we show that a network analogue of the swift - hohenberg continuum model , a minimal - ingredients model of nodal activation and interaction within a complex network , is able to produce a complex suite of localized patterns . hence , the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self - organization , even in the absence of synaptic reinforcements . our results show that these self - organized , local structures can provide robust functional units to understand natural and socio - technical network - organized processes .
quantum teleportation is the most fundamental protocol in quantum information science , and indeed has always played a crucial role in the progress of quantum information theory and technology ( for a good review , see ) .the standard teleportation scheme ( sts ) transfers an unknown quantum state from alice to bob as follows : alice performs a joint measurement on the state to be teleported and half of the previously shared entangled state , tells the outcome of the measurement to bob , and bob applies a unitary transformation , depending on the outcome , to the remaining half of the entangled state . to ensure no - signaling ( no faster than light communication ) ,every teleportation scheme must accompany some sort of communication and bob s operation depending on the communication content . in so - called port - based teleportation ( pbt ) , bob s operation is quite simple : he regards his half of ( large ) entangled state as a collection of output ports , and he only picks up an output port ( and discards all the other ports ) .the output port contains the teleported state as it is , without any correcting operation on the output port .the absence of the correcting operation leads to an application of pbt as a universal programmable quantum processor , which is a device to play back the record of the ( past ) experiences of a quantum object .the universality of the device is so powerful that arbitrary experiences can be recoded and played back ( just like a machine in science fiction to relive your childhood ) , including not only those described as unitary evolution but also measurements ( working as a quantum multimeter in this case ) .the drawback is that huge amount of entanglement is necessary to increase the fidelity or success probability .it has been shown , however , that the most of the huge amount of entanglement can be recycled for subsequent pbt .moreover , it has been shown in that a combined protocol of pbt and sts works as if it could break the barrier of spacetime as follows : suppose that alice and bob , separated in spacetime , each has a quantum system and , respectively .alice can then consider , _ without waiting for communication _ , that a port of her half of the shared entangled state contains the state of , i.e. she already has the non - local state of somewhere in her hand ( though she can know which port contains the state only after the communication from bob ) .this technique is used for attacking position - based cryptography and for instantaneous non - local quantum computation .moreover , pbt has been used as a tool to investigate the relation of quantum communication complexity and the bell non - locality .however , the properties of pbt have not been completely clarified yet , in particular , for teleporting a high dimensional quantum state .this is because , in contrast to sts , the simple multiple use of pbt for a qubit ( quantum bit ) does not result in pbt for higher dimension . for teleporting a state of a qudit ( -dimensional system ) , only a lower bound of the teleportation fidelity of deterministic pbt and an upper bound of the success probability of probabilistic pbt been obtained so far .more studies will be necessary to clarify the properties of pbt . in this paper , we make some remarks on pbt . after recalling the formulation of pbt in sec .[ sec : formulation of pbt ] , in sec .[ sec : remarks for , we pay attention to the fact that , in most cases of , the optimal measurements of alice agree with each other . in sec . 
[sec : recoverable pbt ] , we propose a hybrid protocol between pbt and sts ( say recoverable pbt ) , where bob has another choice ( in addition to adopt usual pbt ) to adopt a faithful teleportation by utilizing all the output ports . in sec .[ sec : rederivationo of probability bound ] , we consider the setting of the port - based superdense coding , a dual protocol to pbt , and rederive the upper bound of success probability of probabilistic pbt. this bound is tight even for and . in sec .[ sec : fidelity bound due to monogamy ] , we obtain an upper bound of the teleportation fidelity by using the entanglement monogamy relation in asymmetric universal cloning . in sec .[ sec : port - based superdense coding ] , we finally remak that the superdense coding capacity can be asymptotically achieved in a limit different from the fidelity , and hence port - based superdense coding is possible .a summary is given in sec .[ sec : summary ] .to begin with , let us recall the formulation of ( deterministic ) pbt , where bob has output ports and a teleported state appears in one of the ports without any correcting operation on each port . as a preparation of pbt , bob has qudits : , , , , where each corresponds to the output port of pbt . in this paper , , , denoted by as a whole .alice also has qudits : , , , , which are denoted by as a whole .let us then describe an entangled state between and used for pbt as -spins ( ) , and and will be used interchangeably .the spin basis is denoted by ( , , ) .then , is a state of spin 0 in two -spins , which is maximally entangled between the two .the operator specifies the actual form of , and so that is normalized .note that , in pbt , the teleportation fidelity is maximized when in general , i.e. when is not maximally entangled . to teleport the state of the qudit , alice performs a joint measurement with possible outcomes ( , , , ) on the and qudits .let us denote the positive operator valued measure ( povm ) of her measurement by and hence . when alice obtains the outcome , the state of qudit is close to the state of the qudit as it is .it is then found that the entanglement fidelity of pbt is given by ,\ ] ] where , and denotes the qudits except for ( i.e. ) . in analyzing the properties of pbt , the following operator : frequently plays a crucial role . indeed , alice s optimal measurement in the case of and is the square - root measurement ( srm ) [ also known as a pretty good measurement ( pgm ) or least - squares measurement ( lsm ) ] for distinguishing the quantum signals , and the corresponding entanglement fidelity is given by to investigate the general properties of , let us decompose into the spin components : where is an identity on the subspace where the total spin angular momentum of is ( is the possible minimum value ) .then , is also decomposed into since the addition of spin 0 and spin results in spin only , each term is an operator on the subspace of total spin ( though the spin function constructed by the addition is different , depending on ) , and hence is also an operator on the subspace of total spin .therefore , for , i.e. is block diagonal with respect to the total spin angular momentum ( and clearly its -component also ) of spins ( ) , but the maximum momentum is limited to . 
as far as we know ,the eigenvalues and eigenstates of for have not been obtained yet , which leads to the difficulty in analyzing pbt in higher dimension .the only exception is the maximum eigenvalue , which is proved in appendix [ sec : maximum eigenvalue of rho ] to be this leads to a known monogamy relation of singlet fraction for a multipartite state such that which explains the fidelity limit of symmetric universal cloning in a simpler way .fortunately , all the eigenvalues and eigenstates of for can be analytically obtained as we showed in , but the results are not simple . therefore , it may be worth to summarize it again in a tractable way .a special property held for is that the spin angular mometum of the spins ( denoted by hereafter ) is a good quantum number , i.e. is block diagonal with respect to also .indeed , for is written as where is an identity on the subspace where total spin angular momentum of is and .note that , since the total spin of is the result of the addition of and -spin of , holds .when used for pbt is fixed to a maximally entangled state ( ) , the optimal measurement of alice to provide the maximum entanglement fidelity is srm , i.e. the povm elements are given by here , it is implicitly assumed that the excess term is added to every so that . since in takes a value only for as mentioned before , we have where is an identity on the support of . the corresponding entanglement fidelity can be more increased by optimizing .the optimization result is }(j)(n+2)}}\sin\frac{\pi(2j+1)}{n+2}\openone(j)_a , \label{eq : deterministic optimal o}\ ] ] where }(j) ] .we showed in the corresponding optimal measurement of alice in the form of .however , when the actual povm elements are derived from , it will be found that those agree with eq .( [ eq : srm ] ) . note that , since is block diagonal with respect to and the optimal is an identity on each subspace, we have =0 ] for the other output port because .namely , the fully entangled fraction of can take this value , at least .putting and into eq .( [ eq : singlet monogamy ] ) , we obtain in this way , the monogamy relation in asymmetric universal cloning bounds the entanglement fidelity of pbt from above by .note that this bound is tight ( leaving for the coefficient ) for , where .superdense coding is a protocol dual to quantum teleportation , where the classical information capacity of bits is achieved per qudit sent from bob to alice . in this section, we remark that the capacity bits can be asymptotically achieved , i.e. port - based superdense coding is possible in the setting of fig .[ fig : pbsdc ] . 
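alice 's measurement in this construction is the square - root ( pretty good ) measurement. as a hedged sketch, the routine below builds srm povm elements for an arbitrary set of signal operators, spreading the left - over projector uniformly so that the elements sum to the identity, as described above; the pbt - specific signal operators themselves are not constructed here.

```python
# generic square-root (pretty-good) measurement: Pi_i = rho^{-1/2} eta_i rho^{-1/2},
# with rho = sum_i eta_i and rho^{-1/2} taken on the support of rho; the excess
# projector is distributed uniformly so that the POVM is complete.
import numpy as np

def square_root_measurement(etas):
    dim = etas[0].shape[0]
    rho = sum(etas)
    w, v = np.linalg.eigh(rho)
    inv_sqrt = np.zeros_like(rho)
    for wi, vi in zip(w, v.T):
        if wi > 1e-12:                        # pseudo-inverse square root on the support
            inv_sqrt += np.outer(vi, vi.conj()) / np.sqrt(wi)
    povm = [inv_sqrt @ e @ inv_sqrt for e in etas]
    excess = np.eye(dim) - sum(povm)          # projector onto the kernel of rho
    return [p + excess / len(etas) for p in povm]

# toy check with random rank-one signal operators
rng = np.random.default_rng(0)
def random_signal(d):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    return 0.25 * np.outer(psi, psi.conj())

povm = square_root_measurement([random_signal(4) for _ in range(4)])
print(np.allclose(sum(povm), np.eye(4)))      # completeness
```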
when bob sends qudit to alice , the probability that alice can obtain the outcome by the same measurement as deterministic pbt is given by eq .( [ eq : probablity from fidelity ] ) .the entanglement fidelity employing srm and maximally entangled is lower bounded by , but this bound has been slighly improved in as the derivation of this bound using a convenient property of , instead of using , is given in appendix [ sec : derivation of fidelity lower bound ] .we then have using this no - error probability , the mutual information between bob and alice , which takes maximum for bob s equal prior probability , is at first glance , port - based superdense coding seems impossible because in the limit of , in quite contrast to in the same limit .however , takes the maximum at ] also holds , and where is the eigenstate with a zero eigenvalue of the operator , the following convenient relations hold : by using for , and by using defined in appendix [ sec : maximum eigenvalue of rho ] , we have |\xi^{(1)}(j , m,\alpha)\rangle \right|^2 \cr & = \frac{na}{d^{2n } } \sum_{jm\alpha}\left| \langle j , m,\alpha|[\frac{3}{2}\openone-\frac{a(n+d^2 - 1)}{2d^{n+1}}\openone]|j , m,\alpha)\rangle \right|^2 \cr & = \frac{na}{d^{n+1}}\left(\frac{3}{2}-\frac{a(n+d^2 - 1)}{2d^{n+1}}\right)^2,\end{aligned}\ ] ] where see eq .( [ eq : gram matrix ] ) for the second equality .this lower bound is maximized when , and hence we obtain eq .( [ eq : fidelity bound ] ) .
port - based teleportation ( pbt ) is a teleportation scheme in which the teleported state appears in one of the receiver 's multiple output ports without any correcting operation on that port . in this paper , we make some remarks on pbt . these include the possibility of recoverable pbt ( a hybrid protocol between pbt and the standard teleportation scheme ) , the possibility of port - based superdense coding ( a dual protocol to pbt ) , and the fidelity upper bound expected from the entanglement monogamy relation in asymmetric universal cloning .
the ` breeding method ' is a well - established and computationally inexpensive procedure for generating perturbations for ensemble integrations .bred vectors ( bvs ) are finite perturbations periodically rescaled to a certain magnitude that have been prominently used in probabilistic weather forecasting with ensembles . the breeding method and variants of itare applied in operative ensemble forecast systems , such as national centers for environmental predictions ( ncep , usa ) , see e.g. .moreover , breeding continues to be a popular tool to study the predictability of a variety of systems such as the baroclinic rotating annulus or the atmosphere of mars .different initial bv perturbations all generally tend to become aligned with the fastest growing modes .if different bvs were globally quasi - orthogonal to each other , one might expect they would automatically provide a good sample of the different dominant growing error directions , without the need for additional computation .a closer inspection reveals that the bv perturbations are often locally rather similar in shape , differing only in sign and amplitude .in fact , a major modification of the bv implementation at ncep has recently been implemented by replacing the bvs given by the ensemble forecast with some ` ensemble transform ' that orthogonalizes the ensemble with respect to the metric defined by the inverse covariance matrix .other metrics can be used and lead to different ensembles of bvs .orthogonalization with respect to a given metric generally enhances the statistical diversity of the ensemble by making the bv perturbations globally more dissimilar . in this paperwe show how the ensemble diversity can be enhanced by using the geometric norm with no further transforms or orthogonalizations needed .we first show that the bvs dynamics and the statistical properties of the ensemble strongly depend on the norm definition used to construct them .so far euclidean - type norms are widely used in applications . however , our results demonstrate that , among a spectrum of studied norms , the geometric norm is the most convenient because it provides a greater statistical diversity of the ensemble , while it enhances the projection of the ensemble as a whole on the most unstable direction . with other norm choices , like the standard euclidean one , a good projection on the leading lyapunov vector ( lv )is always associated with the collapse of all the bvs , i.e. the complete loss of the ensemble diversity .we illustrate our study with numerical integrations of the well - known lorenz-96 model that has been used by various authors as a low order testbed for atmospheric prediction and assimilation studies .this model is defined by the set of variables and evolves according to \nonumber\\ & - & u(x , t)+f,\quad \mbox{with}\quad x=1, ... ,l . \label{lorenz}\end{aligned}\ ] ] with periodic boundary conditions in the discrete spatial variable . hereafter we adopt a system size of and a forcing constant .for these values the system exhibits well developed chaos .a good description of the chaotic dynamics can be achieved by understanding the behavior of initial infinitesimal perturbations , which are governed by the ` tangent linear model ' : \nonumber\\ - u(x-1,t)\left[\delta u(x-2,t)- \delta u(x+1,t)\right ] -\delta u(x , t ) .\label{tangent}\end{aligned}\ ] ] after some transient any infinitesimal perturbation becomes permanently aligned along the most unstable direction. 
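a minimal integrator for eq. ( 1 ) is sketched below; since the numerical values of the system size and forcing constant are not preserved in the extracted text, the values used here ( and the runge - kutta step ) are only commonly used, illustrative choices.

```python
# Lorenz-96 model, eq. (1): du_x/dt = u_{x-1} (u_{x+1} - u_{x-2}) - u_x + F,
# with periodic boundary conditions.  L, F and dt are illustrative values.
import numpy as np

L, F = 60, 8.0

def lorenz96(u):
    return np.roll(u, 1) * (np.roll(u, -1) - np.roll(u, 2)) - u + F

def rk4_step(u, dt):
    k1 = lorenz96(u)
    k2 = lorenz96(u + 0.5 * dt * k1)
    k3 = lorenz96(u + 0.5 * dt * k2)
    k4 = lorenz96(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
u = F + 0.01 * np.random.randn(L)     # small perturbation of the unstable fixed point u = F
for _ in range(5000):                 # discard a transient so the trajectory sits on the attractor
    u = rk4_step(u, dt)
```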
this direction defines , disregarding an arbitrary nonzero constant factor , the leading lv , and hereafter denoted .obtaining the tangent linear ( and adjoint ) models can be however extremely difficult in operative weather models and one has to resort to analyzing finite perturbations , which are evolved with the full nonlinear model .this is for instance the situation at ncep , where ensembles of bvs are used .bvs are finite perturbations obtained after periodic rescaling , say at times ( ) . a control trajectory and a perturbed one , are simultaneously integrated [ via eq . ( [ lorenz ] ) ] and at the scheduled times the difference between themis calculated and rescaled to a given amplitude , obtaining the bv this bv is then used to redefine the perturbed system : with .the perturbed and control states are then evolved in time according to the model equations , eq .( [ lorenz ] ) , until the next scheduled rescaling . at the next scheduled time the breeding cycle , eqs .( [ dif])-([pert ] ) , is repeated .after several breeding cycles , the perturbations generated by this procedure acquire a large growth rate , which makes them suitable for ensemble forecasting .usually a set of bvs is evolved from different initial random perturbations and this constitutes the ensemble .ideally a good ensemble of bvs should span the most unstable directions in phase space well enough to capture the main instabilities .there are three basic ingredients in the definition of the bv : ( i ) the rescaling interval , ( ii ) the perturbation amplitude , and ( iii ) the choice of the norm used in eq .( [ scaling ] ) .the rescaling interval has a negligible influence in the results as long as it remains small say , smaller than the doubling time , which is on the order of time units ( t.u . ) for the lorenz-96 model .we have used t.u . , which corresponds to day in the time scale assumed by .the perturbation amplitude controls the `` finiteness '' of the perturbations ; a sufficiently small makes the perturbation quasi - infinitesimal , and in the limit the bv perfectly aligns with the leading lv of the system .however , very little is known about the effect of the norm choice on the properties of the resulting ensemble and we discuss this issue in detail in the incoming sections .the choice of the norm is probably the more obscure element determining the bvs nature .bvs have often been claimed to be insensitive to the choice of norm .however , this belief is not actually based on any rigorous argument . herewe show that the effect of changing the norm type has a dramatic impact on bvs .we will show that different norms lead to different ensemble properties and it is not a mere change of the ` ruler ' or metrics .there are intrinsic and genuine effects on the statistics of the bvs for each particular norm type .intuitively , for a homogeneous system like the lorenz-96 model , any definition for the norm one wants to use should be homogeneous in the sense that it weights equally all sites . to see why this constraint is relevantlet us consider a particularly illustrative example .think of a norm arising from some scalar product with a very `` unbalanced '' metric matrix , e.g. .this choice would result in very dissimilar bvs depending if the site is more or less unstable at a given time . for a given , at some times the vector dynamics could be infinitesimal - like while at other moments it would be clearly finite . 
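the breeding cycle just described can be sketched on top of the integrator above ( rk4_step and the spun - up state u are reused from the previous snippet ); the rescaling interval, the amplitude and the default euclidean - type norm are illustrative choices, and the norm argument is precisely what the rest of the paper varies.

```python
# one member of an ensemble of bred vectors: evolve control and perturbed runs
# and, at every rescaling time, rescale the difference to amplitude eps with a
# chosen norm (eqs. (3)-(5)).
import numpy as np

def breed(u_control, n_cycles, t_resc=0.2, dt=0.01, eps=1e-3,
          norm=lambda du: np.sqrt(np.mean(du ** 2))):
    du = eps * np.random.randn(u_control.size)    # initial random perturbation
    u_pert = u_control + du
    steps = int(round(t_resc / dt))
    for _ in range(n_cycles):
        for _ in range(steps):                    # free evolution of both trajectories
            u_control = rk4_step(u_control, dt)
            u_pert = rk4_step(u_pert, dt)
        du = u_pert - u_control                   # difference between the runs
        du *= eps / norm(du)                      # rescale to the prescribed amplitude
        u_pert = u_control + du                   # redefine the perturbed run
    return du

bv = breed(u.copy(), n_cycles=100)
```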
for spatially homogeneous systems ,it is reasonable to restrict ourselves to homogeneous norms that produce a bv that is statistically equivalent up to a high degree at different times and we do so in our study . in this workwe compare the performance of -norms , which are defined as ^{1/q}\ ] ] note that for the norm is an energy - like norm , analogous to those used in atmospheric models . in the limit the -norm becomes the supremum norm : moreover , the geometric mean is obtained in the limit= \lim_{q\to 0 } \exp[q^{-1 } \ln(l^{-1 } \sum_{x_=1}^l e^{q\ln|\delta u(x)|})] ] where the brackets denote a temporal average .the results are depicted in fig .[ sigma ] , where we plot the relative fluctuations of the ensemble dimension versus to better compare different -norms .one can readily see that the 0-norm produces the ensemble with the smallest fluctuations for most values .+ ideally ( i.e. disregarding limitations by numerical accuracy ) the bvs become perfectly aligned with the main lv as . to determine quantitatively the degree of alignment with the lv , , we have measured the instantaneous angle between each bv of the ensemble , , and at breeding times as customary in a -dimensional euclidean space in the range $ ] .] : the ensemble and time average angle is shown in fig .[ phiq ] , and demonstrates that the logarithmic bvs ( ) are able to achieve a considerable degree of alignment with the lv on average , while retaining some degree of diversity .one clearly sees that bvs constructed with norms become strongly aligned among themselves while still keep a high degree of transversality with the main lv , as reflected by the high average angle of the ensembles ( ) in fig .[ phiq ] for .in contrast , the ` logarithmic ensemble ' ( ) exhibits a lower angle with the main lv , even if the statistical diversity is high .we claim that the higher diversity and lower exhibited by the ensemble of logarithmic bvs ( ) , as compared with the ensembles with , indicates that this ensemble is spanning a sub - space formed by a narrow hyper - cone around the main lv , while ensembles with tend to lie in a lower dimension subspace that is more transverse to the lv .+ also the growth rate of the ensemble members can be used to compare with that of the main lv , reflecting again the different behavior for different norm choices .the average exponential growth rate of the bred vectors is notice that , for the sake of clarity we are using the same norm type ( ) to measure the exponential growth rate in all cases ( nevertheless due to the long averaging the norm type is irrelevant ) .figure [ leq ] shows the dependence of on the ensemble dimension .one can see that the logarithmic bvs ( ) exhibit the largest amplification rate for a given ensemble dimension , which is in agreement with the results discussed in the preceding subsections showing that the logarithmic ensemble ( ) , among all ensemble choices , exhibited the greatest projection on the lv .conversely , given an exponential growth rate , using the -norm will result in the most diverse ensemble . 
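the q - norms defined above, including the geometric norm recovered in the limit q -> 0 and the supremum norm for q -> infinity, can be coded directly from the definitions; this norm function can be passed to the breeding sketch of the previous section.

```python
# q-norms E_q = [ L^{-1} sum_x |du_x|^q ]^{1/q}, with the geometric norm as the
# q -> 0 limit and the supremum norm as the q -> infinity limit.
import numpy as np

def q_norm(du, q):
    adu = np.abs(du)
    if q == 0:                                    # geometric mean of |du_x|
        return np.exp(np.mean(np.log(adu)))
    if np.isinf(q):                               # supremum norm
        return adu.max()
    return np.mean(adu ** q) ** (1.0 / q)

du = np.random.randn(60)
for q in (0, 1, 2, np.inf):
    print(q, q_norm(du, q))
```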
figure [ leq ] : the average exponential growth rate of the bred vectors as a function of the average ensemble dimension ; the dotted line indicates the value of the lyapunov exponent . ( b ) zoom of panel ( a ) .
we have studied the effect of different norms on the construction of ensembles of bvs . the geometric ( q = 0 ) norm outperforms other norms ( like the euclidean one ) for constructing ensembles of bvs in spatially extended systems . the enhancement of performance ( in terms of root - mean - square error , ensemble spread , and calibration time ) of ensembles of logarithmic ( q = 0 ) bred vectors with respect to standard `` euclidean '' bred vectors was already reported previously . in the present work we give a rationale behind those results . we show that an ensemble of logarithmic bvs ( obtained with the 0 - norm ) exhibits greater diversity ( larger ensemble dimension ) , while its members are more strongly projected on the leading lv and have growth rates that rapidly approach the leading lyapunov exponent . in comparison , ensembles based on bvs constructed with the other norms perform rather poorly : they tend to collapse onto one single direction very abruptly as the bv amplitude is diminished and , even when all the statistical diversity is lost , they remain rather transverse to the leading lv , as demonstrated by the angle with the main lv shown in fig . [ phiq ] . moreover , the geometric norm also leads to the least fluctuating ensemble dimension among all the q - norms considered . in view of these results , two prominent questions remain open . on the one hand , it would be very interesting to evaluate the performance of 0 - norm bvs in real applications . the study of 0 - norm bvs already showed promising , albeit preliminary , results ; clearly , more research is needed in this direction . on the other hand , there is the problem of analyzing the potential advantages of ensemble kalman filters based on 0 - norm bvs . our results show that logarithmic bvs have very nice properties regarding statistical diversity , growth rates , and projection onto the main lv . therefore , a natural question that arises is : to what extent can these features translate into a better performance of ensemble kalman filtering methods ? we believe our results may serve as a basis for future research along these lines .
d.p . acknowledges support by csic under the junta de ampliación de estudios programme ( jae - doc ) . financial support from the ministerio de ciencia e innovación ( spain ) under projects no . fis2009 - 12964 - c05 - 05 and no . cgl2010 - 21869/cli is acknowledged .
wei , m. , z. toth , r. wobus , y. zhu , c. bishop , and x. wang , 2006 : ensemble transform kalman filter - based ensemble perturbations in an operational global prediction system at ncep . _ tellus _ , * 58a * , 128 .
we show that the choice of the norm has a great impact on the construction of ensembles of bred vectors . the geometric norm maximizes ( in comparison with other norms , like the euclidean one ) the statistical diversity of the ensemble while , at the same time , enhancing the growth rate of the bred vectors and their projection on the linearly most unstable direction , i.e. the lyapunov vector . the geometric norm is also optimal in providing the least fluctuating ensemble dimension among the spectrum of q - norms studied . we exemplify our results with numerical integrations of a toy model of the atmosphere ( the lorenz-96 model ) , but our findings are expected to be generic for spatially extended chaotic systems .
most of the difficulty of quantum many - body physics stems from the complexity of its fundamental mathematical objects : many - body wavefunctions and density matrices . in the simplest case , where we have qubits , a wavefunction ( pure state )can be considered as a function mapping .therefore , it is characterized by complex parameters .density matrices ( mixed states ) have even greater mathematical complexity , mapping , i.e. complex parameters .the aim of this work is to describe a pictorial representation of quantum many - body wavefunctions , in which a wavefunction characterizing a chain of qubits maps into an image with pixels .thus , an increase in the number of qubits reflects itself in an increase in the resolution of the image .these images are typically fractal , and sometimes self - similar .extension to higher spin _ qudits _is straightforward , and is also explored . some physical properties of the wavefunction become visually apprehensible : magnetization ( ferro or antiferromagnetic character ) , criticality , entanglement , translation invariance , permutation invariance , etc .visualization of complex data is a common problem in many branches of science and technology .let us review here some of the relevant hallmarks that preceded our work .historically , it can be argued that the single most relevant advance in calculus was the discovery of the relation between algebraic functions and curves in the plane in the xvii century .function visualization provided an insight which guided most of the subsequent development of calculus , not only by helping solve established problems , but also by suggesting new ones . with the advent of the new information technologies, complex data visualization has developed into a full - fledged field of research .the reader is directed to for a recent review of state - of - the - art techniques , and for a historical perspective . as a relevant example , the problem of visualization of dna and protein sequences was addressed in 1990 by jeffrey making use of the so - called _ chaos game representation _ ( cgr ) .dna sequences are long , highly correlated strings of four symbols , .let us label the four corners of a square with them .now , select the central point of the square and proceed as follows .pick the next symbol from the string .find the point midway between the selected point and the corner which corresponds to the symbol .mark that point , and make it your new selected point .if the sequence is genuinely random , the points will cover the square uniformly .otherwise , patterns will emerge , very often with fractal structure .the original purpose of the technique was mere visualization , but it evolved to provide quantitative measurements , such as shannon entropies , which help researchers to characterize dna and protein sequences . in 2000 ,hao and coworkers developed a different representation technique for long dna sequences that also had fractal properties . 
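before moving on, the chaos game representation just described can be sketched in a few lines; the assignment of the four letters to the four corners below is a convention, and a uniformly random sequence indeed fills the square uniformly.

```python
# chaos game representation (CGR): each new symbol moves the current point
# halfway towards the corner of the unit square assigned to that symbol.
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(sequence):
    points = np.empty((len(sequence), 2))
    x, y = 0.5, 0.5                               # start at the centre of the square
    for i, s in enumerate(sequence):
        cx, cy = CORNERS[s]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0     # midpoint towards the chosen corner
        points[i] = (x, y)
    return points

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=10000))
pts = cgr(seq)                                    # a random sequence covers the square uniformly
```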
given a certain value of , they computed the frequency of every subsequence of length within the global sequence , thus obtaining a mathematical object which is similar to a many - body wavefunction , only mapping from .the number of different subsequences of length is .hao and coworkers represented the subsequence probability distribution by dividing a unit square in a recursive way , into small squares , and attaching a color to each of them .the resulting images have fractal appearance , as remarked by the authors , but their quantification is not pursued .their purpose is to identify which types of subsequences are under - represented , and to this end they analyse the corresponding patterns of low frequency . in 2005latorre , unaware of the work of hao et al . , developed independently a mapping between bitmap images and many - body wavefunctions which has a similar philosophy , and applied quantum information techniques in order to develop an image compression algorithm .although the compression rate was not competitive with standard _, the insight provided by the mapping was of high value . a crucial insight for the present work was the idea that latorre s mapping might be inverted , in order to obtain bitmap images out of many - body wavefunctions .focusing on quantum mechanics , the simplest visualization technique is provided by the representation of a qubit as a point on a bloch sphere .early work of ettore majorana proved that a permutation - invariant system of spins-1/2 can be represented as a set of points on the bloch sphere .this majorana representation has proved very useful in characterizations of entanglement . a different approach that can provide visualization schemes of quantum many - body systems was introduced by wootters and coworkers in 2004 .the idea is to set a bidimensional array of operators which fulfill certain properties , and measure their expectation values in the given state .those values , displayed in a 2d lattice , generate a discrete analogue of a wigner function . in this work ,we describe a set of techniques which provide graphical representations of many - body wavefunctions , which share many features with the schemes of latorre and hao and coworkers .the main insight is that the increase in complexity as we add more qubits is mapped into an increase in the resolution of the corresponding image .thus , the thermodynamic limit , when the number of qubits tends to infinity , corresponds to the continuum limit for the images .the scheme is recursive in scales , and this makes the images look fractal in a natural way .in fact , as we will discuss , exact self - similarity of the image implies that the wavefunction is factorizable . in section [ 2dplots ]we describe the basic wavefunction plotting scheme , while section [ examples ] is devoted to providing several examples ( heisenberg , itf , dicke states , product states , etc . ) emphasizing how physical features map into plot features .the procedure is generalized in section [ otherplots ] , and some alternative plotting schemes are described , which allow us to try states of spin-1 systems , such as the aklt state .section [ selfsim ] , on the other hand , deals with the fractal properties of the plots and extracts useful information from them . 
in section [ entanglement ] discusses how to recognize entangled states in a wavefunction plot , along with a simple technique to estimate entanglement by inspection .a different plotting scheme , based upon the frame representation and related to the wootters group approach is succintly described in section [ pauli ] , and a few pictures are provided for the sake of comparison .the article finishes with conclusions and a description of future work .let us consider a couple of qubits .the tensor basis is composed of four states : , , and .consider also a unit square , \times[0,1] ] and any two numbers , and , characterized by their binary expansion : , .the value attached to the point in the plot will be given by the expectation value in of the operator : \label{tracea}\ ] ] where is given by : in other words , we plot the expected value of every combination of tensor products of . in particular , on the line we get uniquely correlations in ; on , those in , and on those corresponding to .such representation is unique for every density matrix , and can be reverted as follows : in order to attain some intuition about the representation , figure [ fig.frame ] illustrates it for one and two qubits . at each cell , we depict the expected value of a `` string '' operator , as shown .( 30,0)(-7,-20 ) ( 47,-20 ) ( 0,0)(40,-40)(0,-20)(40,-20)(20,0)(20,-40)(10,-10 ) ( 10,-30 ) ( 30,-10 ) ( 30,-30 ) ( 30,-42.5)(-7,-20 ) ( 47,-20 ) ( 0,0)(40,-40 ) ( 0,-20)(40,-20)(20,0)(20,-40)(10,0)(10,-40)(30,0)(30,-40)(0,-10)(40,-10)(0,-30)(40,-30)(5,-5)(0,0) ( 10,0) ( 20,0) ( 30,0) ( 5,-15)(0,0) ( 10,0) ( 20,0) ( 30,0) ( 5,-25)(0,0) ( 10,0) ( 20,0) ( 30,0) ( 5,-35)(0,0) ( 10,0) ( 20,0) ( 30,0) figure [ fig.pauliprod ] shows our first example : the frame representation of a product state given by i.e. : a spin pointing half - way between the and axes . the plot shows a striking sierpiski - like structure , which can be fully understood by noticing that , in this state , and are nonzero , while . if , in figure [ fig.frame ] ( bottom ) we cross out all elements with a , the sierpiski - like structure will appear .self - similarity , therefore , is rooted in the plotting scheme , as in the previous case . as an example , we provide in figure [ fig.pauliitf ] images illustrating the itf quantum phase transition : above , is small and only correlations in the -axis are relevant .below , is large and correlations appear only in the -axis .the middle panel shows the critical case .in this work we have described a family of schemes which allow visualization of the information contained in quantum many - body wavefunctions , focusing on systems of many qubits .the schemes are self - similar by design : addition of new qubits results in a higher resolution of the plots .the thermodynamic limit , therefore , corresponds to the continuum limit .the philosophy behind the schemes is to start out with a region and divide it into several congruent subdomains , all of them similar to .this subdivision procedure can be iterated as many times as needed , producing an exponentially large amount of subdomains , each of them characterized by a geometrical index .this index can be now associated to an element of the tensor - basis of the hilbert space , and its corresponding wavefunction amplitude goes , through a certain color code , into that subdomain .the most simple example is with a square which splits into four equal quadrants , but we can also start with a right triangle , or even with a line segment . 
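a minimal sketch of the basic quadrant plot for a chain of qubits is given below : the amplitude of each basis state is placed in a 2^{n/2} x 2^{n/2} image, with each consecutive pair of qubits selecting one quadrant at successively finer scales. the particular pair - to - quadrant convention used here ( odd qubits fixing the row bits, even qubits the column bits ) is our assumption and may differ from the one adopted in the figures.

```python
# recursive "qubism"-style quadrant plot of an n-qubit state (n even): each
# consecutive pair of qubits chooses a quadrant, i.e. one bit of the row index
# and one bit of the column index at that scale.
import numpy as np

def qubism_image(psi, n):
    """psi: complex vector of length 2**n, basis ordered as |s1 s2 ... sn>."""
    side = 2 ** (n // 2)
    img = np.zeros((side, side), dtype=complex)
    for idx, amp in enumerate(psi):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]    # s1 ... sn
        row = int("".join(str(b) for b in bits[0::2]), 2)      # s1 s3 s5 ...
        col = int("".join(str(b) for b in bits[1::2]), 2)      # s2 s4 s6 ...
        img[row, col] = amp
    return img

# example: a GHZ-like state of 8 qubits lights up only two opposite corners
n = 8
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[-1] = 1.0 / np.sqrt(2.0)
img = qubism_image(psi, n)                 # e.g. plot np.abs(img)**2 with a colour map
```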
physical features of the wavefunctions translate naturally into visual features of the plot . for example , within the scheme in section [ 2dplots ] , the spin - liquid character of the ground state of the heisenberg model shows itself in a characteristic pattern of diagonal lines .this pattern is able to distinguish between open and periodic boundary conditions .other features which show up in the plots is magnetization , criticality , invariance under translations or permutation of the qubits , and marshall s sign rule .we have analysed the characteristic features of product states , the ground states of the ising model in a transverse field , the majumdar - ghosh hamiltonian or dicke states .we have also studied spin-1 systems , such as the aklt state .a very relevant physical feature which becomes apparent in the plots is entanglement .factorizability is straightforward to spot : a wavefunction is factorizable if all sub - images at a certain division level are equal , modulo normalization .the schmidt rank of a given left - right partition of the system is related to the dimension of the subspace spanned by all sub - images within the corresponding subdivision of the plot and , so , a crude method to obtain an upper bound is to count the number of different sub - images .the full information about entanglement is contained in the matrix that we have termed as cross - correlation , which contains the overlap between all subimages at a certain division level . in a very different spirit, we have illustrated the frame representations of quantum states of many qubits .this approach is related to wooters group ideas . in it ,the expectation values of a selected set of operators are shown in a 2d array , which is again displayed in a self - similar manner . in this workwe have taken the first steps in the exploration of an alternative strategy in the study of quantum many - body sytems , which can provide support to the corpus of methods in the field .regarding further work , we would like to stress the further exploration of interesting quantum many - body states which we have not done here , for example the ground states of fermionic hamiltonians , the hubbard model , the mott transition or the bec - bcs crossover .understanding the plotting structure of matrix product states of low dimension might also result profitable .moreover , the mathematical properties of the mapping itself are worth studying by themselves . as a final remark, we would like to announce that source code and further images can be found at http://qubism.wikidot.com , a webpage dedicated to qubism - related resources .this work has been supported by the spanish government grants fis2009 - 11654 , fis2009 - 12964-c05 - 01 , fis2008 - 00784 ( toqata ) and quitemad , and by erc grant quagatua .m.l . acknowledges the alexander von humboldt foundation and the hamburg theory award .l . acknowledges d. peralta and s.n .santalla for very useful discussions .
a visualization scheme for quantum many - body wavefunctions is described , which we have termed _ qubism _ . its main property is its _ recursivity _ : increasing the number of qubits reflects itself in an increase in the image resolution . thus , the plots are typically fractal . as examples , we provide images for the ground states of commonly used hamiltonians in condensed matter and cold atom physics , such as heisenberg or itf . many features of the wavefunction , such as magnetization , correlations and criticality , can be visualized as properties of the images . in particular , factorizability can be easily spotted , and a way to estimate the entanglement entropy from the image is provided .
recurrent neural networks are a prominent model for information processing and memory in the brain . .traditionally , these models assume synapses that may change on the time scale of learning , but that can be assumed constant during memory retrieval .however , synapses are reported to exhibit rapid time variations , and it is likely that this finding has important implications for our understanding of the way information is processed in the brain .for instance , hopfield like networks in which synapses undergo rather generic fluctuations have been shown to significantly improve the associative process , e.g. , .in addition , motivated by specific neurobiological observations and their theoretical interpretation , activity dependent synaptic changes which induce _ depression _ of the response have been considered .it was shown that synaptic depression induces , in addition to memories as stable attractors , special sensitivity of the network to changing stimuli as well as rapid switching of the activity among the stored patterns .this behaviour has been observed experimentally to occur during the processing of sensory information . in this paper , we present and study networks that are inspired in the observation of certain , more complex synaptic changes .that is , we assume that repeated presynaptic activation induces at short times not only depression but also _ facilitation _ of the postsynaptic potential .the question , which has not been quite addressed yet , is how a competition between depression and facilitation will affect the network performance .we here conclude that , as for the case of only depression , the system may exhibit up to three different _ phases _ or regimes , namely , one with standard associative memory , a disordered phase in which the network lacks this property , and an oscillatory phase in which activity switches between different memories . depending on the balance between facilitation and depression , novel intriguing behavior results in the oscillatory regime .in particular , as the degree of facilitation increases , both the sensitivity to external stimuli is enhanced and the frequency of the oscillations increases .it then follows that facilitation allows for recovering of information with less error , at least during a short interval of time and can therefore play an important role in short term memory processes .we are concerned in this paper with a network of binary neurons .previous studies have shown that the behaviour of such a simple network dynamics agree qualitatively with the behaviour that is observed in more realistic networks , such as integrate and fire neuron models of pyramidal cells .let us consider binary neurons , endowed of a probabilistic dynamics , namely, \right\ } , \label{s}\]]which is controlled by a _ temperature _parameter , see , for instance , for details .the function denotes a time dependent _ local field _ ,i.e. 
, the total presynaptic current arriving to the postsynaptic neuron this will be determined in the model following the phenomenological description of nonlinear synapses reported in , which was shown to capture well the experimentally observed properties of neocortical connections .accordingly , we assume that is a constant threshold associated to the firing of neuron and and are functions to be determined which describe the effect on the neuron activity of short term synaptic depression and facilitation , respectively .we further assume that the weight of the connection between the ( presynaptic ) neuron and the ( postsynaptic ) neuron are static and _ store _ a set of patterns of the network activity , namely , the familiar _ covariance rule _ : , with are different binary patterns of average activity .the standard hopfield model is recovered for , we next implement a dynamics for and after the prescription in . a description of varying synapses requires ,at least , three local variables , say , and to be associated to the fractions of neurotransmitters in recovered , active , and inactive states , respectively .a simpler picture consists in dealing with only the variable . this simplification , which seems to describe accurately both interpyramidal and pyramidal interneuron synapses , corresponds to the fact that the time in which the postsynaptic current decays is much shorter than the recovery time for synaptic depression , say ( time intervals are in milliseconds hereafter ) . within this approach, one may write that interpretation of this ansatz is as follows .concerning any presynaptic neuron the product stands for the total fraction of neurotransmitters in the recovered state which are activated either by incoming spikes , or by facilitation mechanisms , for simplicity , we are assuming that ] one may solve the model ( [ s])([u ] ) in the thermodynamic limit under the standard mean - field assumption that within this approximation , we may also substitute ( ) by the mean field values ( ) .( notice that one expects , and it will be confirmed below by comparisons with direct simulation results , that the mean field approximation is accurate away from any possible critical point . ) assuming further that patterns are random with mean activity one obtains the set of dynamic equations: \,x_{\pm } ^{\nu } ( t)\,m_{\pm } ^{\nu } ( t),\]]\,m_{\pm } ^{\nu } ( t),\qquad \qquad\]] \right ) \right\ } , \]] , \qquad \qquad \qquad \label{dyn}\]]where this is a coupled map whose analytical treatment is difficult for large but it may be integrated numerically , at least for not too large one may also find the fixed point equations for the coupled dynamics of neurons and synapses ; these are \,\tau _ { \mathrm{rec}}\,m_{\pm } ^{\nu } \right\ } ^{-1 } , \notag \\u_{\pm } ^{\nu } & = & u\,\,\tau _{ \mathrm{fac}}\,\,m_{\pm } ^{\nu } \left ( 1\,\,+\,\,u\,\,\tau _ { \mathrm{fac}}\,\,m_{\pm } ^{\nu } \right ) ^{-1 } , \notag \\2m_{\pm } ^{\nu } & = & 1\pm \frac{2}{n}{\sum_{i}}\tanh\left [ \beta \left ( m^{\nu } \pm \sum_{\mu \neq \nu } \epsilon _ { i}^{\mu } m^{\mu } \right ) \right ] , \notag \\ m^{\nu } & = & \frac{1}{n}\sum_{i}\epsilon _ { i}^{\nu }\tanh \left ( \beta \sum_{\mu } \epsilon _ { i}^{\mu } m^{\mu } \right ) .\label{steady}\end{aligned}\]]the numerical solution of these transcendental equations describes the resulting order as a function of the relevant parameters . 
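the fixed - point equations above, and the local stability analysis that follows, can be probed with generic numerical routines; in the sketch below the map f is only a two - dimensional placeholder, since the exact coefficients of eqs. ( [ dyn ] ) are not recoverable from the extracted text.

```python
# generic tools: iterate a coupled map to a fixed point and test local
# stability through the spectral radius of a finite-difference Jacobian
# (spectral radius < 1 means the fixed point is locally stable).
import numpy as np

def fixed_point(F, y0, n_iter=10000, tol=1e-12):
    y = np.asarray(y0, dtype=float)
    for _ in range(n_iter):
        y_new = F(y)
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y

def spectral_radius(F, y_star, h=1e-6):
    n = y_star.size
    J = np.empty((n, n))
    f0 = F(y_star)
    for j in range(n):
        y = y_star.copy()
        y[j] += h
        J[:, j] = (F(y) - f0) / h
    return np.abs(np.linalg.eigvals(J)).max()

# toy two-dimensional map with a stable fixed point at the origin
F = lambda y: np.array([0.5 * y[0] + 0.2 * y[1], 0.1 * y[0] + 0.8 * y[1]])
y_star = fixed_point(F, [0.3, -0.2])
print(y_star, spectral_radius(F, y_star))   # spectral radius < 1 -> locally stable
```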
determining the stability of these solutions for is a more difficult task ,because it requires to linearize ( [ dyn ] ) and the dimensionality diverges in the thermodynamical limit ( see however ) . in the next section we therefore deal with the case a finite number of stored patterns i.e. , in the thermodynamic limit . in practice , it is sufficient to deal with to illustrate the main results ( therefore , we shall suppress the index hereafter ) .let us define the vectors of order parameters , its stationary value that is given by the solution of eq .[ steady ] , and whose components are the functions on the right hand side of ( [ dyn ] ) .the stability of ( [ dyn ] ) around the steady state ( [ steady ] ) follows from the first derivative matrix this is \,x_{\pm } , ] and after noticing that one may numerically diagonalize and obtain the eigenvalues for a given set of control parameters for the system is stable ( unstable ) close to the fixed point .the maximum of determines the local stability : for the system ( [ dyn ] ) is locally stable , while for there is at least one direction of instability , and the system consequently becomes locally unstable .therefore , varying the control parameters one crosses the line that signals the bifurcation points . the resulting situation is summarized in figure [ fig1 ] for specific values of and .( [ steady ] ) have three solutions , two of which are memory states corresponding to the pattern and anti - pattern and the other a so - called paramagnetic state that has no overlap with the memory pattern .the stability of the two solutions depends on .the region corresponds to the non - retrieval phase , where the paramagnetic solution is stable and the memory solutions are unstable . in this phase, the average network behaviour has no significant overlap with the stored memory pattern .the region corresponds to the memory phase , where the paramagnetic solution is unstable and the memory solutions are stable .the network retrieves one of the stored memory patterns . for ( denoted `` o '' in the figure )none of the solutions is stable .the activity of the network in this regime keeps moving from one to the other fixed points neighborhood ( the pattern and anti - pattern in this simple example ) .this rapid switching behaviour is typical for dynamical synapses and does not occur for static synapses .a similar oscillatory behavior was reported in for the case of only synaptic depression .a main novelty is that the inclusion of facilitation importantly modifies the _ phase diagram _ , as discussed below ( figure [ fig2 ] ) . on the other hand ,the phases for ( f ) and ( p ) correspond , respectively , to a locally stable regime with associative memory ( ) and to a disordered regime without memory ( i.e. , ) .the values and which , as a function of and determine the limits of the oscillatory phase correspond to the onset of condition this condition defines lines in the parameter space ( ) that are illustrated in figure [ fig2 ] .this reveals that ( separation between the f and o regions ) in general decreases with increasing facilitation , which implies a larger oscillatory region and consequently a reduction of the memory phase . 
on the other hand , ( separation between o and p regions ) in general increases with facilitation , thus broadening further the width of the oscillatory phase the behavior of this quantity under different conditions is illustrated in the insets of figure [ fig2 ] .another interesting consequence of facilitation are the changes in the phase diagram as one varies the facilitation parameter which measures the fraction of neurotransmitter that are not activated by the facilitating mechanism**. * * in order to discuss this , we define the ratio between the time scales , and monitor the phase diagram ( ) for varying the result is also in figure [ fig2 ] see the bottom graphs for ( left ) and ( right ) which correspond , respectively , to a situation in which depression and facilitation occur in the same time scale and to a situation in which facilitation is four times faster .the two cases exhibit a similar behavior for large but they are qualitatively different for small in the case of faster facilitation , there is a range of values for which increases , in such a way that one passes from the oscillatory to the memory phase by slightly increasing this means that facilitation tries to drive the network activity to one of the attractors ( ) and , for weak depression ( small ) , the activity will remain there .decreasing further has then the effect of increasing effectively the system temperature , which destabilizes the attractor .this only requires small because the dynamics ( [ u ] ) rapidly decreases the second term in to zero .figure [ fig3 ] shows the variation with both and of the stationary locally stable solution with associative memory , , computed this time both in the mean field approximation and using monte carlo simulation .this monte carlo simulation consists of iterating eqs .( [ s ] ) , ( [ x ] ) and ( [ u ] ) using parallel dynamics .this shows a perfect agreement between our mean field approach above and monte carlo simulations as long as one is far from the transition , a fact which is confirmed below ( in figure [ figure5 ] ) .this is because , near the simulations describe hops between positive and negative which do not compare well with the mean field absolute value the most interesting behavior is perhaps the one revealed by the phase diagram in figure [ figure4 ] .here we depict a case with in order to clearly visualize the effect of facilitation facilitation has practically no effect for any as shown above and in order to compare with the situation of only depression in .a main result here is that , for appropriate values of the working temperature , one may force the system to undergo different types of transitions by simply varying first note , that the line corresponds roughly to the case of static synapses , since is very small . in this limitthe transition between retrieval ( f ) and non - retrieval ( p ) phases is at at low enough there is transition between the non retrieval ( p ) and retrieval phases ( f ) as facilitation is increased .this reveals a positive effect of facilitation on memory at low temperature , and suggests improvement of the network storage capacity which is usually measured at a prediction that we have confirmed in preliminary simulations . at intermediate temperatures , e.g. 
, for the systems shows no memory in the absence of facilitation , but increasing one may describe consecutive transitions to a retrieval phase ( f ) , to a disordered phase ( p ) , and then to an oscillatory phase ( o ) .the latter is associated to a new instability induced by a strong depression effect due to the further increase of facilitation . at higher facilitationmay drive the system directly from complete disorder to an oscillatory regime .in addition to its influence on the onset and width of the oscillatory region , determines the frequency of the oscillations of in order to study this effect , we computed the average time between consecutive minimum and maximum of these oscillations , i.e. , a half period .the result is illustrated in the left graph of figure [ figure5 ] .this shows that the frequency of the oscillations increases with the facilitation time .this means that the access of the network activity to the attractors is faster with increasing facilitation , thoughthe system then remains a shorter time near each attractor due to an stronger depression .on the other hand , we also computed the maximum of during oscillations , namely , this , which is depicted in the right graph of figure [ figure5 ] , also increases with the overall conclusion is that not only the access to the stored information is faster under facilitation but that increasing facilitation will also help to retrieve information with less error . in order to deepen further on some aspects of the system behavior, we present in figures [ figure6 ] and [ figure7 ] a detailed study of specific time series .the middle graph in figure [ figure6 ] corresponds to a simulation of the system evolution for increasing values of as one describes the horizontal line for in figure figure4 .the system thus visits consecutively the different regions ( separated by vertical lines ) as time goes on .that is , the simulation starts with the system in the stable _ paramagnetic _ phase , denoted p1 in the figure , and then successively moves by varying into the stable _ ferromagnetic _ phase f , into another paramagnetic phase , p2 , and , finally , into the oscillatory phase o. 
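the monte carlo protocol mentioned above ( parallel iteration of eqs . ( [ s ] ) , ( [ x ] ) and ( [ u ] ) ) and the measurement of the oscillation half period and of the maximal overlap can be sketched as below . this is an illustrative simplification with one stored pattern , glauber - type parallel updates and generic depression / facilitation updates ; the exact transition probabilities and normalisations used by the authors are not reproduced .

```python
import numpy as np
from scipy.signal import argrelextrema

rng = np.random.default_rng(0)

def monte_carlo(N=1000, T=2000, beta=20.0, U=0.1,
                tau_rec=200.0, tau_fac=50.0):
    # Parallel Monte Carlo dynamics of N binary neurons coupled to one
    # stored pattern through depressing / facilitating synapses (sketch).
    xi = rng.choice([-1, 1], size=N)      # stored pattern
    s = xi.copy()                         # start inside the attractor
    x = np.ones(N)                        # recovered fraction (depression)
    u = np.full(N, U)                     # release probability (facilitation)
    m_t = np.empty(T)
    for t in range(T):
        m = np.mean(xi * s)
        h = xi * (u * x / U) * m          # local fields with dynamic weights
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
        s = np.where(rng.random(N) < p_up, 1, -1)
        fired = (s == 1).astype(float)
        x += (1.0 - x) / tau_rec - u * x * fired
        u += (U - u) / tau_fac + U * (1.0 - u) * fired
        m_t[t] = m
    return m_t

def half_period_and_peak(m_t, order=5):
    # Mean time between consecutive extrema of m(t) (a half period) and the
    # mean |m| at the maxima, in the spirit of the analysis for figure [figure5].
    maxima = argrelextrema(m_t, np.greater, order=order)[0]
    minima = argrelextrema(m_t, np.less, order=order)[0]
    ext = np.sort(np.concatenate([maxima, minima]))
    half_period = np.diff(ext).mean() if len(ext) > 1 else np.nan
    peak = np.abs(m_t[maxima]).mean() if len(maxima) else np.nan
    return half_period, peak

m_t = monte_carlo()
print(half_period_and_peak(m_t))
```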
we interpret that the observed behavior in p2 is due to competition between the facilitation mechanism , which tries to bring the system to the fixed point attractors , and the depression mechanism , which tends to desestabilize the attractors .the result is a sort of intermittent behavior in which oscillations and convergence to a fixed point alternates , in a way which resembles ( but is not ) chaos .the top graph in figure [ figure6 ] , which corresponds to an average over independent runs , illustrates the typical behaviour of the system in these simulations ; the middle run depicts an individual run .further interesting behavior is shown in the bottom graph of figure figure6 .this corresponds to an individual run in the presence of a very small and irregular external stimulus which is represented by the ( green ) line around this consist of an irregular series of positive and negative pulses of intensity and duration of 20 ms .in addition to a great sensibility to weak inputs from the environment , this reveals that increasing facilitation tends to significantly enhance the system response .figure [ figure7 ] shows the power spectra of typical time series such as the ones in figure [ figure6 ] , namely , describing the horizontal line for in figure [ figure4 ] to visit the different regimes .we plot here time series obtained , respectively , for 20 , 50 and 100 and , on top of each of them , the corresponding spectra .this reveals a flat , white noise spectra for the p1 phase and also for the stable fixed point solution in the f regime .however , the case for the intermittent p2 phase depicts a small peak around 65 hz .the peak is much sharper and it occurs at 70 hz in the oscillatory case .we have shown that the dynamical properties of synapses have profound consequences on the behaviour , and the possible functional role , of recurrent neural networks . depending on the relative strength of the depression , the facilitation and the noise in the network , one observes attractor dynamics to one of the stored patterns , non - retrieval where the neurons fire largely at random in a fashion that is uncorrelated to the stored memory patterns , or switching where none of the stored patterns is stable and the network switches rapidly between ( the neighborhoods of ) all of them .these three behaviours were also observed in our previous work where we studied the role of depression .the particular role of facilitation is the following .the transitions between these possible phases are controlled by two facilitation parameters , namely , and analysis of the oscillatory phase reveals that the frequency of the oscillations , as well as the maximum retrieval during oscillations increase when the degree of facilitation increases .that is , facilitation favours in the model a faster access to the stored information with a noticeably smaller error .this suggests that synaptic facilitation might have an important role in short term memory processes. 
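the spectra of figure [ figure7 ] are plain periodograms of the overlap time series ; a minimal sketch is given below , assuming a 1 ms sampling step so that frequencies come out in hz ( the exact correspondence between map iterations and milliseconds is an assumption here ) .

```python
import numpy as np

def power_spectrum(m_t, dt_ms=1.0):
    # One-sided power spectrum; dt_ms is the sampling step in milliseconds.
    m_t = m_t - m_t.mean()                              # remove the DC component
    spec = np.abs(np.fft.rfft(m_t)) ** 2
    freqs = np.fft.rfftfreq(len(m_t), d=dt_ms * 1e-3)   # frequencies in Hz
    return freqs, spec

# toy series with a ~70 Hz component sampled every 1 ms
t = np.arange(4096)
sig = np.sin(2 * np.pi * 0.070 * t) + 0.3 * np.random.default_rng(0).normal(size=4096)
freqs, spec = power_spectrum(sig)
print("peak at ~%.1f Hz" % freqs[np.argmax(spec)])
```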
there is increasing evidence in the literature that similar jumping processes could be at the origin of the ability of animals to adapt and respond rapidly to the continuously changing stimuli in their environment . we therefore believe that the network behaviour resulting from dynamic synapses , as presented in this paper , may have important functional implications . this work was supported by the _ meyc feder _ project fis2005 - 00791 , the _ junta de andalucía _ project fqm 165 and the epsrc - funded colamn project ref . ep / co 10841/1 . we thank jorge f. mejías for useful discussions . g. laurent , m. stopfer , r. w. friedrich , m. i. rabinovich , a. volkovskii , and h. d. i. abarbanel . odor encoding as an active , dynamical process : experiments , computation and theory . 24 : 263 - 297 , 2001 .
we study the effect of competition between short - term synaptic depression and facilitation on the dynamical properties of attractor neural networks , using monte carlo simulation and a mean field analysis . depending on the balance between depression , facilitation and the noise , the network displays different behaviours , including associative _ memory _ and switching of the activity between different attractors . we conclude that synaptic facilitation enhances the attractor instability in a way that ( _ i _ ) intensifies the system adaptability to external stimuli , which is in agreement with experiments , and ( _ ii _ ) favours the retrieval of information with less error during short time intervals .
diabetic retinopathy ( dr ) is one of the severe eye diseases causing blindness . with early - stage detection and treatment the patient can be saved from losing sight . an automatic computer aided diagnosis system will reduce the burden on specialists . also , for monitoring and checking the progress of the disease efficiently , an automatic system will perform much better than a human in terms of evaluation time , since manual comparison and evaluation of images is a time consuming task and the images are subject to various distortions . for accurate analysis of the progress of diabetic retinopathy , detection of exudates is mandatory . exudates are primary clinical symptoms of diabetic retinopathy . two types of exudates appear , namely soft exudates and hard exudates . hard exudates are visible in non - proliferative diabetic retinopathy and soft exudates ( cotton wool spots ) in proliferative diabetic retinopathy . hard exudates represent the accumulation of lipid in or under the retina secondary to vascular leakage and are visible as discrete yellowish deposits in color fundus images . cotton - wool spots are nerve fibre layer infarcts and they appear pale white rather than yellow . exudates are also variable in size and shape . a typical pathological retinal image is depicted in fig . 1 to show features like the optic disc , the macula , the blood vessels and exudates . for identifying the stage of dr , classification of soft and hard exudates is of foremost importance , as is distinguishing them from other retinal pathological features like drusen , haemorrhages , microaneurysms etc . fundus images are prone to artifacts related to defocus , motion blur , fingerprints etc . these artifacts are easily misinterpreted as pathological features because of their similarity with exudates and drusen . one important defect of fundus images is luminosity and contrast variation , which is improperly addressed in many exudate detection methods . several methods have been presented for detection of exudates in colour fundus photographs using image processing and machine learning algorithms . sanchez et al . use a mixture model to separate exudates from the background . an edge detection based method was used to remove other outliers . giancardo et al . use kirsch s edge method to assign a score to each exudate candidate in a pre - processed image . they have used background estimation and image normalization for pre - processing . for classification and detection of drusen , exudates and cotton wool spots , a pixel - wise classification algorithm is presented by niemeijer et al . zhang et al . use mathematical morphological and contextual features for candidate extraction , followed by a random forest based classification method for exudate detection . a fuzzy c - means clustering based method was used for segmentation of features and a neural network classifier for exudate detection by osareh et al . rocha et al . have addressed the problem of detecting bright and red lesions by using a single novel algorithm ; they made use of surf features with a machine learning method , but failed to properly address the problems of luminance variation and artifacts .
in this work , we propose a novel exudate detection method using a gaussian scale space based interest map ( gimap ) and mathematical morphology . this approach is robust to artefacts and illuminance variation . secondly , a disease severity prediction method is developed by using information on the exudate location with respect to the macula region and the optic disc . in addition , we propose a classification system using svm for hard and soft exudates . distinguishing exudates as hard or soft is important for severity prediction and also for identifying the type of diabetic retinopathy , whether it is non - proliferative or proliferative . section 2 presents our method . the experimental setup and the results are discussed in section 3 and finally conclusions are drawn in section 4 . exudates are the bright lesions found in retinal images , caused by diabetic retinopathy , the most common eye disorder in patients having diabetes . it is also a main cause of blindness . for detection of the optic disc and fovea , the method described by niemeijer et al . was used . the steps involved in this method are shown in fig . 2 . for correct detection and classification of hard and soft exudates it is important to reduce noise while preserving edges . in this method anisotropic diffusion filtering was used for reducing noise in the images while preserving edges . anisotropic diffusion filters have proved to be successful in edge preserving smoothing and denoising of medical images . hard and soft exudates possess discriminative edge structures , and preserving these edges while reducing noise is also an important step towards classification . the filtering step is as follows , where i is the input image , ( r , g , b ) are its color channels , k is the diffusion constant and is the noise standard deviation over which the algorithm will be iterated to find a solution . the diffusion function is a monotonically decreasing function of the image gradient magnitude . a gaussian scale space based interest map can capture local image structure and scale invariant image features . since exudates vary in size , different exudates will respond to different scales of the gaussian . to identify all exudates , an interest map using the gaussian scale space was constructed . the first step in the construction of the gimap is the computation of 1st derivative of gaussian filters at several scales , smoothing the derivative using a gaussian filter and taking absolute values of the derivatives . in addition , the laplacian of gaussian for each scale was computed . because of the colour difference between hard and soft exudates , they respond differently in the colour channels . taking the maximum response over all the colour channels , the final interest map for the scale is constructed . this process is repeated over different scales . for each scale we have two filters , the 1st derivative of gaussian and the laplacian of gaussian , and we take the maximum of the absolute values of both filter outputs in the interest map . since the features of interest may appear at variable scales , the interest maps of the individual scales are combined using a maximum operation to form a decision making interest map . the scales used in this work are $ ( \sqrt{2 } , k\sqrt{2 } , \ldots , k^{n}\sqrt{2 } ) $ . if is its three color channels at a scale using filter , then the interest map for this scale is obtained by selecting the maximum response over the color channels as follows , and the decision making interest map is obtained by taking the maximum over filters and then scales as follows .
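a minimal python / scipy sketch of the gimap construction described above is given below . the way the two derivative responses are combined , the scale progression and the value of k are assumptions made for illustration ; the exact filter normalisations of the method are not specified here .

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def interest_map(img_rgb, n_scales=10, k=1.3):
    # Gaussian scale-space interest map: per scale, take absolute
    # first-derivative-of-Gaussian and Laplacian-of-Gaussian responses,
    # maximum over the three colour channels and both filters, then the
    # maximum over scales.  The value of k and the gradient combination
    # are illustrative assumptions.
    img = img_rgb.astype(float)
    scales = [np.sqrt(2.0) * k ** i for i in range(n_scales)]
    gimap = np.zeros(img.shape[:2])
    for s in scales:
        per_scale = np.zeros(img.shape[:2])
        for c in range(3):
            ch = img[:, :, c]
            gx = gaussian_filter(ch, s, order=(0, 1))    # d/dx of Gaussian
            gy = gaussian_filter(ch, s, order=(1, 0))    # d/dy of Gaussian
            log = gaussian_laplace(ch, s)                # Laplacian of Gaussian
            resp = np.maximum(np.abs(gx) + np.abs(gy), np.abs(log))
            per_scale = np.maximum(per_scale, resp)      # max over channels / filters
        gimap = np.maximum(gimap, per_scale)             # max over scales
    return gimap
```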
the decision map obtained from the above procedure needs to be improved by reducing the outliers present . the first step is to enhance the features by using grayscale morphological operations . here a closing operation with disk structuring elements of size 2 and 3 was used . if and are obtained after the closing operation , then the resulting enhanced image is obtained by the following operation , where are the pixel positions in both images . in the second step the interest map is converted to a binary map . for binarization of the interest map , sauvola s local adaptive thresholding technique is used . local thresholding is efficient in this particular situation because the pixel values of hard and soft exudates vary significantly , and to get both of them in the final map we need to binarize using local windows . let be the interest map image , take a window of size , and let and be the mean and standard deviation of the window centered at ; then the threshold is defined as follows , where is the maximum of the standard deviation over all windows . and are two constant functions of distance , where is an exudate pixel , and denote a circle and its corresponding area . for analysing the accuracy of our method on different sets of images , we have used the publicly available diaretdb1 and e - ophthaex datasets . images of various difficulty from these databases have been chosen for testing the accuracy of our algorithm ; these datasets include images with pathological features such as haemorrhages , exudates and microaneurysms . the datasets are provided by experts in ophthalmology with proper pixel - wise annotation of the feature locations in the images . these three datasets include 400 retinal images with a variety of pathological symptoms . for the implementation , the matlab platform on a windows 8.1 machine with an intel i7 processor was used . for the detection of exudates , the gaussian scale space was constructed using 10 different scales . at first each image is resized by a factor of using a cubic interpolation method to reduce the computational time . for each scale the first derivative of gaussian and the laplacian of gaussian are computed for analysing the structure of the exudates present in the retinal images . by using those values a decision making interest map is formed . different types of outliers were removed using morphological connected component analysis and an opening operation . for the accuracy analysis of exudate detection we compute true positives ( tp ) , the number of exudate pixels correctly detected ; false positives ( fp ) , the number of non - exudate pixels wrongly detected as exudate pixels ; false negatives ( fn ) , the number of exudate pixels that were not detected ; and true negatives ( tn ) , the number of non - exudate pixels correctly identified as non - exudate pixels . sensitivity and specificity at the pixel level are also computed . thus the global sensitivity se , the global specificity sp and the accuracy ac for each image are defined as follows . a detailed result of the accuracy obtained is given in table 1 ( result of exudate detection ) . a novel method for computer aided diagnosis of retinal images for exudate detection and analysis is proposed . we obtained considerable accuracy over two different datasets comprised of several varieties of images , specifically with illuminance changes , other pathological features etc .
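for concreteness , the sauvola local thresholding and the pixel - level evaluation measures described in this section can be sketched as follows . the window size , the value of k and the use of the maximum local standard deviation as the dynamic range follow the description in the text , but should be read as a plausible interpretation rather than the authors exact settings .

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(I, window=15, k=0.2):
    # Sauvola threshold T = m(i,j) * (1 + k*(s(i,j)/R - 1)), with m, s the
    # local mean and standard deviation and R the maximum of s over the image.
    I = I.astype(float)
    mean = uniform_filter(I, window)
    sq_mean = uniform_filter(I ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    R = std.max() if std.max() > 0 else 1.0
    return I > mean * (1.0 + k * (std / R - 1.0))       # binary candidate map

def pixel_metrics(pred, truth):
    # Pixel-level sensitivity, specificity and accuracy (se, sp, ac).
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth); fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth); tn = np.sum(~pred & ~truth)
    se = tp / (tp + fn) if (tp + fn) else 0.0
    sp = tn / (tn + fp) if (tn + fp) else 0.0
    ac = (tp + tn) / pred.size
    return se, sp, ac
```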
also our machine learning based classifier svm works very well for exudate classification with designed features .this system can be used for automated processing of pathological images related to diabetic retinopathy , also will be very effective for mass screening . in near future , we will incorporate microaneurysms and haemorrhages detection to the system to enhance its credibility to evaluate the degree of diabetic retinopathy snchez , c. , garca , m. , mayo , a. , lpez , m. , hornero , r. , 2009 . retinal image analysis based on mixture models to detect hard exudates .image anal .13 ( 4 ) , 650658 .giancardo , l. , meriaudeau , f. , karnowski , t. , li , y. , garg , s. , tobin , k. , chaum , e. , 2012 .exudate - based diabetic macular edema detection in fundus images using publicly available datasetsimage anal . 16 ( 1 ) , 216226 .niemeijer , m. , van ginneken , b. , russel , s. , suttorp - schulten , m. , abrmoff , m.d . , `` automated detection and differentiation of drusen , exudates , and cottonwool spots in digital color fundus photographs for diabetic retinopathy diagnosis '' , investigative ophthalmology and visual science 48 , 2007,22602267 .zhang et al .`` exudate detection in color retinal images for mass screening of diabetic retinopathy '' , med .image anal ., 18 ( 7 ) ( 2014 ) , pp .10261043 osareh , a. shadgar , b. ; markham , r. a computational - intelligence - based approach for detection of exudates in diabetic retinopathy images , information technology in biomedicine , ieee transactions on,2009 , 535 - 545 .meindert niemeijer , michael d. abrmoff , bram van ginneken , `` fast detection of the optic disc and fovea in color fundus photographs '' medical image analysis , 2009 .p perona , j malik , `` scale - space and edge detection using anisotropic diffusion '' , pattern analysis and machine intelligence,1990 t. lindeberg , `` scale space theory in computer vision , '' in kluwer , 1994 .rachid deriche .recursively implementating the gaussian and its derivatives .[ research report ] rr-1893 , 1993 , pp.24 .< inria-00074778 > j. sauvola and m. pietikainen , `` adaptive document image binarization , '' pattern recognition 33(2 ) , pp .225236 , 2000 kauppi , t. , kalesnykiene , v. , kamarainen , j .- k . , lensu , l. , sorri , i. , raninen a. , voutilainen r. , uusitalo , h. , klviinen , h. , pietil , j. , diaretdb1 diabetic retinopathy database and evaluation protocol , in proc of the 11th conf . on medical image understanding and analysis ( aberystwyth , wales , 2007 ) http://www.adcis.net/en/downloadthirdparty/messidor.html chang , chih - chung and lin , chih - jen , libsvm : a library for support vector machines , acm transactions on intelligent systems and technology,2011 .http://www.icoph.org/taskforce-documents/diabetic-retinopathy-guidelines.html rocha , a. , carvalho , t. , jelinek , h. f. , goldenstein , s. , wainer , j. , `` points of interest and visual dictionaries for automatic retinal lesion detection '' , biomedical engineering , ieee transactions on , 59(8 ) , 2244 - 2253 , 2012 . c. sinthanayothin, j. f. boyce , t. h. williamson , h. k. cook , e. mensah , s. lal , d. usher , `` automated detection of diabetic retinopathy on digital fundus images '' , diabet .19(2 ) ( 2002 ) 105 .t. walter , j. c. klein , p. massin , a. erginay , `` a contribution of image processing to the diagnosis of diabetic retinopathy detection of exudates in color fundus images of the human retina '' , ieee trans .imaging 21(10 ) ( 2002 ) 1236 .
in the context of a computer aided diagnosis system for diabetic retinopathy , we present a novel method for the detection of exudates and their classification for disease severity prediction . the method is based on a gaussian scale space based interest map and mathematical morphology . it makes use of a support vector machine for classification , and of location information of the optic disc and the macula region for severity prediction . it can efficiently handle luminance variation and is suitable for exudates of varied sizes . the method has been tested on the publicly available diaretdb1v2 and e - ophthaex databases . for exudate detection the proposed method achieved a sensitivity of 96.54% and a prediction of 98.35% on the diaretdb1v2 database . exudate , diabetic retinopathy , image processing
these lecture notes were written for the american mathematical society ( ams ) short course on quantum computation held 17 - 18 january 2000 in conjunction with the annual meeting of the ams in washington , dc in january 2000 .the objective of this lecture is to discuss quantum entanglement from the perspective of the theory of lie groups .more specifically , the ultimate objective of this paper is to quantify quantum entanglement in terms of lie group invariants , and to make this material accessible to a larger audience than is currently the case .these notes depend extensively on the material presented in lecture i .it is assumed that the reader is familiar with the material on density operators and quantum entanglement given in the ams short course lecture i , i.e. , with sections 5 and 7 of . of necessity , the scope of this paperis eventually restricted to the study of qubit quantum systems , and to a specific problem called the _ restricted fundamental problem in quantum entanglement _ ( _ rfpqe _ ) .references to the broader scope of quantum entanglement are given toward the end of the paper .+ figure 1 . quantum entanglement lab ? ? ? at first sight , a physics research lab dedicated to the pursuit of quantum entanglement might look something like the drawing found in figure 1 , i.e. , like an indecipherable , incoherent jumble of wires , fiber optic cable , lasers , bean splitters , lenses . perhaps some large magnets for nmr equipment , or some supercooling equipment for rf squids are tossed in for good measure . whatever .. it is indeed a most impressive collection of adult `` toys . ''however , to a mathematician , such a lab appears very much like a well orchestrated collection of intriguing mathematical `` toys , '' just beckoning with new tantalizing mathematical challenges . 
in the hope of piquing your curiosity to read on , we give the following brief preview of what is to come : the rfpqe reduces to the mathematical problem of determining the orbits of the big adjoint action of the group of local unitary transformations on the lie algebra of the unitary group , as expressed by the following formula : where `` '' denotes the big adjoint operator , and where the remaining symbols are defined in the table below .{|c||c|}\hline & local unitary group\\\hline & lie algebra of \\\hline & unitary group\\\hline & lie algebra of \\\hline \end{tabular}\ ] ] we attack this problem by lifting the above big adjoint action to the * induced infinitesimal action * which , for a qubit density operator , is explicitly given by {c}\omega\left ( v\right ) \left ( i\rho\right ) = { \displaystyle\sum\limits_{q_{1},q_{2}=0}^{3 } } \left ( a^{(1)}\cdot x_{\ast q_{1}q_{2}}\times\frac{\partial}{\partial x_{\ast q_{1}q_{2}}}+a^{(2)}\cdot x_{q_{1}\ast q_{2}}\times\frac{\partial}{\partial x_{q_{1}\ast q_{2}}}+a^{(3)}\cdot x_{q_{1}\ast q_{2}}\times\frac{\partial } { \partial x_{q_{1}q_{2}\ast}}\right ) \end{array } \mathbb{q}\mathbb{e} \text{\textbf{two entangled qubits}} \mathcal{q}_{ab} \text{consisting of qubits} \mathcal{q}_{a}\text { and } \mathcal{q}_{b} \begin{array } [ c]{c}\text{{\scriptsize u.s .certified}}\\ \text{\textbf{contents}}\\ \quad\fbox{ } ^{(\ast)}\end{array } \begin{array } [ c]{c}\text{\textbf{hilb.}}\\ \text{\textbf{sp.}}\end{array } \begin{array } [ c]{c}\text{\textbf{unitary}}\\ \text{\textbf{transf.}}\end{array } \begin{array } [ c]{c}\text{\textbf{state}}\\ \text{\textbf{space}}\end{array } \mathcal{q}_{ab}\mathcal{h}_{ab}\rho_{ab}=\overset{}{\underset{}{\left ( \begin{array } [ c]{rrrr}\frac{1}{2 } & 0 & 0 & -\frac{1}{2}\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ -\frac{1}{2 } & 0 & 0 & \frac{1}{2}\end{array } \right ) } } \mathbb{u}(2^{2})_{ab}u(2^{2})_{ab} \mathcal{q}_{a}\mathcal{h}_{a}\rho_{a}=\overset{}{\underset{}{\left ( \begin{array } [ c]{cc}\frac{1}{2 } & 0\\ 0 & \frac{1}{2}\end{array } \right ) } } \mathbb{u}(2)_{a}u(2)_{a} \mathcal{q}_{b}\mathcal{h}_{b}\rho_{b}=\overset{}{\underset{}{\left ( \begin{array } [ c]{cc}\frac{1}{2 } & 0\\ 0 & \frac{1}{2}\end{array } \right ) } } \mathbb{u}(2)_{b}u(2)_{b} } \ ] ] we define the * standard local moves * as : the * standard local moves * are : * local unitary transformations of the form for example , for bipartite quantum systems , unitary transformations of the form , , * measurement of local observables of the form for example , for bipartite quantum systems , measurement of local observables of the form , , we also define the * extended local moves * as the * extended local moves * are : * extended local unitary transformations of the form where , , , , are distinct non - overhapping hilbert spaces * measurement of extended local observables of the form where , , , , are distinct non - overlapping hilbert spaces moves based on unitary transformation are called * reversible*. those based on measurement are called * irreversible . * the horodeckis , , , jonathan , , linden , , nielsen , , , , plenio , , popescu , , , have made some progress in understanding the fpqe in terms of all four of the above local moves . for the rest of the talk , we restrict our discussion to reversible standard local moves .before continuing , it should be mentioned that physics and mathematics approach quantum mechanics from two slightly different but equivalent viewpoints . 
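as a small numerical illustration of a reversible local move , the sketch below applies the big adjoint action ad_u ( iρ ) = u ( iρ ) u^{-1} , with u = u_a ⊗ u_b a product of two random su(2) factors , to the epr - type density matrix listed in the contents table above , and checks that the result is again a skew - hermitian operator whose associated ρ has unit trace . the particular random generators are of course arbitrary choices .

```python
import numpy as np
from scipy.linalg import expm

# density matrix of the EPR-type state from the table above
rho_ab = np.array([[ 0.5, 0, 0, -0.5],
                   [ 0.0, 0, 0,  0.0],
                   [ 0.0, 0, 0,  0.0],
                   [-0.5, 0, 0,  0.5]])

def random_su2(rng):
    # Random element of SU(2): exponential of a random traceless
    # skew-Hermitian matrix (an arbitrary choice of distribution).
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], complex)
    a = rng.normal(size=3)
    return expm(-0.5j * (a[0] * sx + a[1] * sy + a[2] * sz))

rng = np.random.default_rng(1)
U = np.kron(random_su2(rng), random_su2(rng))   # a local unitary move U_A (x) U_B
ad = U @ (1j * rho_ab) @ np.conj(U).T           # big adjoint Ad_U(i rho)
print(np.allclose(ad, -np.conj(ad).T))          # still skew-Hermitian: True
print(np.allclose(np.trace(-1j * ad), 1.0))     # transformed rho has trace 1: True
```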
to avoid possible confusion , we describe below the minor terminology differences that arise from these two slightly different perspectives .physics describes the state of a quantum system in terms of a traceless hermitian operator , called the density operator .observables are hermitian operators .quantum states change via unitary transformations according to the rubric on the other hand , mathematics describes the state of a quantum system in terms of a skew hermitian operator , also called the density operator .observables are skew hermitian operators .quantum dynamics are defined via the rule where is a unitary operator lying in the lie group of unitary transformations , and where denotes the big adjoint operator .please note that both density operators and the observables lie in the lie algebra of the unitary group .these minor , but nonetheless annoying differences are summarized in the table below .{||c||}\hline\hline\begin{tabular } [ c]{l||l}\textbf{physics}\hspace{1.5 in } & \hspace{1.5in}\textbf{math}\end{tabular } \\\hline\hline \fbox{\begin{tabular } [ c]{c}hilbert space \\ \\ \multicolumn{1}{l}{\begin{tabular } [ c]{c}unitary group\\ lie group \end{tabular }}\end{tabular } } \\\hline\hline\begin{tabular } [ c]{||l||l||}\hline\hline \quad\quad\quad\begin{tabular } [ c]{l}n\times na^{\dagger}=\overline{a}^{t}=a ] .[ c]llobservables : & + density ops : & } {c} u\in\mathbb{u}(n)\left| \psi\right\rangle \longmapsto u\left| \psi\right\rangle \rho\longmapsto u\rho u^{\dagger} u\in\mathbb{u}(n) \left| \psi\right\rangle \longmapsto u\left| \psi\right\rangle i\rho\longmapsto ad_{u}\left ( i\rho\right ) ad_{u}(i\rho)=u(i\rho)u^{-1} ] we will use the two different terminologies and conventions interchangeably .which terminology we are using should be clear from context . from [ physical density operator] we know that an element of the lie algebra is a physical density operator if and only if is positive semi - definite and of trace 1 .thus , the set of physical density operators is a convex subset of the lie algebra .for the sake of clarity of exposition and for the purpose of avoiding minor technicalities , from on we consider only qubit quantum systems , i.e. , quantum systems consisting of qubits .the reader , if he / she so wishes , should be able to easily rephrase the results of this paper to more general quantum systems .moreover , from this point on , we limit the scope of this talk to the study of quantum entanglement from the perspective of the standard local unitary transformations , i.e. , from the perspective of standard reversible local moves as defined in section 5 of this paper . to emphasize this point, we define the group of local unitary transformations as follows : the * group of local unitary transformations * is the subgroup of defined by where denotes the special unitary group .henceforth , the phrase * `` local move '' * will mean an element of the group of * local unitary transformations * .. * * convention . 
* * from this point on , * * {c}\text{\textbf{local moves } } = \mathbb{l}(2^{n } ) \end{array } } \ ] ]we are now in a position to state clearly the main objectives of this paper .namely , in regard to the * restricted fundamental problem of quantum entanglement ( rfpqe ) * , our objectives are twofold : * given a density operator , devise a means of determining the dimension of its entanglement class _ { e} ] to the manifold _ { e} ] .movement in all directions not in will force us to immediately leave _ { e} ] .objective 1 is achieved as follows : we begin by noting that is the same as the tangent space to at the point , and that is the same as the tangent space _ { e}\right ) ] at .hence , the dimension of _ { e} ] reduces to that of computing the dimension of the vector space . we will give examples of this dimension calculation in the next two sections .we next use the infinitesimal action to achieve : * given two states and , devise a means of determining whether they belong to the same or different entanglement class . as follows :we begin by noting that can be identified with the lie algebra of derivations on .next we recall that consists of all directions in that we can move without leaving an entanglement class that we are in .if is an entanglement invariant , then will not change if we move in any direction within . as a resultwe have the following theorem : let be a vector space basis of the ( real ) lie algebra .then for all , where is interpreted as a differential operator in .in other words , the task of finding entanglement invariants reduces to that of solving a system of linear partial differential equations. we will give examples of this calculation in the examples found in the next two sections of this paper .we now make use of the methods developed in the previous section to study the entanglement classes associated with qubits .this is a trivial but nonetheless instructive case .as we shall see , there is no entanglement in this case .but there are many entanglement classes ! for this example , the local unitary group is the same as the special unitary group .the corresponding lie algebra is the same as the lie algebra .each density operator lies in the lie algebra . as an immediate consequence of proposition [ ad ] of section [ section act def ] , the infinitesimal action is simply the small adjoint action , i.e. , for all .we can now use the bases . ]{c}\left\ { \xi_{1}=-\frac{1}{2}\sigma_{1},\ \xi_{2}=-\frac{1}{2}\sigma_{2},\ \xi_{3}=-\frac{1}{2}\sigma_{3}\right\ } \\\text { \ and \ } \\ \left\ { \xi_{0}=-\frac{1}{2}\sigma_{0},\ \xi_{1}=-\frac{1}{2}\sigma_{1},\ \xi_{2}=-\frac{1}{2}\sigma_{2},\ \xi_{3}=-\frac{1}{2}\sigma_{3}\right\ } \end{array}\ ] ] of the respective lie algebras and to find a more useful expression for .each element can be uniquely expressed in the form where and .thus , where moreover , each element can be uniquely written in terms of the basis of as where and . in termsof the basis of , {ll}\left ( \begin{array } [ c]{cc}0 & 0\\ 0 & l_{j}\end{array } \right ) = 0\oplus l_{j } & \text{if } j=1,2,3\\ & \\ \left ( \begin{array } [ c]{cc}0 & 0\\ 0 & 0 \end{array } \right ) = 0 & \text{if } j=0 \end{array } \right . 
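objective 1 can also be checked by brute force : the dimension of an entanglement class equals the rank of the set of commutators [ ξ , iρ ] as ξ runs over a basis of the local unitary lie algebra ( one set of pauli generators per qubit factor ) . the sketch below implements this for an n - qubit density matrix and reproduces , for instance , the one - qubit dimensions quoted in the next section . stacking real and imaginary parts is only a convenient way of computing the rank .

```python
import numpy as np

def orbit_dimension(rho, n_qubits):
    # Dimension of the local-unitary class of rho: rank of the span of the
    # commutators [xi, i*rho] over a basis xi of su(2) x ... x su(2).
    pauli = [np.array([[0, 1], [1, 0]], complex),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]], complex)]
    I2 = np.eye(2)
    rows = []
    for k in range(n_qubits):
        for s in pauli:
            ops = [I2] * n_qubits
            ops[k] = -0.5j * s                   # generator acting on qubit k
            xi = ops[0]
            for op in ops[1:]:
                xi = np.kron(xi, op)
            comm = xi @ (1j * rho) - (1j * rho) @ xi
            rows.append(np.concatenate([comm.real.ravel(), comm.imag.ravel()]))
    return np.linalg.matrix_rank(np.array(rows))

# one qubit: a pure state lies on a 2-dimensional class, the maximally
# mixed state on a 0-dimensional one (compare the next section)
print(orbit_dimension(np.diag([1.0, 0.0]), 1))   # 2
print(orbit_dimension(np.eye(2) / 2.0, 1))       # 0
```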
\text { , } \ ] ] where {rrr}0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{array } \right ) \text { , } l_{2}=\left ( \begin{array } [ c]{rrr}0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0 \end{array } \right ) \text { , } l_{3}=\left ( \begin{array } [ c]{rrr}0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array } \right)\ ] ] is the basis{cclllll}ad_{\xi_{j}}\left ( \xi_{k}\right ) & = & ad_{-i\sigma_{j}/2}\left ( -i\sigma_{k}/2\right ) & = & \left [ -i\sigma_{j}/2,-i\sigma_{k}/2\right ] & = & -\frac{1}{4}\left [ \sigma_{j},\sigma_{k}\right ] \\ & = & -\frac{1}{2}i\epsilon_{jkp}\sigma_{p } & = & \epsilon_{jkp}\xi_{p } & & \end{array}\ ] ] where .] of the lie algebra of the special orthogonal group given in appendix b. let denote the basis of induced by the chart {ccc}\mathbf{u}\left ( 2\right ) & \overset{\pi}{\longrightarrow } & \mathbb{r}^{4}\\ & & \\ i\rho=\sum_{j=0}^{3}x_{j}\xi_{j } & \longmapsto & \left ( x_{0},x_{1},x_{2},x_{3}\right ) = \left ( x_{0},x\right ) \end{array } \text { .}\ ] ] in other words , for each , denotes the vector field on defined at each point as the tangent vector to the curve at .then , {ccl}\omega\left ( v\right ) \left ( i\rho\right ) & = & \left ( x_{0},x\right ) \cdot\left ( 0\oplus a\cdot l\right ) \cdot\left ( \begin{array } [ c]{c}\partial/\partial x_{0}\\ \partial/\partial x_{1}\\ \partial/\partial x_{2}\\ \partial/\partial x_{3}\end{array } \right ) \\ & & \\ & = & x\cdot\left ( a\cdot l\right ) \cdot\bigtriangledown\\ & & \\ & = & a\cdot x\times\bigtriangledown\text { , } \end{array } \text { , } \ ] ] where ` ' denotes the vector cross product , and where {c}\partial/\partial x_{1}\\ \partial/\partial x_{2}\\ \partial/\partial x_{3}\end{array } \right ) \text { .}\ ] ] we can now achieve objective 1 .* objective 1 .* _ given an arbitrary density operator _ _ in _ , _ find the dimension of an arbitrary entanglement class _ _ { e} ] of the entanglement class _ { e} ] of _ { e} ] is the same as the dimension of its tangent space _ { e}\right ) ] is given by : _ { e}=\left\ { \begin{array } [ c]{ccc}2 & \text{if } & \left| x\right| \neq0\\ & & \\ 0 & \text{if } & \left| x\right| \neq0 \end{array } \right.\ ] ] we are now ready to achieve objective 2 : * objective 2 . *_ given two states _ _ and _ , _ devise a means of determining whether they belong to the same or different entanglement class_. we achieve this objective by determining a complete set of entanglement invariants qubits , the complete set of entanglement invariants consists of only one invariant . ] for one qubit quantum systems , i.e. , by determining a set of entanglement invariants such that .we begin by recalling that the lie algebra of vector fields on can be identified with the lie algebra of all derivations on the smooth real valued functions on .thus , the elements of can be viewed as directional derivatives , directional derivatives in those directions in which we can move and still remain in the same entanglement class . 
from theorem [ theorem ad ], it immediately follows that a real valued function is an entanglement invariant if and only it is a solution of the system of partial differential equations ( pdes ) : {c}\omega\left ( \xi_{1}\right ) f=0\\ \\\omega\left ( \xi_{2}\right ) f=0\\ \\ \omega\left ( \xi_{3}\right ) f=0 \end{array } \right.\ ] ] since from above we know that , we can write the above system of pdes more explicitly as : {c}x_{3}\frac{\partial f}{\partial x_{2}}-x_{2}\frac{\partial f}{\partial x_{3}}=0\\ \\ x_{1}\frac{\partial f}{\partial x_{3}}-x_{3}\frac{\partial f}{\partial x_{1}}=0\\ \\x_{2}\frac{\partial f}{\partial x_{1}}-x_{1}\frac{\partial f}{\partial x_{2}}=0 \end{array } \right .\text { , } \ ] ] where , as before , . from theorem [ theorem ad ] , we know that a complete set of quantum entanglement invariants for one qubit systems is the same as a complete functionally independent set of solutions of the above system of pdes . thus , solving the above system of pdes by standard methods found in the theory of differential equations , we find that is a complete set of entanglement invariants .a functionally equivalent complete set of entanglement invariants is which is also a basic set of entanglement invariants .fortunately , in this simplest case , a complete set of entanglement invariants and a basic set of entanglement invariants are one and the same. this will not be the case for quantum systems of more than one qubit . as a result of the previous calculation, we have a complete set of entanglement invariants , namely we have completely classified all the entanglement classes for 1 qubit quantum systems . for in this case , _ { e}=\left [ i\rho^{\prime}\right ] \longleftrightarrow f(i\rho)=f(i\rho^{\prime})\text { .}\ ] ] as a consequence of this result , the induced foliation of the space of all physical density operators lying in in the lie algebra can be visualized in terms of the 3-ball or radius 1 in , called the * bloch * `` * * sphere**. '' recall from remark on page that is a convex subset of the of the lie algebra . in this special case of qubit ,it is a straight forward exercise to show that thus , the convex subset of of all physical density operators in can naturally be identified with the 3-ball of radius one via the one - to - one correspondence as illustrated in figure 5 .+ * figure 5 . the bloch `` sphere '' * it follows that each entanglement class _ { e} ] to _ { e} ] .for the normal vector field is simply _ { e}}\ ] ] unfortunately , for quantum systems of more than one qubit , such a visualization is by no means as easy .as one might expect , the entanglement of two qubit quantum systems is much more complex than that of one qubit quantum systems .in fact , with each additional qubit , the entanglement becomes exponentially more complex than before .perhaps this is a strong hint as to where the power of quantum computation is coming from ? for this example , the local unitary group is the lie group .the corresponding lie algebra is kronecker sum of two matrices ( operators ) and is defined as where denotes the identity matrix ( operator ) . ] .each density operator lies in the lie algebra . as an immediate consequence of proposition [ ad ] given in section [ section act def ] ,the infinitesimal action is simply the small adjoint action , i.e. , for all .we can now use the bases . 
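a quick numerical confirmation that x_1^2 + x_2^2 + x_3^2 is an entanglement invariant of one qubit : write the density matrix in the ( physics - normalised ) bloch form ρ = ( σ_0 + x·σ ) / 2 , conjugate it by a random element of su(2) and compare the squared lengths of the bloch vectors . this normalisation differs from the ξ_j coordinates used in the text only by a constant factor , which does not affect the invariant .

```python
import numpy as np
from scipy.linalg import expm

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def bloch_vector(rho):
    # Coordinates x such that rho = (sigma_0 + x . sigma) / 2.
    return np.array([np.trace(rho @ s).real for s in sig])

x = np.array([0.3, -0.2, 0.5])
rho = 0.5 * (np.eye(2) + sum(xi * s for xi, s in zip(x, sig)))

a = np.random.default_rng(0).normal(size=3)
U = expm(-0.5j * sum(ai * s for ai, s in zip(a, sig)))   # element of SU(2)
x_new = bloch_vector(U @ rho @ np.conj(U).T)

print(np.dot(x, x), np.dot(x_new, x_new))   # both equal 0.38: the invariant
```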
]{c}\left\ { \xi_{10},\xi_{20},\xi_{30},\xi_{01},\xi_{02},\xi_{03},\right\ } \\ \text { \ and \ } \\ \left\ { \xi_{ij}\mid i , j=0,1,2,3\right\ } \end{array}\ ] ] of the respective lie algebras and to find a more useful expression for , where .each element can be uniquely expressed in the form where and lie in , where , and where is the identity matrix .thus , {ccl}\omega\left ( v\right ) & = & \omega\left ( \sum_{j=1}^{3}\left ( a_{j}\xi_{j0}+b_{j}\xi_{0j}\right ) \right ) \\ & & \\ & = & \omega\left ( a\cdot\xi\boxplus b\cdot\xi\right ) \\ & & \\ & = & ad_{a\cdot\xi\boxplus b\cdot\xi}\\ & & \\ & = & ad_{\left ( a\cdot\xi\right ) \otimes i_{4}}+ad_{i_{4}\otimes\left ( b\cdot\xi\right ) } \\ & & \\ & = & i_{4}\otimes\left ( a\cdot ad_{\xi}\right ) + \left ( b\cdot ad_{\xi } \right ) \otimes i_{4}\end{array}\ ] ] where but as in example 1 , {ll}\left ( \begin{array } [ c]{cc}0 & 0\\ 0 & l_{j}\end{array } \right ) = 0\oplus l_{j } & \text{if } j=1,2,3\\ & \\ \left ( \begin{array } [ c]{cc}0 & 0\\ 0 & 0 \end{array } \right ) = 0 & \text{if } j=0 \end{array } \right . \text { , } \ ] ] where {rrr}0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{array } \right ) \text { , } l_{2}=\left ( \begin{array } [ c]{rrr}0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0 \end{array } \right ) \text { , } l_{3}=\left ( \begin{array } [ c]{rrr}0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array } \right)\ ] ] is the basis of the lie algebra of the special orthogonal group given in appendix b on page .let denote the basis of induced by the chart {ccc}\mathbf{u}\left ( 2^{2}\right ) & \overset{\pi}{\longrightarrow } & \mathbb{r}^{16}\\ & & \\ i\rho=\sum_{i , j=0}^{3}x_{ij}\xi_{ij } & \longmapsto & \left ( x_{00},x_{0\ast } , x_{10},x_{1\ast},x_{20},x_{2\ast},x_{30},x_{3\ast}\right ) \end{array}\ ] ] where {l}\left ( x_{00},x_{0\ast},x_{10},x_{1\ast},x_{20},x_{2\ast},x_{30},x_{3\ast } \right ) \\ \\= \left ( x_{00},\quad x_{01},x_{02},x_{03},\quad x_{10},\quad x_{11},x_{12},x_{13},\quad x_{20},\quad x_{21},x_{22},x_{23},\quad x_{30},\quad x_{31},x_{32},x_{33}\right ) \end{array}\ ] ] in other words , for each pair , denotes the vector field on defined at each point as the tangent vector to the curve at . 
in terms of the above chart , can be written as \cdot\left ( \begin{array } [ c]{c}\partial/\partial x_{00}\\ \partial/\partial x_{0\ast}\\ \partial/\partial x_{10}\\ \partial/\partial x_{1\ast}\\ \partial/\partial x_{20}\\ \partial/\partial x_{2\ast}\\ \partial/\partial x_{30}\\ \partial/\partial x_{3\ast}\end{array } \right ) \text { , } \ ] ] which simplifies to where ` ' denotes the vector cross product .we can now achieve objective 1 .* objective 1 .* _ given an arbitrary density operator _ _ in _ , _ find the dimension of an arbitrary entanglement class __ { e} ] to the entanglement class _ { e} tr\left ( z\right ) \overset{}{\underset{}{tr\left ( z^{2}\right ) } } \det\left ( x_{\ast\ast}\right ) x_{0\ast}x_{0\ast}^{t} \overset{}{\underset{}{x_{0\ast}zx_{0\ast}^{t}}} x_{0\ast}z^{2}x_{0\ast}^{t} x_{0\ast}x_{\ast\ast}x_{\ast0}^{t} \overset{}{\underset{}{x_{0\ast } zx_{\ast\ast}x_{\ast0}^{t}}} x_{0\ast}z^{2}x_{\ast\ast}x_{\ast0}^{t} } \ ] ] where and are given by {ccl}v & = & { \displaystyle\sum\limits_{k=1}^{n } } a^{(k)}\cdot\xi\underset{\ast\text { in } k\text{-th position}}{_{\underbrace { 00\cdots0\ast0\cdots0}}}\\ & & \\ i\rho & = & { \displaystyle\sum\limits_{r_{1},r_{2},\cdots , r_{n}=0}^{3 } } x_{r_{1}r_{2}\cdots r_{n}}\xi_{r_{1}r_{2}\cdots r_{n}}\end{array } \right.\ ] ] we will leave the solution to the corresponding system of pdes to future papers .there is much more that could be said about quantum entanglement .this paper presents only a small part of the big picture .but hopefully this paper will provide the reader with some insight into this rapidly growing research field .since this paper was written , research in quantum entanglement has literally had an explosive expansion , and even now continues to do so .we refer the reader to the references at the end of this paper , which represent only a few of the many papers in this rapidly expanding field .a topological space is an **-dimensional manifold * * if it is locally homeomorphic to , i.e. , if there exists an open cover of such that for each , there is associated a homeomorphism which maps onto an open subset of .we call a * chart on * , and an * atlas * on .an atlas is said to be * smooth * ( ) , if whenever is defined , is a smooth ( ) map of into . a * smooth * ( ) *manifold * is a topological manifold with a smooth atlas .+ * figure 7 .an atlas is smooth if every * * is smooth when defined . *let and be smooth manifolds .then a map is said to be smooth if for every there exist charts of and of containing and respectively such that is smooth .let x be an element of a smooth manifold , and let and be smooth curves in which pass through , i.e. , such that there exists for which then and are said to be * tangentially equivalent * at , written if they are tangent at the point , i.e. , if there is a chart on containing such that it can easily be shown that the relation is independent of the chart selected .a * * tangent vector * * ( also written simply as ) to at is a tangential equivalence class at .the tangent space of at , denoted by , is the set of tangent vectors to at . can be shown to be an -dimensional vector space .let and let be the map if is a chart on , then can be shown to be a chart on . in this way, becomes a smooth manifold and becomes a smooth map . together with the map is called the * tangent bundle * of . 
a * vector field * on a smooth manifold is a smooth map be the set of all vector fields on the smooth manifold .this is easily seen to be a vector space where , for example , the sum of two vector fields is defined by for all .we will now consider the charts of the tangent bundle in a more explicit way .let be a chart on the smooth manifold , and let be an arbitrary point in .thus , for each ( ) consider the smooth curve in which passes through the point at time .then for each such , let [ vec basis ] denote the tangent vector to the curve at .it can be shown that is a vector space basis of the tangent space . moreover , since this construction is respect to an arbitrary point in , it can be shown that we have actually constructed for each a smooth vector field in fact , it can be shown that is a basis of , and hence a local basis of .we can now express each chart explicitly as : where s on the left denote functions of , and where s on the right denote functions of .let and be smooth manifolds , and let be a smooth map , and let an arbitrary point of .we define a vector space morphism as follows : for each , there is a representative smooth curve in which passes through the point and which has as its tangent vector at the point .it follows that is a smooth curve in passing through the point .we define as the tangent vector to at the point .it is then a simple exercise to show that is a vector space morphism .since was an arbitrary point of , this leads to the definition of a smooth map , called the differential of , such that the following diagram is commutative : {cccc}tm & \overset{df}{\longrightarrow } & tn & \\ \downarrow & & \downarrow & \\ m & \overset{f}{\longrightarrow } & n & \text{.}\end{array}\ ] ] in local coordinates , maps the tangent vector to the tangent vector thus , the matrix expression of the linear transformation is just the jacobian matrix let be a smooth manifold , and let be a smooth vector field on .a curve in is said to be an * integral curve * of if is the tangent vector to for each for which is defined . in terms of local coordinates ,an integral curve of a smooth vector field is a solution to the system of ordinary differential equations since is smooth , its coefficients are smooth functions .consequently , it follows from the standard existence and uniqueness theorems for systems of ordinary differential equations that there exists a unique solution for each set of initial conditions .thus , for each in , there exists a unique maximal integral curve passing through at time , and call the * flow * generated by the vector field .we call the * infinitesimal generator * of the flow .it can be easily shown that hence , we are justified in adopting the following suggestive notation : for the flow . 
in terms of our new notation , the properties of the flow can be expressed as * * * .we now show how vector fields can be viewed as partial differential operators .a * derivation * on an algebra is a map such that * ( linearity ) * ( leibnitz rule ) a * lie algebra * is a vector space together with a binary operation : \mathbb{a}\times\mathbb{a\rightarrow a}\text { , } \ ] ] called a * lie bracket * for , such that * ( bilinearity ) {ccc}\left [ \lambda_{1}a_{1}+\lambda_{2}a_{2},\ b\right ] & = & \lambda_{1 } \left [ a_{1},b\right ] + \lambda_{2}\left [ a_{2},b\right ] \\ & & \\ \left [ a,\ \lambda_{1}b_{1}+\lambda_{2}b_{2}\right ] & = & \lambda_{1 } \left [ a , b_{1}\right ] + \lambda_{2}\left [ a , b_{2}\right ] \end{array}\ ] ] * ( skew - symmetry ) = -\left [ b , a\right]\ ] ] * ( jacobi identity ) \right ] + \left [ c,\left [ a , b\right ] \right ] + \left [ b,\left [ c , a\right ] \right ] = 0\ ] ] the set of derivations on an algebra is a lie algebra with lie bracket given by : = d_{1}\circ d_{2}-d_{2}\circ d_{1}\ ] ] let denote the * algebra of real valued functions * on the smooth manifold .then , it follows that is a lie algebra .we will now show how to identify the elements of with derivations in , and thereby show that is more than a vector space .it is actually a lie algebra .each smooth vector field on can be thought of as a directional derivative in the direction as follows : let and let .define as : thus , we have : is a lie algebra of derivations on the algebra .it is enlightening , to view the above in terms of local coordinates . from this perspective , thus , if we use the chain rule and the fact that we have hence , acts as a first order partial differential operator , thereby justifying the notation .so viewing as a first order partial differential operator , we can write and , in particular , where now denotes ( locally ) evaluated at .* the * real general linear group * of all automorphisms of the vector space .this can be identified with the group of all nonsingular matrices over the reals . * the * real orthogonal group * is the group of all automorphisms which preserve the inner product .this can be identified with the group of orthogonal matrices , i.e. , matrices of the form where the superscript denotes the matrix transpose . * the * real special linear group * is the group of all real matrices of determinant is the group of all rigid motions in hyperbolic -space . * the * special orthogonal group * is the group of all orthogonal real matrices of determinant . * * * * this group can be identified with the group of all rotations in about a fixed point such as the origin .* the * complex general linear group * of all automorphisms of the vector space .this can be identified with the group of all nonsingular matrices over the complexes . * the * complex special linear group * is the group of all complex matrices of determinant * the * unitary group * is the group of all unitary matrices over the complex numbers , i.e. , all complex matrices such that where denotes the conjugate transpose . * the special unitary group is the group of all unitary matrices of determinant 1 .let be a lie group . for each element , we define the * right multiplication map * , written , as the map is an autodiffeomorphism of . we let denote the corresponding differential of this diffeomorphism. 
let denote the set of right invariant vector fields on .then as a subset of the lie algebra inherits the structure of a lie algebra .we call the * lie algebra * of the lie group .let denote the identity element of the lie group . since a right invariant vector field completely determined by its restriction to the tangent space via we can , and do , identify the lie algebra with the tangent space , i.e. , the tangent bundle of a lie group is trivial . for it can be shown that is bundle isomorphic to .however , there is some additional and useful structure induced on the lie algebra by the lie group structure of , i.e. , the exponential map .it can be shown that the exponential map is a local diffeomorphism .it also follows that , for each , is a one parameter subgroup of .all one parameter subgroups are of this form .[ [ example - the - lie - algebra - mathbfuleft - nright - of - the - unitary - group - mathbbuleft - nright- . ] ] example : the lie algebra of the unitary group .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the lie algebra is the tangent space to at the identity matrix .hence , consists of all tangent vectors of all curves in which pass through at , i.e. , which satisfy .let be an arbitrary skew hermitian matrix .then is a curve in which passes through at for which .hence , is the lie algebra of all skew hermitian matrices over .let {cc}0 & 1\\ 1 & 0 \end{array } \right ) \text { , } \sigma_{2}=\left ( \begin{array } [ c]{cc}0 & -i\\ i & 0 \end{array } \right ) \text { , } \sigma_{3}=\left ( \begin{array } [ c]{cc}1 & 0\\ 0 & -1 \end{array } \right)\ ] ] denote the pauli spin matrices , and let {cc}1 & 0\\ 0 & 1 \end{array } \right)\ ] ] denote the identity matrix .then the following is a basis of the lie algebra where please note that , although is a lie algebra of complex matrices , it is nonetheless a real lie algebra .thus , the above basis of is a basis of over the reals .but the matrices in are still matrices of complex numbers ![ [ example - the - lie - algebra - mathbfsuleft - nright - of - the - special - unitary - group - mathbbsuleft - nright- . ] ] example : the lie algebra of the special unitary group .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the lie algebra for the special unitary group is the same as the lie algebra of all traceless skew hermitian matrices , i.e. , of all skew hermitian matrices such that .a basis of the lie algebra is [ [ example - the - lie - algebra - mathbfsoleft-3right - of - the - special - unitary - group - mathbbsoleft-3right- . 
]] example : the lie algebra of the special unitary group .^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ finally , we should mention that the lie algebra of the special orthogonal group is the lie algebra of all skew symmetric matrices over the reals .the following three matrices form a basis for [ so3 basis ] {rrr}0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{array } \right ) \text { , } l_{2}=\left ( \begin{array } [ c]{rrr}0 & 0 & 1\\ 0 & 0 & 0\\ -1 & 0 & 0 \end{array } \right ) \text { , } l_{3}=\left ( \begin{array } [ c]{rrr}0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array } \right ) \text { \ .}\ ] ] let be a smooth manifold , and let be a lie group acting on .then the action induces an * infinitesimal action * where is the tangent vector to the curve in at , i.e. , for each element , consider the inner automorphism : and let denote the corresponding differential. we can now define the * big adjoint representation * by where denotes the identity of , and where denotes the group of automorphisms of the lie algebra .we can now in turn define the * little adjoint representation * of the lie algebra by \text { , } \ ] ] where $ ] denotes the lie bracket , and where denotes the ring of endomorphisms of the lie algebra .as the story goes , is actually the lie algebra of the lie group , and we have the following commutative diagram {ccc}\mathfrak{g } & \overset{ad}{\longrightarrow } & end\left ( \mathfrak{g}\right ) \\\exp\downarrow\qquad & & \quad\downarrow\exp\\ g & \overset{ad}{\longrightarrow } & aut(\mathfrak{g } ) \end{array}\ ] ] which relates the big and little adjoints .little adjoint is actually the differential restricted to the identity of the big adjoint .let be the special unitary group .let denote its lie algebra .then is the special orthogonal group and is the lie algebra of .thus , we have the familiar commutative diagram {ccc}su(2 ) & \overset{ad}{\longrightarrow } & so(3)\\ \exp\downarrow\qquad & & \quad\downarrow\exp\\ \mathbb{su}(2 ) & \overset{ad}{\longrightarrow } & \mathbb{so}(3 ) \end{array}\ ] ] used in quantum mechanics and in quantum computation .bennett , charles h. , david p. divincenzo , christopher a. fuchs , tal mor , eric rains , peter w. shor , john a. smolin , and william k. wootters , * quantum nonlocality without entanglement , quant - ph/9804053 .* cerf , nicholas j. and chris adami , `` * * quantum information theory of entanglement and measurement * * , '' in * proceedings of physics and computation , physcomp96 * , edited by j. leao t. toffoli , pp 65 - 71 .see also quant - ph/9605039 .einstein , a. , b. podosky , and n. rosen , * can quantum mechanical description of physical reality be considered complete ? * , phys .rev . * 47 * , 777 ( 1935 ) ; d. bohm , `` quantum theory , '' prentice - hall , englewood cliffs , nj ( 1951 ) .lomonaco , samuel j. , jr . , * a rosetta stone for quantum mechanics with an introduction to quantum computation : lecture notes for the ams short course on quantum computation , washington , dc , january 2000 * , to appear in the ams psamp series .( quant - ph/0007045 ) lomonaco , samuel j. , jr . , * the shor / simon algorithm from the perspective of group representation theory * , to appear in `` * * quantum computation and information * * , '' ams contemporary mathematics series ( 2001 ) .
these lecture notes give an overview , from the perspective of lie group theory , of some of the recent advances in the rapidly expanding research area of quantum entanglement . this paper is a written version of the last of eight one - hour lectures given in the american mathematical society ( ams ) short course on quantum computation held in conjunction with the annual meeting of the ams in washington , dc , usa in january 2000 . more information about the ams short course can be found at the website : http://www.csee.umbc.edu/lomonaco/ams/announce.html
one of the methods of extensive air shower ( eas ) detection is recording fluorescence light emitted by nitrogen molecules in the air along the shower path . for very high energies of the primary particle , enough fluorescence light is produced so that the shower can be recorded from a distance of many kilometers by an appropriate optical detector system .as the amount of fluorescence light is closely correlated to the ionization energy deposit in air , it provides a calorimetric measure of the primary energy .the field of view of a fluorescence detector ( fd ) telescope is divided into many pixels .for example , in case of the pierre auger observatory ( pao ) each pixel views of the sky and records the received light in 100 ns time intervals .a shower passing through the telescope field of view triggers some pixels , which form together a `` shower track '' .the lateral width of this track depends on shower geometry but can well be larger than the pixel size . for a precise energy determination one needs to collect the available signal as completely as possible , i.e. from all detector pixels which receive light from the shower . on the other hand , adding signals from many pixels implies adding the background noise as well .therefore it is important to include in the analysis only a small number of pixels which contain the true shower signal . in this paper , montecarlo simulations of the shower image are presented .based on the spatial energy deposit of shower particles as calculated by corsika it is shown that the lateral shower spread can be well parameterized as a function of the shower age parameter only .the derived parameterization can be used for reconstruction of shower profiles .this is illustrated by applying the new parametrization to the reconstruction of the primary shower energy for several simulated events in a fluorescence detector .the plan of the paper is the following : the definition of the shower width and algorithm of fluorescence light production based on the corsika simulation of energy deposit density are described in section 2 . in section 3 an analytical parametrization is derived and its implementation in energy reconstruction procedure is discussed .conclusions are given in section 4 .photons which constitute an instantaneous image of the shower originate from a range of shower development stages , namely from the surface shown in figure 1 .these simultaneous photons are defined as those which arrive at the fd during a short time window . 
during this ( corresponding to a small change of the shower position in the sky by ) the showerfront moves downward along the shower axis by a small distance , where is the distance from fd to the volume and is the angle between the shower axis and the direction towards fd .this means that the small element of surface corresponds to a small volume .the number of photons which arrive to the fd from volume can be calculated as : where is the distribution of light emitted , is the projection of the surface onto a surface perpendicular to direction of the shower axis , is the light collecting area of the detector , is the light transmission factor , is the normalized fluorescence wavelength spectrum .these photons form an instantaneous image of the shower which can be described by the angular distribution of light recorded by the fd where is the small angle between the direction to the center of the image spot and the direction to volume , is the azimuth angle .the size of shower image is defined as the minimum angular diameter of the image spot containing a certain fraction of the total light recorded by the fd .a shower viewed from a large distance has , to a very good approximation , a circular image , independent of the direction of shower axis .the intensity distribution of light in this image is proportional to the lateral distribution of the emitted fluorescence light in the shower at the viewed stage of evolution .therefore the fraction of light received can be obtained from the corresponding fraction of light emitted around the shower axis where is the ( normalized ) lateral distribution of fluorescence light emitted .here we have neglected the fact that photons from the side of the shower front facing the detector have been emitted at a time later than photons from the farther side of the shower . as will be shown later ,relevant lateral distances are in the range 150 to 300 m and hence this is also the distance scale for the maximum time differences of emission . as there is no significant change of the lateral distribution expected of a shower traversing m of air , the effect of different emission is negligible in our calculation . in the followingwe shall consider showers close to the fd .for these showers , the optical image size is mainly determined by the geometric size of the shower disk .light absorption and multiple scattering cause only a minor , negligible modification of the shower image .the main task is therefore to derive , which is also referred to as the shape function , since the brightness distribution of the shower image depends on the shape of . at the first approximation ,the is proportional to the number of particles in the shower at a given lateral distance , assuming a constant fluorescence yield per particle in the shower . for an electromagnetic showerthis number of charged particles is given by the nishimura - kamata - greisen ( nkg ) function .in this case the can be determined analytically , as has been shown in ref .. 
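to make the definition of the image size concrete , the following sketch evaluates the enclosed fraction of emitted light for a tabulated lateral profile and inverts it to obtain the angular radius of the image spot at a given distance . the nkg - like profile , the moliere radius and the 90% level used below are illustrative assumptions , not values taken from the simulations of this paper .

```python
import numpy as np

def enclosed_fraction(r_grid, dE_dr):
    """Cumulative fraction of the emitted light within lateral distance r,
    from a tabulated (unnormalised) radial profile dE/dr."""
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (dE_dr[1:] + dE_dr[:-1]) * np.diff(r_grid))))
    return cum / cum[-1]

def spot_radius(r_grid, dE_dr, fraction, distance_m):
    """Angular radius (rad) of the image spot containing the requested fraction
    of the light, for a shower viewed from the given distance (small angles)."""
    F = enclosed_fraction(r_grid, dE_dr)
    r_f = np.interp(fraction, F, r_grid)   # lateral radius containing `fraction`
    return r_f / distance_m

# illustrative NKG-like lateral density at shower age s = 1 (not a fit to data)
s, r_mol = 1.0, 90.0                        # assumed age and Moliere radius (m)
r = np.linspace(1.0, 600.0, 600)            # metres from the shower axis
x = r / r_mol
dE_dr = 2.0 * np.pi * r * x ** (s - 2.0) * (1.0 + x) ** (s - 4.5)

zeta = spot_radius(r, dE_dr, 0.90, distance_m=10e3)
print("angular radius containing 90%% of the light at 10 km: %.2f deg" % np.degrees(zeta))
```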
however , derived in this way does not describe the light distribution well in case of hadronic showers .this is due to the fact that the number of particles in a hadronic shower does not follow the nkg distribution well .although the assumption that the amount of emitted light is proportional to the number of particles is adequate for a determination of the fluorescence signal in many cases , it is here not suited for the following reasons .the fluorescence yield is proportional to the ionization energy deposited by the shower rather than to the total number of charged particles .furthermore , the simulated number of particles in a monte carlo calculation depends on the threshold energies chosen by the user , above which particles are simulated .particles falling below the threshold energy are discarded .a better approximation for the fluorescence yield can be obtained by using the energy deposit as a function of atmospheric slant depth interval together with a density- and temperature - dependent fluorescence yield . in this approximationthe distribution of photons emitted around the shower axis is proportional to the lateral distribution of energy deposit , , where is the vertical depth interval and is the shower zenith angle . the distribution of energy deposit is calculated with the corsika shower simulation program as the sum of the energy released by charged particles with energies above the simulation threshold plus the releasable energy fraction of particles discarded due to the energy cut . more specifically , the following approximation is used : where is the ionization energy deposit of all charged particles traversing the depth interval . denotes the energy of particles of type falling below the simulation threshold within this interval . in the followingwe will study the lateral distribution of energy deposit density in air showers , as it is directly proportional to the number of expected fluorescence photons .using corsika a two - dimensional energy deposit distribution around the shower axis is stored in histograms during the simulation process for 20 different vertical atmospheric depths .each of the 20 horizontal layers has a thickness of g/ and corresponds to a certain atmospheric depth : the first one to g/ and the last one to g/ .linear interpolation between the observation levels is performed in order to get the lateral distribution at a given vertical depth located between two corsika observation levels and .the fraction of energy deposit is calculated by numerically integrating the histograms up to the lateral distance .the shower simulations are performed with the hadronic interaction models gheisha ( for interactions below 80 gev ) and qgsjet 01 .electromagnetic interactions are treated by a customized version of the egs4 code . 
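the bookkeeping described above can be sketched in a few lines : the ionization deposit of tracked particles ( plus the releasable energy of particles discarded below threshold ) is accumulated in radial histograms at discrete observation levels , and the profile at an intermediate depth is obtained by linear interpolation . the binning , the number of levels and the toy numbers are assumptions made for illustration only ; the actual grids are those of the corsika configuration used in the paper .

```python
import numpy as np

# illustrative binning; the paper's actual grids come from the CORSIKA setup
R_EDGES = np.linspace(0.0, 500.0, 51)           # lateral distance bins (m)
LEVEL_DEPTHS = np.linspace(100.0, 1000.0, 20)   # vertical observation levels (g/cm^2)

def accumulate_deposit(hist, r, e_ion, e_discarded=0.0):
    """Add the ionization deposit of a tracked particle, plus the releasable
    energy of particles dropped below the simulation threshold, to the radial
    histogram of the current observation level."""
    i = np.searchsorted(R_EDGES, r) - 1
    if 0 <= i < len(hist):
        hist[i] += e_ion + e_discarded

def profile_at_depth(level_hists, depth):
    """Radial deposit profile at an arbitrary vertical depth, by linear
    interpolation between the two bracketing observation levels."""
    i = int(np.clip(np.searchsorted(LEVEL_DEPTHS, depth) - 1, 0, len(LEVEL_DEPTHS) - 2))
    x0, x1 = LEVEL_DEPTHS[i], LEVEL_DEPTHS[i + 1]
    w = (depth - x0) / (x1 - x0)
    return (1.0 - w) * level_hists[i] + w * level_hists[i + 1]

# toy usage with made-up deposits
hists = np.zeros((len(LEVEL_DEPTHS), len(R_EDGES) - 1))
rng = np.random.default_rng(0)
for level in range(len(LEVEL_DEPTHS)):
    for _ in range(1000):
        accumulate_deposit(hists[level], r=rng.exponential(60.0), e_ion=1.0)

h = profile_at_depth(hists, depth=620.0)
print("fraction of deposit within 150 m at X = 620 g/cm^2: %.3f"
      % (h[R_EDGES[1:] <= 150.0].sum() / h.sum()))
```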
to reduce computing time ,a thinning algorithm is selected within corsika .the thinning level of has been chosen with the so - called optimum weight limitation .this ensures that the artificial fluctuations in the longitudinal shower profiles introduced by the thinning method are sufficiently small for this analysis .the kinetic energy thresholds for explicitly tracking particles were set to : 100 , 100 , 0.25 , 0.25 mev for hadrons , muons , electrons and photons , respectively .the knowledge of the function can be used to calculate the true signal ( light ) from shower , which may be divided among several neighboring detector pixels .below we propose a universal parameterization of based on corsika simulations and show how the true signal can be estimated with this parameterization . in the followingwe study the dependence of the lateral energy deposit density on various variables .a natural transverse scale length in air showers , which proves to be useful for obtaining a universal parameterization of the lateral distribution , is given by the molire radius where mev is the scale energy , mev the critical energy and g/ the radiation length in air .the local molire radius at a given atmospheric depth of shower development ( at altitude ) can be obtained by dividing eq .( [ eq - mol1 ] ) by the air density , , and is approximately given by .it is also well known that the distribution of particles in a shower at a given depth depends on the history of the changes of along the shower path rather than on the local value at this depth . to take this into account, the value is calculated at cascade units ( radiation length ) above the considered depth .using the value of the molire radius calculated based on the atmospheric profile ( the us standard atmosphere ) for vertical depth , the fraction of energy deposit density versus the distance in molire units is found .the knowledge of gives a possibility to study variation of the shape of energy deposit density due to properties of the atmosphere .the variation of the density of the atmosphere along the path of a shower affects the molire radius and consequently also the radial particle distribution . to characterize the development stage of a shower, we introduce the shower age parameter where is the atmospheric depth of shower maximum extracted from simulated data was determined by fitting a gaisser - hillas type function to the corsika longitudinal profile of energy deposit . ] . with this definition ,a shower reaches its maximum at .figure [ fig339int ] presents the integrals of energy deposit density and for a vertical proton shower with primary energy eev , obtained at different atmospheric depths .it is seen in figure [ fig339int]a that the shape of this integral distribution varies considerably only at depths smaller than 360 g/ . at larger depths , andin particular around the shower maximum , the variation of integral energy deposit profile is not significant .however , this variation is larger when one plots this integral versus distance measured in molire units , as shown in figure [ fig339int]b . 
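the two quantities introduced above are straightforward to evaluate numerically . the sketch below uses commonly quoted values for the scale energy , the critical energy and the radiation length of air ( the paper's exact numbers are not reproduced here ) and an isothermal toy atmosphere in place of the us standard atmosphere .

```python
import numpy as np

# commonly quoted values for air; assumptions, not the paper's exact numbers
E_S = 21.2      # scale energy in MeV
E_C = 81.0      # critical energy in MeV
X0  = 36.7      # radiation length of air in g/cm^2

def moliere_radius_m(rho_air_g_cm3):
    """Local Moliere radius in metres for a given air density (g/cm^3)."""
    r_m_g_cm2 = X0 * E_S / E_C                  # ~9.6 g/cm^2, altitude independent
    return r_m_g_cm2 / rho_air_g_cm3 / 100.0    # (g/cm^2) / (g/cm^3) = cm -> m

def shower_age(X, X_max):
    """Longitudinal age parameter s = 3X / (X + 2 X_max); s = 1 at shower maximum."""
    return 3.0 * X / (X + 2.0 * X_max)

def rho_air(h_m, rho0=1.225e-3, H=8400.0):
    """Isothermal toy atmosphere (g/cm^3); a US standard atmosphere would be
    used in a real reconstruction."""
    return rho0 * np.exp(-h_m / H)

for h in (0.0, 5000.0, 10000.0):
    print("h = %5.0f m   rho = %.2e g/cm^3   r_M = %6.1f m"
          % (h, rho_air(h), moliere_radius_m(rho_air(h))))

print("age at X = 500 g/cm^2 for X_max = 750 g/cm^2: %.2f" % shower_age(500.0, 750.0))
```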
figure [ fig - en]a shows the dependence of the integral of energy deposit density on energy and primary particle .it is seen that the integral profile only slightly depends on energy and primary particle .the differences are even smaller if we plot the fraction of energy deposit density versus distance in molire units , as shown in figure [ fig - en]b .the same shape of the ) profile for different primaries and energies means that variations of the profile are mainly due to the atmospheric effect i.e. dependence of the molire radius on altitude . for the same shower geometry ,there are different altitudes of the maxima of proton and iron showers , and in consequence different values of molire radius .since determines the lateral spread of particles in the shower , the shape function becomes broader for iron showers ( higher altitude of shower maximum than for proton shower ) .figure [ fig - inc ] presents the dependence of the integral of energy deposit profile on zenith angle .we note that corsika energy deposit lateral profiles are obtained for horizontal planes at the given observation level , so if one compares energy deposit densities between vertical and inclined showers , a projection of densities from horizontal plane to the plane normal to the shower axis is performed for inclined showers .the corrected profile for a shower inclined at is shown in figure [ fig - inc]a by the solid line .it is seen that this profile differs from the profile obtained for a vertical shower .this means that the shape of depends on the zenith angle .this dependence can be understood if one takes into account the influence of the atmospheric effect on the energy deposit profile .for a homogeneous atmosphere , the shape function for inclined and vertical showers must be the same for the same development stage because the molire radius does not change with altitude . in case of an inhomogeneous atmosphere, differences of the shape function between vertical and inclined shower should be proportional to the differences of the molire radius ( i.e density of air ) .thus if changes of the shape function are due only to the atmospheric effect , then profile should be the same for vertical and inclined showers .figure [ fig - inc]b confirms this assumption . the analysis of figs .2b , 3b and 4b leads to the following conclusion : _ the lateral shape of the energy deposit density versus distance from shower axis measured in molire units is independent of the primary energy , primary particle type and zenith angle .it depends , to a good approximation , only on the shower age_. figure [ fig9int ] confirms this conclusion , too . in this figure we present the integral of the energy deposit density for different age parameters for 10 individual proton and 5 individual iron showers with different zenith angles ( ) and energy 10 eev .it is seen that the shower - to - shower fluctuations are strongly reduced for a given age when we correct profiles for the atmospheric effect i.e. 
plot .also , there are no differences in the shape of for showers with different zenith angles and primary particle type .this means that it is possible to find a universal function which describes the shape of energy deposit density as a function of shower age only .following our earlier work we will use the function where the parameters and are assumed to be functions of shower age .fits of this functional form to the integral of energy deposit density were performed for the data from figures [ fig9int]b , d , f and are shown in figure [ fit - age ] .the values of the parameters and for different shower ages are presented in figure [ fig - par ] .the age dependence of and parameters is well described by thus , eqs .( [ eq - fit ] ) , ( [ eq - fita ] ) and ( [ eq - fitb ] ) give us a universal function which describes the fraction of energy deposit density within a specified distance from the shower axis for different energies , zenith angles and primary particles .moreover , eq . ( [ eq - fit ] ) can be used to simulate the size of the shower image not only at shower maximum like in ref . , but also for any shower development stage .inverting eq .( [ eq - fit ] ) and taking into account the distance from the detector to the shower ( ) we can find the angular size of the image that corresponds to a certain fraction of the total fluorescence light signal : as the shape of the lateral distribution of energy deposit can be well described by eq .( [ eq - fit ] ) , it may be used to take into account the knowledge on the shower width in the procedure of shower reconstruction in the fluorescence detector .one of the first steps in shower energy reconstruction is the calculation of the light profile at the aperture of the detector , based on the signal recorded by the detector pixels .this signal is converted to the number of equivalent photons at the detector diaphragm .for example , one procedure to determine such a profile is described in .this algorithm uses as input the reconstructed geometry to locate the shower image on the fd telescope camera in 100 ns intervals .next , the signal ( charge ) and noise from pixels lying within a predetermined angular distance from the instantaneous position of the image spot center are collected to find the radius that maximizes the signal - to - noise ratio * over the whole * shower track .finally , the charge in each 100 ns time interval ( time slot ) , , within that radius is found and converted to the number of photons using calibration constants .this procedure works well for distant showers , when the light collected within the radius corresponds to about 100% of the true signal , but some differences between the signal within and the true signal may exist for nearby showers . in the following we investigate this problem and estimate a correction to the described reconstruction algorithm .the necessary shower reconstructions were performed using flores - eye and fdsim programs .first , fdsim was used to generate events based on the gaisser - hillas parameterization .next , the geometrical and energy reconstruction were performed using flores - eye .the reconstructed geometries for some events are listed in table [ tab1 ] . using these geometries, we find the collected signal within angular distance at each time interval depends on geometry , but for events listed in table [ tab1 ] it equals about . 
] .then , for the given the effective radius around shower axis and the fraction of light based on the function was calculated .the fraction for events listed in table [ tab1 ] is shown in figure [ fig - frac ] .it is seen that for event1 changes from 89% for a distance - to - shower km to 87% for km .for other events , the collected fraction of the signal within increases with increasing distance - to - shower and equals on average about 91% , 94% and 99% for event2 , event3 , event4 , respectively .in other words , some portion of the signal , which falls beyond is missing in the shower reconstruction procedure . to take into account this lost portion of the signal ,the signal is rescaled according to formula . in this way, one takes into account the shape of lateral distribution of energy deposit and obtains the new integrated charge for each time slot .thus the part of the signal which was contained in neighboring pixels outside is accounted for .it should be pointed out that in general , any reconstruction procedure has to take into account the pixellation of the detector .independent of the specific approach , the correction procedure developed in this work can be applied . in the following ,we demonstrate the influence of this correction on the light profile and on energy determination . in figure [ fig - prof2]ait is shown that our correction leads to considerable differences between the ( dashed line ) and profile ( solid line ) for a nearby shower ( event 1 ) . in case of distant showers ( like event4 )the profile is almost unchanged ( see figure [ fig - prof2]b ) .it should also be noted that changes in the detector - to - shower distance are accounted for in this approach .a differential correction is applied ( i.e. for each time slot ) , which also leads to a better reconstruction of the longitudinal shape ( and thus ) of the shower . accepting only a fraction of the signalcontained within directly influences the reconstructed primary energy of the shower . in table[ tab2 ] we present the influence of the correction on the gaisser - hillas fit to the reconstructed number of particles in the showers .it is seen that this correction changes both the number of particles at the shower maximum and the position of the shower maximum .these changes lead to different estimates of primary energy . in the last columnthe relative differences are listed .one sees that is always positive and decreases from 14% for a distance to shower maximum of =6.5 km to 2% for =23 km .in this work , the distribution of light in the shower optical image is analyzed , based on the lateral distribution of energy deposited by the shower , as derived from corsika simulations .the lateral distribution of energy deposited is parameterized with a functional form inspired by the nkg distribution .the angular distribution of photons arriving simultaneously at the detector ( i.e. the intensity distribution of light in the instantaneous image of the shower ) is obtained .the shape of this distribution can be approximated by a universal function that depends on the shower age only .this universal function is used to derive a correction to the shower energy due to the fraction of light falling into detector pixels located far from the center of the shower image . 
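the rescaling step itself can be summarised very compactly : the charge collected in each 100 ns time slot is divided by the fraction of the shower image expected inside the collection radius , so that the light spilling into outlying pixels is restored . the charges and fractions below are invented numbers used only for illustration ; in practice the fractions come from the parameterisation of the previous section , evaluated for the reconstructed geometry and shower age .

```python
import numpy as np

def corrected_charges(charges, fractions):
    """Rescale the charge collected in each 100 ns time slot by the fraction of
    the shower image expected inside the collection radius, so that light
    falling into outlying pixels is accounted for."""
    return np.asarray(charges, dtype=float) / np.asarray(fractions, dtype=float)

# toy example: a nearby shower where only ~87-89% of the light is collected
q_zeta   = np.array([120.0, 250.0, 410.0, 380.0, 190.0])   # photons within zeta_opt
f_inside = np.array([0.89, 0.88, 0.88, 0.87, 0.87])        # enclosed fraction per slot
q_true   = corrected_charges(q_zeta, f_inside)

print("uncorrected total:", q_zeta.sum())
print("corrected total:  ", q_true.sum())
print("relative correction: %.1f%%" % (100 * (q_true.sum() / q_zeta.sum() - 1)))
```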
in the usual procedure of shower reconstruction , the signal - to - noise ratio is optimized , so that pixels lying far from the center of the shower image are not included in the analysis . the percentage of shower signal in those outlying pixels was determined in this paper based on the lateral distribution of light in the shower image . the signal recorded by the fluorescence detector in the accepted central pixels is rescaled , so that a corrected light profile of the shower is obtained . for the events examined , this correction increases the estimated shower energy by 2 - 14% , depending on the detector - to - shower distance .

_ acknowledgements . _ this work was partially supported by the polish committee for scientific research under grants no . pbz kbn 054/p03/2001 and 2p03b 11024 and in germany by the daad under grant no . mr is supported by the alexander von humboldt foundation .

table [ tab1 ] : characteristics of events used for the comparisons in this paper . the shower zenith angle , azimuth angle , and core position are measured relative to the fd detector .
the light intensity distribution in a shower image and its implications to the primary energy reconstructed by the fluorescence technique are studied . based on detailed corsika energy deposit simulations , a universal analytical formula is derived for the lateral distribution of light in the shower image and a correction factor is obtained to account for the fraction of shower light falling into outlying pixels in the detector . the expected light profiles and the corresponding correction of the primary shower energy are illustrated for several typical event geometries . this correction of the shower energy can exceed 10% , depending on shower geometry .
agent - based models ( abms ) are an attempt to understand how macroscopic regularities may emerge through processes of self - organization in systems of interacting agents .one of the main purposes of this modeling strategy is to shed light on the fundamental principles of self - organized complexity in adaptive multi - level systems in order to gain an insight into the microscopic conditions and mechanisms responsible for the temporal and spatial patterns observed at aggregate levels .therefore , abms are sometimes considered as a methodology to provide a > > theoretical bridge < < ( :148 ) between micro and macro theories ( see also ) . while used as a tool in economics , sociology , ecology and other disciplines , abms are often criticized for being tractable only by simulation .this paper addresses this issue by applying markov chain and information - theoretic tools to a particular abm with the specific objective to better understand the transition from the most informative agent level to the levels at which the system behavior is typically observed .a well posed mathematical basis for linking a micro - description of a model to a macro - description may help the understanding of many of the observed properties and therefore provide information about the transition from the interaction between individual entities to the complex macroscopic behaviors observed at the global level . for this purpose, the paper draws upon a recently introduced markov chain framework for aggregation in agent - based and related computational models ( see and also ) .the starting point is a microscopic markov chain description of the dynamical process in complete correspondence with the dynamical behavior of the agent model , which is obtained by considering the set of all possible agent configurations as the state space of a huge markov chain an idea borrowed from .namely , if we consider an abm in which agents can be in different states this leads to a markov chain with states .moreover , in models with sequential update by which one agent is chosen to update its state at a time , transitions are only allowed between system configurations that differ with respect to a single agent .such an explicit micro formulation enables the application of the theory of markov chain aggregation namely , lumpability in order to reduce the state space of the micro chain and relate microscopic descriptions to a macroscopic formulation of interest .namely , when performing simulations of an abm we are actually not interested in all the dynamical details , but rather in the behavior of certain macro - level properties that inform us about the global state of the system . 
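for a small number of agents the micro chain can be written down explicitly . the sketch below ( in python , an implementation choice not prescribed by the paper ) builds the transition matrix on all 2^n configurations for a binary - state model with sequential update , anticipating the contrarian voter rule defined later in the paper ; the pair - selection probabilities omega are set to homogeneous mixing purely for illustration .

```python
import numpy as np
from itertools import product

def micro_transition_matrix(N, omega, p):
    """Micro chain on all 2^N agent configurations for sequential update.
    omega[i][j] is the probability of selecting the ordered agent pair (i, j);
    p is the contrarian rate.  Non-zero off-diagonal entries connect only
    configurations that differ in a single agent."""
    configs = [np.array(c) for c in product((0, 1), repeat=N)]
    index = {tuple(c): k for k, c in enumerate(configs)}
    P = np.zeros((2 ** N, 2 ** N))
    for k, x in enumerate(configs):
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                imitate = x.copy(); imitate[i] = x[j]         # voter rule
                oppose  = x.copy(); oppose[i]  = 1 - x[j]     # contrarian rule
                P[k, index[tuple(imitate)]] += (1.0 - p) * omega[i][j]
                P[k, index[tuple(oppose)]]  += p * omega[i][j]
    return P

N = 3
omega = np.full((N, N), 1.0 / (N * (N - 1)))   # homogeneous mixing, no self-pairs
np.fill_diagonal(omega, 0.0)
P = micro_transition_matrix(N, omega, p=0.1)
assert np.allclose(P.sum(axis=1), 1.0)          # every row is a probability vector
```

as expected from the sequential update , the only non - zero off - diagonal entries of the resulting matrix connect adjacent configurations , i.e. configurations that differ with respect to a single agent .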
in opinion dynamics , and in binary opinion models in particular , the typical level of observation is the number of agents in the different opinion states or , respectively , the average opinion ( due to the analogy to spin systems often called > > magnetization < < ) . the explicit formulation of abms as markov chains enables the development of a mathematical framework to link a micro chain corresponding to an abm to such a macro - level description of interest . more precisely , from the markov chain perspective , the transition from the micro to the macro level is a projection of the micro chain with state space onto a new state space by means of a ( projection ) map from to . the meaning of the projection is to lump sets of micro configurations in into an aggregate set according to the macro property of interest . such a situation naturally arises if the abm is observed not at the micro level of , but rather in terms of a measure on by which all configurations in that give rise to the same measurement are mapped into the same macro state , say . an illustration of such a projection is provided in fig . [ fig : projectiongeneral ] ( the micro process is observed at a higher level and this observation defines another macro level process ; the micro process is a markov chain with transition matrix ; the macro process is a markov chain only in the case of lumpability ) . two things may happen by projecting the microscopic markov chain onto a coarser partition . first , the macro process is still a markov chain , which is the case of lumpability . then markov chain tools can be used to compute the dynamical quantities of interest and a precise understanding of the model behavior is possible . as shown in , this depends essentially on the symmetries implemented in the model . secondly , markovianity may be lost after the projection , which means that memory effects are introduced at the macroscopic level . noteworthy , in abms as well as more generally in markov chains , this situation is the rule rather than an exception . the first part of this paper derives exact markov chain descriptions for the complete and a perfect two - community graph , while the second part is devoted to the study of the non - markovian case . for these purposes , the explicit construction of an abm 's microscopic transition kernel is a necessary starting point because it helps , on the one hand , to establish the conditions for which the macro - level process remains markovian ( i.e. , lumpability ) , and enables , on the other hand , the use of information - theoretic measures for the `` closedness '' of an aggregate description . a series of measures , among them conditional past - future mutual information and micro - to - macro information flow , have been developed to quantify the deviations from an idealized description which models the dynamics of the system by the state variables associated to a coarser level . see also for the mathematical analysis of the relation between different levels of description in complex multi - level systems . in our context , this allows for a quantification of the memory effects that are introduced by a global aggregation over the agent population without sensitivity to micro- or mesoscopic structures , which presents a first step to study how microscopic heterogeneity in abms may lead to macroscopic complexity when the aggregation procedure defines a non - markovian macro process . to my knowledge , this paper is the first to apply these concepts to an abm .
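whether a given projection is lumpable can be tested directly on the micro transition matrix . the sketch below implements the standard strong - lumpability criterion ( every state of a block must have the same total probability of jumping into every other block ) and applies it to the partition of the configurations by the number of agents holding one of the two opinions ; it reuses the matrix p and the configuration ordering from the previous sketch .

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Kemeny-Snell criterion: P is (strongly) lumpable with respect to the
    partition iff, for every pair of blocks (A, B), the probability of jumping
    from a state into B is identical for all states in A."""
    for A in partition:
        for B in partition:
            into_B = P[np.ix_(A, B)].sum(axis=1)   # one value per state in A
            if np.ptp(into_B) > tol:
                return False
    return True

# lump the 2^N micro configurations by the number of agents in state 1
N = 3
configs = [tuple(int(b) for b in format(m, "0%db" % N)) for m in range(2 ** N)]
partition = [[m for m, c in enumerate(configs) if sum(c) == k] for k in range(N + 1)]

# P is the homogeneous-mixing micro matrix from the previous sketch; for that
# choice the test succeeds, while a heterogeneous omega generally makes it fail.
print(is_strongly_lumpable(P, partition))
```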
in this paper the contrarian voter model ( cvm )is used as a first simple scenario .the cvm is a binary opinion model where agents placed on a network can adopt two different opinions : and . in the pure voter model agents are chosen at random and they align in the interaction because one of them imitates the other . as in other binary models of opinion dynamics( see for two well - known variants and for an overview ) this mechanism of local alignment leads to a system which converges to a final profile of global conformity ( consensus ) in which all agent share the same opinion .contrarian behavior , then , relates to the presence of individuals that do not seek conformity under all circumstances or to the existence of certain situations in which agents would not desire to adopt the behavior or attitude of their interaction partner . in the cvmstudied here contrarian behavior is included by introducing a small probability with which agents to not imitate their interaction partner , but adopt precisely the opposite opinion .there are several different ways to include nonconformity behavior into a binary opinion model and the approach adopted here is probably the most simple one . in our choice, we basically follow ref . which , based on the concept of contrarian investment strategies in finance , is the first study to introduce contrarian behavior into a model of opinion dynamics ( namely , into galam s majority model ) .while the majority model without contrarians is characterized by a relatively fast convergence to complete consensus , the introduction of only a small rate of contrarian choices leads to the coexistence of the two opinions with a clear majority - minority splitting .noteworthy , as the contrarian rate increases further , the model exhibits a phase transition to a disordered phase in which no opinion dominates in the population .similar observations have been made for the sznajd model .more recently the literature often distinguishes between two types of nonconformity : ( i. ) anti - conformity or contrarian behavior and ( ii . ) independent or inflexible agents .see for opinion models that include independent or inflexible agents .the importance of a distinction between individuals that generally oppose the group norm or act independently of it is , from the socio - psychological perspective , relatively obvious .the fact that these two behaviors may also give rise to qualitatively different dynamical properties , however , has been established only recently . herewe stick to contrarians .the voter model with contrarians presented in is probably the one that relates most to the model used here .the main difference is that a fixed number of agents always acts in a contrarian way whereas in the present model all agents take contrarian choices with a small probability . in that setting, could not observe the phase transition from majority - minority splitting to disorder , but rather a change from a uniform to a gaussian equilibrium distribution .while this difference in comparison with the sznajd and galam models has been attributed to the linearity of the cvm in , this study shows that there is in fact an order - disorder phase transition in the cvm as well .however , the ordered phase can be observed only below a very small contrarian rate of at which the equilibrium distribution is uniform ( see sec .[ cha:5.hmstatdyn ] ) , in accordance with . 
in the setting of with a fixed number of contrarian agents , however , this value is already reached , on average , with only a single contrarian , independent of the population size .notice , finally , that the complete graph plays an exceptional role for the analytical treatment of nonconformity models commended on above . from the markovian point of view, this is due to the fact that for the complete graph binary opinion models are lumpable that is reducible without loss of information to a macroscopic description in terms of the average opinion or > > magnetization < < .this paper analyses the complete graph as well , but it goes beyond it by studying the cvm on a perfect two community graph . in that case , a loss - less macro description is obtained by taking into account separately the average opinion in the two sub - graphs , that is , by a refinement of the level of observation . alongside with the analysis of the respective model dynamics , these two cases allow to address some very interesting questions concerning the relation between the two coarse - grainings .for instance , it is possible to illustrate why lumpability ( in its strong as well as in its weak form ) fails for the two community graph .moreover , looking at the two - community scenario from the global perspective of total > > magnetization< < allows for the exact computation of memory effects that emerge at the macroscopic level .the sequel of the paper is organized as follows .[ cha:5.cvm ] introduces the cvm and derives the corresponding microscopic markov chain .[ cha:5.hmand2com ] deals with the model on the complete and the two - community graph with a particular focus on the stationary dynamics of the model .the model dynamics are studies in terms of the contrarian rate and the coupling between the two - communities . after the discussion of the two stylized topologies , sec .[ cha:5.networkdynamics ] shows the effect of various paradigmatic networks on the macroscopic stationary behavior . in sec .[ cha:5.micromesomacro ] we return to the two - community cvm and find with it an analytical scenario to study the discrepancy between a mean - field model ( homogeneous mixing ) and the model on a more complex ( though still very simple ) topology .it shows that memory effects are introduced at the macro level when we aggregate over agent attributes without sensitivity to the microscopic details .the cvm is a binary opinion model where agents can adopt two different opinions : and .the model is an extension of the voter model ( vm ) in order to include a form of contrarian behavior . at each step , an agent ( ) is chosen at random along with one of its neighbors ( ) .usually ( with probability ) , imitates ( vm rule ) , but there is also a small probability that agent will do the opposite ( contrarian extension ) . more specifically ,if holds opinion and meets an agent in , will change to with probability , and will maintain its current state with probability . likewise ,if and are in the same state , will flip to the opposite state with probability . from the micro - level perspective , the cvm implements an update function of the form . for further convenience, we denote the current state of an agent by and the updated state at time as .the map can then be written as where denotes the opposite attribute of . 
in each iteration ,two agents are chosen along with a random variable that decides whether the voter ( ) or the contrarian rule ( ) is performed .the probability for that is .notice that the update rule is equal for all agents and independent from the agent choice .therefore the probability that an agent pair is chosen to perform the contrarian rule can be written as . respectively, we have for the vm rule .moreover , the respective probabilities are equal for each step of the iteration process . consequently , the cvm iteration may be seen as a ( time - homogeneous ) random choice among deterministic options , and is therefore a markov chain ( see and for all details ) . due to the sequential update scheme only one agent ( namely , agent ) may change its state at each time step .this means that a non - zero transition probabilities exist only between agent configurations that differ in at most one element ( agent ) .we call those configuration adjacent and write if and . considering that ( for contrarian rule ) is chosen with probability and ( vm rule ) with , and that this choice is independent of the agent choice , the micro - level transition probability between two adjacent configurations is given by notice that the configuration space of the cvm ( ) is the set of all bit - strings of length , and that therefore , the micro - level process for the cvm corresponds to a random walk on the hypercube ( as for the vm , see ). notice also that the cvm leads to a _ regular _ chain ( as opposed to an absorbing random walk for the original vm ) because whenever , there is a non - zero probability that the process leaves the consensus states and .( [ eq : phatvmcontrarian ] ) tells us that this probability is precisely .therefore , the system does not converge to a fixed configuration and the long - term behavior of the model can be characterized by its stationary distribution .the most natural level of observation in binary state dynamics is to consider the temporal evolution of the attribute densities , or respectively , the number of agents in the two different states .while a mean - field description would typically formulate the macro dynamics as a differential equation describing the evolution of attribute densities , the markov chain approach operates with a discrete description ( in time as well as in space ) in which all possible levels of absolute attribute frequencies and transitions between them are taken into account .let us denote as the number of -agents in the population ( ) and refer to this level of observation as global or _full aggregation_. in terms of markov chain aggregation , a macro description in terms of is achieved by a projection of the micro - level process on the hypercube onto a new process with the state space defined by the partition , where .notice that in hypercube terminology that level of observation corresponds to the hamming weight of a configuration and each collects all micro configurations with the same hamming weight . .the resulting macro process is , in general , a non - markovian process on the line . 
]one important observation in has been that homogeneous mixing is a prerequisite for lumpability with respect to , and that microscopic heterogeneities ( be it in the agents or in their connections ) translate into dynamical irregularities that prevent lumpability .this means that full aggregation over the agent attributes ( ) leads in general to a non - markovian macro process .we illustrate this process in fig .[ fig : fullaggregation ] .still , the process obtained by the projection from micro to macro is characterized by the fact that from an atom the only possible transitions are the loop to , or a transition to neighboring atoms and because only one agent changes at a time . however , the micro level transition rates ( [ eq : phatvmcontrarian ] ) depend essentially on the connectivity structure between the agents , and therefore , the transition probabilities at the macro level ( denoted as in fig .[ fig : fullaggregation ] ) are not uniquely defined ( except for the case of homogeneous mixing ) .that is , for two configurations in the same macro state the probability to go to another macro state ( e.g. , ) may be very different which violates the lumpability conditions of thm .6.3.2 in .before we use the cvm to address questions related to non - lumpable projections , we discuss the model behavior using markov chain tools for two stylized situations .this section analyses the behavior of the cvm for homogeneous mixing and the two - community graph . as shown in at the example of the vm , the symmetries in the interaction networkcan be used to define a markovian coarse - graining . as both interaction topologies are characterized by a large symmetry group , markov chain aggregation considerablyreduces the size of the micro chains such that the important entities of interest ( e.g. , stationary distribution ) can be computed on the basis of the respective macroscopic transition matrices .the case of homogeneous mixing is particularly simple .let us consider that the model is implemented on the complete graph without loops where the probability to choose a pair of agents becomes whenever and .consequently , since all agents interact with all the others with equal probability , the respective transition rates depend only on the numbers and of agents in the two states : \vspace{6pt}\\ & = ( n - k ) \left [ ( 1-p ) k \omega + p ( n - k)\omega \right]\vspace{6pt}\\ & = ( 1-p ) \frac{(n - k ) k}{n(n-1 ) } + p \frac{(n - k)(n - k-1)}{n(n-1)}. \end{array } \label{eq : pmacrovmcontrarian01}\end{aligned}\ ] ] similarly , we obtain for and finally , [ cols="^,^ " , ]let us consider that for the cylinders of length 3 .as noted above , the grammar of the system is determined by the fact that and .therefore , as illustrated in fig .[ fig : macro3cylinder ] , for any with there are nine possible paths ] for past and for future , we have to sum over all meso level paths that contribute to the given macro path .let us denote a meso level path as ] can be realized in four different ways for each or . ] with : \\ \nonumber [ ( m-1 \ l),(m \ l),(m \l+1)]\\ \nonumber [ ( m \ l-1),(m \ l),(m+1 \ l)]\\ \nonumber [ ( m \ l-1),(m \ l),(m \ l+1)]\end{aligned}\ ] ] the same reasoning can be applied to derive the probabilities for cylinders of length four even though the situation becomes slightly more complicated , as illustrated on the r.h.s . of fig .[ fig : meso3cylinder ] . 
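the block ( cylinder ) probabilities needed here can also be estimated directly from a long realisation of the macro process , which is a convenient numerical cross - check of the analytical path counting . the sketch below estimates block entropies from a symbolic time series and combines them into a conditional past - future mutual information ; this is one standard way of writing such a measure and is not necessarily identical to the exact definitions used in the paper . the i.i.d . toy series only serves to show that the measure vanishes for a memoryless process .

```python
import numpy as np
from collections import Counter

def block_entropy(series, L):
    """Shannon entropy (nats) of length-L blocks of a discrete time series."""
    counts = Counter(tuple(series[i:i + L]) for i in range(len(series) - L + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def past_future_information(series, past_len=1):
    """Conditional mutual information between `past_len` past symbols and the
    next symbol, given the present symbol; it vanishes for a first-order
    Markov chain."""
    H = {L: block_entropy(series, L) for L in {1, 2, past_len + 1, past_len + 2}}
    return H[past_len + 1] + H[2] - H[1] - H[past_len + 2]

# i.i.d. toy series (a stand-in for the aggregated macro orbit x_t of the CVM)
rng = np.random.default_rng(7)
series = rng.integers(0, 3, size=200_000).tolist()

for L in (1, 2):
    print("I(%d step(s) of past): %.2e nats" % (L, past_future_information(series, L)))
```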
on the basis of the probabilities of blocks of length three and fourrespectively , the computation of the markovianity measures and is straightforward . all that is needed is to compute the respective block entropies . fig .[ fig : in.all ] shows ( dashed curves ) and ( solid curves ) as a function of the coupling between the two communities for a system of agents ( ) .the different curves represent various different contrarian rates from to .notice the log - linear scaling of the figure .( dashed curves ) and ( solid curves ) as a function of the coupling between the two communities for a system of agents .the different curves represent various different contrarian rates from to , see legend . ]what becomes clear in fig .[ fig : in.all ] , first of all , is that the deviation from markovianity is most significant for small inter - community couplings .this means , in the reading of , that the information provided by pasts of length about the future state ( beyond that given by the present ) is larger than zero for small . in general andnot surprisingly , which means that both the first and the second outcome before the present provide a considerable amount of information .in fact , the numbers indicate that the first and the second step into the past contribute in almost the same way .noteworthy , the two measures and behave in the same way from the qualitative point of view which suggests that the computationally less expensive can be well - suited for the general markovianity test .the inset in fig .[ fig : in.all ] shows the situation for values around ( homogeneous mixing ) as well as .as we would expect by the strong lumpability of homogeneous mixing , and are zero in the case .also if the inter - community coupling becomes larger than the coupling within communities ( a situation that resembles a bipartite graph ) and are very small , indicating that a markovian macro description ( i.e. , ideal aggregation , cf . , p. 140 and , pp .61 63 ) describes well these situations .finally , we notice in fig .[ fig : in.all ] that the measures do not generally increase monotonically with a decreasing ratio which is most obvious for the example with a very small ( green curves ) .this is somewhat unexpected and it indicates the existence of certain parameter constellations at which macroscopic complexity ( for this is how non - markovianity may be read ) is maximized . to obtain a better understanding of this behavior , the measures and are plot in fig .[ fig : in.p ] as a function of the contrarian rate .notice again the log - linear scaling of the plot . and as a function of the contrarian rate for various coupling ratios and a system of . ]it becomes clear that there is a strong and non - trivial dependence of the markovianity measures on the contrarian rate .namely , and are very small if is relatively large but they are also relatively small if becomes very small .there is a parameter regime in between in which deviations from markovianity become most significant .notice that in the inset of fig .[ fig : in.p ] the same curves are shown on a double - logarithmic scale .this shows , first , that and for very small are still significantly larger compared to the case of relatively large ( say ) .secondly , we observe that and actually vanish for . indeed , it is possibly to show that the two - community cvm with satisfies lumpability conditions independent of the topological parameter ( cf . 
, pp.116/17 ) .finally , a detailed picture of the dependence of on the contrarian rate is provided in fig .[ fig : in.p.fine ] .the plot compares the cases and in order to show that the peaks in the depend also on . for the interpretation of this behavior , notice that the at which deviations from markovianity become largest , lie precisely in the parameter interval in which switching times between the two complete consensus states become minimal .compare fig .[ fig : cvm.mt.0ton ] in sec .[ cha:5.majminswitching ] . on the contrarian rate .blue curves correspond to and red curves to . in the first casethe peak is at around , in the latter at . ] all in all , this analysis shows that global aggregation over an agent population without sensitivity to micro- or mesoscopic structures leads to memory effects at the macroscopic level .this paper has provided an analysis of the cvm on the complete and the two - community graph . based on the previous work on markov chain aggregation for abms ,higher - level markov chain descriptions have been derived and allow a detailed understanding of the two cases .a large contrarian rate leads to a process which fluctuates around the states with approximately the same number of black and white agents , the fifty - fifty situation being the most probable observation .this is true for homogeneous mixing as well as for the two - community model .however , if is small , a significant difference between the two topologies emerges as the coupling between the two communities becomes weaker . on the complete graph the population is almost uniform for long periods of time , but due to the random perturbations introduced by the contrarian rule there are rare transitions between the two consensus profiles . on the community graph , an effect of local alignment is observed in addition to that , because the system is likely to approach a meta - stable state of intra - community consensus but inter - community polarization .a order - disorder phase transition as the contrarian rate increases has been observed on the complete graph in several previous contrarian opinion models ( e.g. , ) . for the cvm , in the transition from consensus switching to disorderthere is a phase in which the process leads uniform stationary distribution in which all opinion frequency levels are observed with equal probability ( ) .the contrarian rate at which this happens is and depends inversely on the system size such that a model with a single contrarian agent fails to enter the ordered regime .this confirms and explains the behavior observed in for a model with a fixed number of contrarian agents .a particular focus of this paper has been on the effect of inhomogeneities in the interaction topology on the stationary behavior . in this regard ,the two - community cvm served as a suitable scenario to assess the macroscopic effects introduced by a slight microscopic heterogeneity .namely , homogeneous mixing compatible with the usual way of aggregation over all agents leads to a random walk on the line with states whereas the two - community model leads to a random walk on a 2d lattice with states .as the latter is a proper refinement of the former this gives us means to study the relation between the two coarse - grainings in a markov chain setting . in this regard , this paper has made visible the reasons for which lumpability fails , and it has also provided a first analysis of the macroscopic memory effects that are introduced by heterogeneous interaction structures . 
in this regard , the paper demonstrates that information - theoretic measures are a promising tool to study the relationship between different levels of description in abms .there are various issues that deserve further discussion .for instance , is the emergence of memory in the transition from the micro to the macro level a useful characterization for the complexity of a certain system ?namely , the theory of markov chain aggregation makes explicit statements about when a micro process is _ compressible _ to a certain macro - level description .this links non - lumpability to computational incompressibility , one of the key concepts in dynamical emergence ( * ? ? ?* ; * ? ? ?* among others ) .this point shall be discussed in a forthcoming paper .finally , i would like to mention the possibility of applying the arguments developed in the second part this paper to the case of models with absorbing states as , for instance , the pure vm ( ) . in that case, the quasi - stationary distribution ( see ) takes the role of or respectively in the computation of cylinder measures .one interesting issue to be addressed in this regard is to reconsider the question of weak lumpability for the vm .finally , to understand how microscopic heterogeneity and macroscopic complexity are related , numerical experiments with different network topologies are another promising way to continue the analysis started in this paper .banisch , s. , lima , r. , and arajo , t. , aggregation and emergence in agent - based models : a markov chain approach , in _ proceedings of the european conference on complex systems 2012 _ , eds .gilbert , t. , kirkilionis , m. , and nicolis , g. , springer proceedings in complexity ( springer international publishing , 2013 ) , isbn 978 - 3 - 319 - 00394 - 8 , pp .37 .izquierdo , l. r. , izquierdo , s. s. , galn , j. m. , and santos , j. i. , techniques to understand computer simulations : markov chain analysis , _ journal of artificial societies and social simulation _ * 12 * ( 2009 ) 6 .jacobi , m. n. and grnerup , o. , a spectral method for aggregating variables in linear dynamical systems with application to cellular automata renormalization , _ advances in complex systems _ * 12 * ( 2009 ) 131155 .lazarsfeld , p. and merton , r. k. , friendship as a social process : a substantive and methodological analysis , in _ freedom and control in modern society _ , eds .berger , m. , abel , t. , and page , c. h. ( new york : van nostrand , 1954 ) , pp .
an analytical treatment of a simple opinion model with contrarian behavior is presented . the focus is on the stationary dynamics of the model and in particular on the effect of inhomogeneities in the interaction topology on the stationary behavior . we start from a micro - level markov chain description of the model . markov chain aggregation is then used to derive a macro chain for the complete graph as well as a meso - level description for the two - community graph composed of two ( weakly ) coupled sub - communities . in both cases , a detailed understanding of the model behavior is possible using markov chain tools . more importantly , however , this setting provides an analytical scenario to study the discrepancy between the homogeneous mixing case and the model on a slightly more complex topology . we show that memory effects are introduced at the macro level when we aggregate over agent attributes without sensitivity to the microscopic details , and we quantify these effects using concepts from information theory . in this way , the method facilitates the analysis of the relation between microscopic processes and their aggregation to a macroscopic level of description , and it informs about the complexity introduced into a system by heterogeneous interaction relations .

max planck institute for mathematics in the sciences ( germany )
mpi - mis , inselstrasse 22 , d-04103 leipzig , germany
email : sven.banisch.de
the wfcam science archive ( hambly et al . 2007 , collins et al . 2006 ) holds the image and catalogue data products generated by the wide field camera ( wfcam ) on ukirt ( united kingdom infrared telescope ) . the data comprise pipeline processed multi - extension fits files ( multiframes ) containing pixel / image and catalogue data for four detectors at one pointing . the latter contains all detections of stacked multiframes . the data are pipeline processed at the cambridge astronomical survey unit ( casu ) and transferred to edinburgh where the wide field astronomy unit ( wfau ) processes them for ingestion into the database . since the release database contains advanced products which can take a lot of cpu time to produce , it is preferable to carry out the ingest procedure as fast as possible . in addition , the pixel / image and catalogue data need to be ingested completely before further processing is done . another constraint is the uniqueness of multiframes and detections : each multiframe has to have a unique identifier across the whole database and each detection must be unique for a given survey . to ingest the large amount of data into the wsa as the first stage in building a release database , a set of curation use cases ( cus ) has been designed . they are coded in a python / c++ environment , where c++ is used where high performance is needed and python to facilitate an easy to use object - oriented environment . table [ tab : p4.1_tab1 ] shows an overview of the ingest cus and the average volume of data to be processed per observing night :

1 & data transfer from casu to wfau & gbyte
2 & creation of compressed images ( jpegs ) & jpegs
3 & ingest of fits file / image metadata & files
4 & ingest of catalogue detections & detections

in the following we will concentrate on cus 2 to 4 , as the data transfer ( cu1 ) is described in detail in bryant et al . after transfer the data are split into daily chunks and then processed on multiple computers .

the following dependencies need to be observed to avoid duplicate entries or missing data . each fits file gets assigned a unique multiframe i d associating data across the database with its source . also each object in a catalogue gets assigned a unique object i d associating data across the database with its original detection . to update the database with the paths to compressed images ( cu2 ) , the general fits file metadata has to be ingested beforehand ( cu3 ) . and finally catalogue data can only be processed ( cu4 ) if the corresponding image metadata is available ( cu3 ) . the normal procedure of executing the cus for small amounts of data would comprise a run of each cu followed by an ingest as shown in figure [ fig : p4.1_fig1 ] . this might then be followed again by the whole procedure for the next batch of data . as the creation of compressed images ( cu2 ) itself can be done independently , it can be decoupled from image and catalogue data processing . since this procedure is linear , unique ids for pixel files and catalogue detections are applied during cu3 and cu4 , respectively .

to improve cpu usage and speed up the process for a whole cycle the following enhancements were applied :

1 . files get multiframe ids depending on the largest multiframe i d in the database directly after transfer ( cu1 ) . the database is accordingly updated .

2 . the cu4 object i d is turned into a temporary negative , only per - day unique i d .
this way we avoid overlaps with the ( positive ) object ids already existing in the database . the temporary id can be translated into a globally unique id directly after ingest by simply adding the last maximal object id . 3 . the ingest process is completely de - coupled from extraction and processing . the first two steps allow us to process metadata simultaneously on different computers . the last one allows the ingest to be run on available ingestable data while new data is processed . since ingest can take as much time as processing , this can nearly halve the run time of the full cycle . figure [ fig : p4.1_fig2 ] shows the flow chart for the final design . a data daemon has been created that checks the available computers , their load , and the tasks that need to be executed as well as tasks already running . at the moment it suggests the distribution of these tasks to enhance usage of cpus on the pixel servers ; the database operator then takes the final decision on running the tasks . in addition to the data daemon , the ingester daemon can be run , which automatically checks for ingestable data and ingests it into the database . this maximises cpu usage on the database server . since these daemons can be run automatically , we are investigating the best way to run individual tasks remotely from a master computer ; a number of python - related solutions exist and are described in the . running each of the cus 2 to 4 for 30 nights of data on five computers at the same time and ingesting the data as soon as it was available improved the total run times as follows :
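the id bookkeeping described above can be sketched in a few lines of python ; this is an illustrative sketch , not the wsa code , and all function and field names ( assign_temporary_ids , globalise_ids , object_id ) are made up for the example .
....
def assign_temporary_ids(detections):
    """Give each detection of one night a temporary, per-night-unique
    negative object ID, so parallel per-day processing cannot collide with
    the (positive) IDs already in the database."""
    for tmp_id, det in enumerate(detections, start=1):
        det["object_id"] = -tmp_id
    return detections

def globalise_ids(detections, max_object_id_in_db):
    """After ingest, shift every temporary negative ID by the current maximum
    object ID in the database, turning it into a globally unique positive ID."""
    for det in detections:
        if det["object_id"] < 0:
            det["object_id"] = max_object_id_in_db + (-det["object_id"])
    return detections

if __name__ == "__main__":
    dets = [{"ra": 10.1}, {"ra": 10.2}, {"ra": 10.3}]
    assign_temporary_ids(dets)
    globalise_ids(dets, max_object_id_in_db=500)
    print([d["object_id"] for d in dets])   # -> [501, 502, 503]
....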
one of the major challenges in providing large databases like the wfcam science archive ( wsa ) is to minimize ingest times for pixel / image metadata and catalogue data . in this article we describe how the pipeline processed data are ingested into the database as the first stage in building a release database , which will be succeeded by advanced processing ( source merging , seaming , detection quality flagging , etc . ) . to carry out the ingestion procedure as fast as possible we use a mixed python / c++ environment and run the required tasks in a simple parallel modus operandi , where the data are split into daily chunks and then processed on different computers . the created data files can be ingested into the database as soon as they are available . this flexible way of handling the data allows maximal usage of the available cpus , as the comparison with sequential processing shows .
we introduce new mathematical techniques for analyzing complex spatiotemporal nonlinear dynamics and demonstrate their efficacy in problems from two different paradigms in hydrodynamics .our approach employs methods from algebraic topology ; earlier efforts have shown that computing the homology of topological spaces associated to scalar or vector fields generated by complex systems can provide new insights into dynamics .we extend prior work by using a relatively new tool called persistent homology .complex spatiotemporal systems often exhibit complicated pattern evolution .the patterns are given by scalar or vector fields representing the state of the system under study .persistent homology can be viewed as a map that assigns to every field a collection of points in , called a _ persistence diagram_. for a given scalar field , the points in the persistence diagram encode geometric features of the sub - level sets for all values of .a feature encoded by the point appears in for the first time and disappears in .therefore , and are called birth and death coordinates of this feature .the lifespan indicates the prominence of the feature . in particular, features with long lifespans are considered important and features with short lifespans are often associated with noise .thus , the persistence diagram is a highly simplified representation of the field generating the pattern .the space of all persistence diagrams , , can be endowed with a variety of metrics under which is a continuous function .this has several important implications that we exploit in this paper .first , continuity implies that small changes in the field pattern , e.g. bounded errors associated with measurements or numerical approximations , lead to small changes in the persistence diagrams .second , by using different metrics , we can vary our focus of interest between larger and smaller changes in the persistence diagrams .moreover , by comparing different metrics , we can infer if the changes in a pattern affect geometric features with longer or shorter life spans . finally , since, applying the map to a time series of patterns produces a time series in , the distance between the consecutive data points in can be used to quantify the average rate at which the geometry of the patterns is changing .as mentioned above , the dynamics of spatiotemporal systems are characterized by the time - evolution of the patterns corresponding to the fields generated by the system . however , capturing these vector fields , either experimentally or numerically , results in multi - scale high dimensional data sets . in order to efficiently analyze these data sets ,a dimension reduction must be performed .we use persistent homology to perform nonlinear dimension reduction from a time series of patterns to a time series of persistence diagrams .we show that this reduction can cope with redundancies introduced by symmetries ( both discrete and continuous ) present in the system .in particular , this approach directly quotients out symmetries and , thereby , permits easy identification of solutions that lie on a group orbit .separately , we also apply persistent homology to extract information about dynamical structures in the reduced data . characterizing dynamics in the space of persistence diagramscan not be done using conventional methods ( e.g. , time delay embeddings ) , since choosing a coordinate system in is currently an open problem . 
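as a concrete illustration of birth , death and lifespan , the following sketch computes 0 - dimensional sub - level set persistence for a 1 - d scalar field with a small union - find ; it is a generic textbook construction for the simplest setting , not the cubical - complex computations used for the vorticity and temperature fields in this paper .
....
import math

def sublevel_persistence_0d(values):
    """0-dimensional persistence of the sub-level sets of a 1-d scalar field.
    Each local minimum gives birth to a connected component of the sub-level
    set; a component dies when it merges with a component born earlier (the
    'elder rule'). Returns (birth, death) pairs; the component of the global
    minimum never dies (death = +inf). Zero-persistence pairs are skipped."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    added = [False] * n
    birth = [None] * n            # birth value of the component rooted at i
    diagram = []
    for i in sorted(range(n), key=lambda k: values[k]):
        added[i] = True
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and added[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < values[i]:      # skip zero-persistence pairs
                    diagram.append((birth[young], values[i]))
                parent[young] = old
    diagram.append((min(values), math.inf))       # the surviving component
    return diagram

if __name__ == "__main__":
    field = [0.0, 2.0, 1.0, 3.0, 0.5, 2.5]
    # two short-lived components (born at 1.0 and 0.5) plus the global minimum
    print(sublevel_persistence_0d(field))
....
the long - lifespan point corresponds to the deepest minimum , while the short - lived points are the kind of features that , at small scales , are usually attributed to noise .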
however , since is a metric space , the geometry of the point cloud , generated by the time series of the reduced data , is encoded by a scalar field which assigns to each point in its distance to .we show how persistent homology may be applied to describe dynamics by characterizing the geometry of .an outline of the paper is as follows . in section [ sec : systems ] we present a brief overview of the two fluid flows examined in this paper : ( 1 ) kolmogorov flow and ( 2 ) rayleigh - bnard convection .we note here , for emphasis , that while persistent homology can be applied to vector fields , it will be sufficient for this paper to focus on scalar fields drawn from these systems ( specifically , one component of the vorticity field for kolmogorov flow , and the temperature field for rayleigh - bnard convection ) . in section [ sec : ph * ]we discuss key issues related to the application of persistent homology . by now , the mathematical theory of persistent homology is well developed .therefore , our main emphasis is on the computational aspect of passing from the data to the persistence diagrams .section [ sec : ph ] describes the correspondence between the geometric features of a scalar field and the points in its corresponding persistence diagram .section [ sec : spaceofpd ] discusses the structure of the space and the properties of the associated metrics . in sections [ sec : distances ] and [ sec : analyzingpointcloud ] we discuss how these metrics can be used to analyze dynamics .first , we interpret distance between the persistence diagrams representing the consecutive data points in the time series as a rate at which geometry of the corresponding scalar fields is changing .second , we motivate and explain the procedure for extracting the geometric structure of the point cloud in .we close the paper by applying the developed techniques to the following problems . in section [ sec : fixedpoints ] , we identify distinct classes of symmetry - related equilibria for kolomogorov flow . in section [ sec : periodicorbit ] , we show that a relative periodic orbit for kolmogorov flow collapses to a closed loop in .finally , in section [ sec : rbc ] , we deal with identifying recurrent dynamics that occur on different time scales in our study of rayleigh - bnard convection flow .for the study of turbulence in two dimensions , kolmogorov proposed a model flow where the two - dimensional ( 2d ) velocity field is given by ( with and ) , where is the pressure field , is the kinematic viscosity , is fluid density , and is the forcing that drives the flow .laboratory experiments in electromagnetically - driven shallow layers of electrolyte can exhibit flow dynamics that are well - described by equations ( [ eq : q2dns ] ) with appropriate choices of and to capture three - dimensional effects , which are commonly present in experiments . in this paper, we refer to all models described by equations ( [ eq : q2dns ] ) ( including experimentally - realistic versions ) as kolmogorov flows .it is convenient to use the vorticity - stream function formulation to study kolmogorov flow analytically and numerically .equations ( [ eq : q2dns ] ) , written in terms of the z - component of the vorticity field , a scalar field , take the form for the current study , we choose , m/s , s , kg / m , and m. 
we express the strength of the forcing in terms of a non - dimensional parameter , the reynolds number .equation ( [ eq : q2dvor_nd ] ) is solved numerically by using a pseudo - spectral method , assuming periodic boundary conditions in both and directions , i.e. , , where m and m are the dimensions of the domain in the and directions , respectively .it is important to note that equation ( [ eq : q2dvor_nd ] ) , with periodic boundary conditions , is invariant under any combination of three distinct coordinate transformations : ( 1 ) a translation along : , ] is finite dimensional for every , and there are only finitely - many thresholds at which the vector spaces change ( for a precise definition see ) . for our purposes , it suffices to remark that if is a piecewise - constant function on a finite complex , then is tame .in particular , the numerically - computed vorticity field and -bit temperature field are tame functions . for the remainder of this paper, we use to denote the set of persistence diagrams corresponding to and to denote the set of all persistence diagrams .let denote the set of tame functions equipped with the norm .a fundamental result is that , using the wasserstein or bottleneck metrics , is a lipschitz - continuous function .in particular , if , then these results on lipschitz continuity have two important implications for this work , both stemming from the fact that our analysis is based on numerical simulations .assume for the moment that denotes the exact solution at a given time to either kolmogorov flow or the boussinesq equations .ideally , we want to understand .our computations of persistent homology are based on , a cubical complex defined in terms of the numerically - reported values , where represents the associated piecewise - constant function .if the numerical approximation satisfies , then by we have a bound on the bottleneck distance between the actual persistence diagram and the computed persistence diagram , so that .figure [ fig : bottleneck ] provides a schematic justification of this claim .as indicated in the introduction , persistent homology is invariant under certain continuous deformations of the domain . to be more precise , if is a homeomorphism and , then .of particular relevance to this paper is a function which arises as a symmetric action on the domain . in this paper, we work with piecewise - constant numerical approximations of the actual functions of interest , and we can not assume that this equality holds .however , if is given and , where is as above , and we have an bound on the difference between the approximation and the true function , then by , in summary , under the assumption of bounded noise or errors from numerical simulations ( or experimental data ) , we have explicit control of the errors of the distances in .the goal of this section is twofold : one , to provide intuition about the information contained in the different metrics , and two , to suggest how viewing a time series in can provide insight into the underlying dynamics .we begin by remarking that the bottleneck distance measures only the single largest difference between the persistence diagrams and ignores the rest .the wasserstein distance includes all differences between the diagrams .thus , it is always true that the sensitivity of the wasserstein metric to small differences ( possibly due to noise ) can be modulated by the choice of the value of , i.e. if , then one expects to be less sensitive to small changes than . 
in this paper , we restrict ourselves to the bottleneck distance and the wasserstein distances for . the most obvious use of these metrics is to identify or distinguish patterns . as an example , we consider patterns along an orbit from the kolmogorov flow . as indicated in section [ intro_kolmogorov ] , this particular trajectory arises from a periodic orbit with a slow drift along an orbit of continuous symmetry . in particular , we consider the three time points indicated in figure [ fig : projections](a ) : two that appear to differ by the continuous symmetry , and a third that lies on the ` opposite ' side of the periodic orbit . plots of the associated vorticity fields at these points ( see figure [ fig:3states ] ) agree with this characterization of the time points . we want to identify this information through the associated persistence diagrams , , and , shown in figure [ fig:3statesdiagram ] . indeed , the plots of and are difficult to distinguish , but is clearly distinct . to quantify this difference , we make use of the distances between the persistence diagrams using , , and . these values are recorded in table [ table : threenetworksfulldistance ] . not surprisingly , the distances between and are much smaller than the distances between and . we want to use these distances , as opposed to the detailed information in the persistence diagrams , to obtain rough information about how the pattern at figure [ fig:3states](a ) differs from the pattern at figure [ fig:3states](c ) . ( caption of figure [ fig:3statesdiagram ] : persistence diagrams and for the vorticity fields shown in figure [ fig:3states ] . the points in and are almost identical because the corresponding vorticity fields are similar . the points in are more spread out and do not shadow the points in so well . the same is true for the persistence diagrams which are not shown . so , for , as indicated by table [ table : threenetworksfulldistance ] . ) ( caption of table [ table : threenetworksfulldistance ] : distances between selected persistence diagrams ( rounded to decimal places ) shown in figure [ fig:3statesdiagram ] , corresponding to the vorticity fields given by figure [ fig:3states ] . )
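the wasserstein distances used above can be computed by reducing the diagram matching to a linear assignment problem ; the sketch below is a generic implementation of that reduction ( l - infinity ground metric , finite death coordinates only ) and is not the authors' code . the bottleneck distance , being a min - max rather than a min - sum matching , needs a different algorithm and is not covered here .
....
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_distance(D1, D2, q=2):
    """q-Wasserstein distance between two finite persistence diagrams
    (sequences of (birth, death) pairs with finite deaths), using the
    L-infinity ground metric. Each diagram is padded with 'diagonal'
    dummies; matching a point to a dummy costs its distance to the
    diagonal, (death - birth)/2, and dummy-dummy matches are free."""
    D1, D2 = np.asarray(D1, float), np.asarray(D2, float)
    n, m = len(D1), len(D2)
    diag1 = (D1[:, 1] - D1[:, 0]) / 2.0   # distances of D1 points to the diagonal
    diag2 = (D2[:, 1] - D2[:, 0]) / 2.0
    cost = np.zeros((n + m, n + m))
    # real-to-real costs (L-infinity between diagram points)
    cost[:n, :m] = np.max(np.abs(D1[:, None, :] - D2[None, :, :]), axis=2)
    # a real point matched to a dummy pays its distance to the diagonal
    cost[:n, m:] = diag1[:, None]
    cost[n:, :m] = diag2[None, :]
    cost = cost ** q                      # dummy-dummy entries stay zero
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() ** (1.0 / q)

if __name__ == "__main__":
    D1 = [(0.0, 4.0), (1.0, 2.0)]
    D2 = [(0.1, 4.2), (1.5, 1.6)]
    print(round(wasserstein_distance(D1, D2, q=1), 4))
    print(round(wasserstein_distance(D1, D2, q=2), 4))
....
larger q concentrates the cost on the largest differences between the diagrams , which is the behaviour exploited in the comparison of metrics above .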
however , we note that the rate of change , especially using the bottleneck distance , is typically small except for short periods of time at which the speed spikes .the distance matrix has a distinct checkerboard pattern , with the edges corresponding to the spikes , again indicating a rapid and large change in location in the space of persistence diagrams .the maximum bottleneck distance between the consecutive sampling points is ( figure [ fig : consecdist](b ) caption ) , while the diameter of the point cloud is only ( figure [ fig : distancematrixperiodicorbit](b ) ) .therefore , we expect that significant portions of the trajectory are missing .indeed , figure [ fig : persistencediagramsrbc500](a ) shows that there are several persistence points in with a ( finite ) death coordinate larger than ten .thus , at a length scale of ( which is forty times larger than the noise threshold ) , the sample of the trajectory is broken into several pieces .the largest gap between different pieces of the trajectory is , as indicated by the persistence point with coordinates .this means that the sampling rate is far from adequate .the diagram in figure [ fig : persistencediagramsrbc500](b ) contains a single dominant point at with life span .however , unlike in our analysis of the kolmogorov flow in the previous section , we can not argue that this point corresponds to a single dominant loop along which the data is organized because of the gaps in the sampling of the orbit . as mentioned in section [ sec::largedata ], the missing parts of the orbit could introduce loops of similar size corresponding to secondary structures .these structures might occur due to the fact that the loop corresponding to the underlying almost - periodic dynamics might be twisted , pinched , or bent in . in order to obtain information about secondary structures, we require a faster sampling rate . 
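the speed plot and distance matrix diagnostics mentioned above amount to very little code once a metric on the reduced data is available ; the sketch below uses a plain euclidean metric on a toy planar trajectory purely for illustration , and the function names are made up .
....
import numpy as np

def consecutive_speeds(series, dist):
    """Distance between consecutive elements of a time series; with a metric
    on persistence diagrams this is the rate at which pattern geometry changes."""
    return np.array([dist(series[i], series[i + 1])
                     for i in range(len(series) - 1)])

def distance_matrix(series, dist):
    """Full pairwise distance matrix; recurrent (almost-periodic) dynamics
    shows up as a checkerboard / banded structure."""
    n = len(series)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dist(series[i], series[j])
    return D

if __name__ == "__main__":
    # toy stand-in for a reduced trajectory: a loop traversed slowly, then quickly
    t = np.concatenate([np.linspace(0, 2 * np.pi, 200),
                        np.linspace(0, 2 * np.pi, 20)])
    cloud = np.column_stack([np.cos(t), np.sin(t)])
    euclid = lambda a, b: np.linalg.norm(a - b)
    print(consecutive_speeds(cloud, euclid).max())   # the fast segment spikes the speed
    print(distance_matrix(cloud, euclid).max())      # diameter of the point cloud
....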
we increased the sampling rate considerably and collected approximately equally - spaced snapshots of the temperature field over four - and - a - half periods and compute the associated persistence diagrams , producing a point cloud .the maximal distances between the consecutive frames for the increased sampling rate drop to , and .the new value of is much closer to our estimate of the numerical error and it is more than 24 times smaller than the diameter of the point cloud generated from the slower sampling .since the point cloud could only increase in diameter through increasing the sample rate , we consider this sampling rate to be satisfactory .our next step is to use the ideas introduced in section [ sec::largedata ] to reduce the size of the sample and to complete our analysis .first we construct a -dense , -sparse subsample of the point cloud .the smallest value of for which we were able to compute the persistence diagrams , using gb of memory , is .this value is only slightly larger than the largest distance between the consecutive states and , since the diameter of the subsampled point cloud is , the relationship between the length scale of the smallest detectable feature and the length scale of the diameter of the point cloud is still sufficient to resolve the geometry of the dynamics .the resulting persistence diagrams are shown in figure [ fig : persistencediagramsrbc ] .as shown by , figure [ fig : persistencediagramsrbc](a ) , the point cloud merges to a single connected component at .this indicates that the sample of the trajectory is not broken into different pieces separated from each other .since the maximum consecutive distance between any two points in is , the loop along which the data is organized should be present for .however , after subsampling , it is possible that the loop will not be born until . looking at the diagram in figure [ fig : persistencediagramsrbc](b ) , we see that it contains a dominant point at , and so the loop was indeed born before .this is the loop along which the point cloud is organized .now , there is another point , , with life span .this point corresponds to a secondary structure of the orbit .indeed , it can be seen from the distance matrix for the -sparse , -dense subsample ( not shown for brevity ) that the part of the orbit corresponding to the fast dynamics ( missing for the slow sampling rate ) revisits very similar states before continuing along the main loop .however , the development of more sensitive tools is required to fully understand these secondary features .we now turn our attention to the differences between the persistence diagrams of the original point cloud and its subsample .theorem [ thm::subsample ] implies that , and so there exists a bijection between the points in and such that the distance between matched points is less than . 
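before continuing with the comparison of the full and subsampled diagrams , a simple greedy pass gives a flavour of the subsampling step used above : keep a point only if it is at least delta away from every point kept so far , which yields a subsample that is delta - sparse and delta - dense with respect to the original cloud . this single - scale sketch is only a stand - in for the ( dense , sparse ) construction referenced in the text and works for any metric supplied as a callable .
....
import numpy as np

def greedy_subsample(points, delta, metric=None):
    """Greedy construction of a delta-sparse (no two kept points closer than
    delta) and delta-dense (every original point within delta of a kept point)
    subsample of a point cloud in a metric space."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)
    kept = []
    for p in points:
        if all(metric(p, q) >= delta for q in kept):
            kept.append(p)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # noisy samples of a loop, standing in for a trajectory of reduced data
    t = rng.uniform(0, 2 * np.pi, 500)
    cloud = np.column_stack([np.cos(t), np.sin(t)]) + 0.02 * rng.normal(size=(500, 2))
    sub = greedy_subsample(cloud, delta=0.3)
    print(len(cloud), "->", len(sub))
....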
according to remark [ rem::subsample ] , for the dominant point , there is exactly one corresponding point in .this point is the unique point in that lies inside of the shaded box touching the point , see figure [ fig : persistencediagramsrbc](b ) .the same is true for the other dominant point .moreover , there are no points in outside of the shaded regions .points in that do not correspond to the off - diagonal points in can appear only far away from the diagonal .we have shown how persistent homology can be used to identify equilibria and study periodic dynamics , and how this method is particularly natural when solutions must be identified that lie on a group orbit .we study two regimes in kolmogorov flow : chaotic dynamics due to the appearance of unstable fixed points , and a periodic flow that exhibits drift in a direction of continuous symmetry .we also study an almost - periodic orbit from rayleigh - bnard convection .we solve for the unstable equilibria in the first case and sample the periodic orbits in the other two cases , and use persistent homology to project these solutions to the space of persistence diagrams .we provide theoretical results that show this projection is stable with respect to numerical errors and discuss how the projection naturally identifies symmetry - related solutions .we give three different metrics on the space of persistence diagrams that can be used to study pattern evolution on large versus small spatial scales , and how these metrics can be used to estimate numerical error in the space of persistence diagrams .we develop an intuition for studying dynamics in the space of persistence diagrams by looking at point clouds in two - dimensional euclidean space , and discuss methods for determining if a continuous trajectory has been sampled densely enough to resolve the underlying dynamics , as well as mathematical methods used to address issues associated with computing on large sample sets .we demonstrate the efficacy of these methods on kolmogorov flow and rayleigh - bnard convection , comparing our methods to traditional fourier methods where appropriate .our results show that the geometry of the dynamics are recovered in each case . for rayleigh - bnard convection in particular, we show that the dynamics are recovered even after truncating the simulated data to an 8-bit temperature field , and so this approach is suitable for studying data collected experimentally , rather than numerically .also for this flow , we recover more subtle aspects of the geometry in the space of persistence diagrams . in summary, we have shown that this method is both robust to noise and sensitive to more complicated dynamics , and that it is appropriate for studying dynamics on datasets obtained experimentally .our ongoing research will further refine these tools .the work of mk , rl , and km has been partially supported by nsf grants nsf - dms-0835621 , 0915019 , 1125174 , 1248071 , and contracts from afosr and darpa .the work of jrf , bs and mfs has been partially supported by nsf grants dms-1125302 , cmmi-1234436 .topologically , a torus is a closed surface defined as the product of two circles .it can be also described as a quotient of the cartesian plane under the identifications .the homology groups of are given by intuitively , this means that has a single connected component ( ) , two independent loops ( ) , and a single cavity ( ) . 
for a more detailed treatment of the following material , see see hatcher , ch 0 for a reference to homotopies of maps , and hatcher ch 1 for a reference on identifying independent loops in a space , or the notion of the fundamental group . in this section ,we explain the notion of independent loops of subsets of a torus . by a loop ,we mean a continuous path \rightarrow t ] such that and for all ] .define , which runs the loop backwards in time , and define , which traverses the loop times . finally , given two loops and , we can form their sum by taking a path \rightarrow t$ ] such that and and form the loop \\ \delta(4 t - 1 ) & : t \in [ 1/4,1/2]\\ \alpha_2(4t-2 ) & : t \in [ 1/2 , 3/4]\\ -\delta(4t-3 ) & : t \in [ 3/4 , 1 ] .\end{array } \right.\ ] ] algebraically , this can be written as .we say that a loop is independent of a collection of loops if there does not exist a homotopy of loops from to a linear combination of the loops .figure [ fig : holeintorus ] shows eight subsets of a torus .note that for , and the sets can be considered as sub - level sets of some scalar function .we will now examine each set and identify the independent loops in each .the set can not be contracted to a point .it forms a band that wraps around the torus .there are many different loops ( wrapping once around the torus from left to right in the picture ) inside of this band .however , we can choose a single loop that represents all of them ; every other loop can be either continuously deformed to a linear combination of the loop , or contracted to a point .similarly , the set contains two independent loops .the set is formed by linking the horizontal bands present in . the loops and are still independent in ( one can not be deformed to the other inside ) .it might seem that there is a new independent loop , .however , this is not case because can be deformed ( inside of ) to the union of the black lines corresponding to and . after this deformation, the loop traverses twice : the right part of the deformed loop traverses from the top to the bottom , and the left part in the opposite direction .algebraically , the deformed loop can be expressed as .this shows that can be deformed to a linear combination of the loops and .thus , is not a new independent loop .the set , obtained from by adding another link between the horizontal bands , contains a new independent loop , , consisting of the edges and ( ) .this means that the loop can not be deformed inside of to a linear combination of the loops and . again , the loop is not independent from , , and because it can be perturbed to , which is a linear combination of the loops , and .therefore , there are three independent loops in this case . adding another link between the horizontal bands creates another independent loop .hence , the number of independent loops for two bands with links is .alternatively , we can view the set as a single band with one puncture , and as a single band with two punctures .the number of independent loops is , where is the number of punctures , and the extra loop is generated by the band . due to the identification , the set contains another link between the horizontal bands .this band creates another puncture . 
in figure[ fig : holeintorus](f ) , this puncture seems to have four distinct components ( white blocks in the corners ) .however , under the boundary identification , they correspond to a single component .therefore , there are four independent loops .the independent loops start disappearing as the punctures are filled in .the set contains a single puncture , and according to the previous argument , there are two independent loops , and . in this case , the loop can be deformed to a point inside of the set .finally , the set contains two independent loops corresponding to the two copies of that generate the torus .k. krishan , h. kurtuldu , m. f. schatz , m. gameiro , k. mischaikow , and s. madruga , `` homology and symmetry breaking in rayleigh - bnard convection : experiments and simulations , '' _ phys . fluids _ , vol . 19 , nov .2007 .h. kurtuldu , k. mischaikow , and m. f. schatz , `` extensive scaling from computational homology and karhunen - love decomposition analysis of rayleigh - bnard convection experiments , '' _ phys ._ , vol .107 , no . 3 , 2011 . m. paul , k .- h . chiam , m. cross , p. fischer , and h. greenside , `` pattern formation and dynamics in rayleigh bnard convection : numerical simulations of experimentally realistic geometries , '' _ physica d _ , vol .184 , no . 1 ,pp . 114126 , 2003 .
we use persistent homology to build a quantitative understanding of large complex systems that are driven far - from - equilibrium ; in particular , we analyze image time series of flow field patterns from numerical simulations of two important problems in fluid dynamics : kolmogorov flow and rayleigh - bénard convection . for each image we compute a persistence diagram to yield a reduced description of the flow field ; by applying different metrics to the space of persistence diagrams , we relate characteristic features in persistence diagrams to the geometry of the corresponding flow patterns . we also examine the dynamics of the flow patterns by a second application of persistent homology to the time series of persistence diagrams . we demonstrate that persistent homology provides an effective method both for quotienting out symmetries in families of solutions and for identifying multiscale recurrent dynamics . our approach is quite general and is anticipated to be applicable to a broad range of open problems exhibiting complex spatio - temporal behavior .
many mathematical models of real - life processes pose challenges during numerical computations , due to their large size and complexity .model order reduction ( mor ) techniques are methods that reduce the computational complexity of numerical simulations , an overview of mor methods is provided in .mor techniques such as balanced truncation ( bt ) and singular perturbation approximation ( spa ) are methods which have been introduced in and , respectively , for linear deterministic systems here is asymptotically stable , , and , , are state , output and input of the system , respectively . from the gramians and which solve dual lyapunov equations a balancing transformation is found , which is used to project the state space of size to a much smaller dimensional state space ( see , e.g. ) .recently , the theory for bt and spa has been extended to stochastic linear systems of the form where , and as above , and and ( ) are uncorrelated scalar square integrable lvy processes with mean zero ( often and the special case of wiener processes are considered , see , for example , ) . in this casebt and spa require the solution of more general lyapunov equations of the form where ] ) , then one needs to solve the sde in ( [ linsysafterdisintr2 ] ) a large number of times . for a state space of high dimensionthis is computationally expensive .reduction of the state space dimension decreases the computational complexity when sampling the solution to ( [ linsysafterdisintr2 ] ) , as the sde can then be solved in much smaller dimensions . hence the computational costs are reduced dramatically .the linear system ( [ linsysafterdisintr2 ] ) is a problem where the control is noise . in this casethe standard theory for balancing related mor applied to a deterministic system no longer applies .balanced truncation has been applied to linear systems with white noise before .the discrete time setting was discussed in . for the continuous timesetting , dissipative hamiltonian systems with wiener noise were treated in , but no error bounds were provided . in this paperwe consider both bt and spa model order reduction .as far as we are aware , no theory and in particular error bounds for balancing related mor have been developed for continuous time sdes with lvy noise . using theory for linear stochastic differential equations with additive lvy noise we provide a stochastic concept of reachability .this concept motivates a new formulation of the reachability gramian .we prove bounds for the error between the full and reduced system which provide criteria for truncating , e.g. criteria for a suitable size of the reduced system .we analyse both bt and spa and apply the theory directly to an application arising from a second order damped wave equation .we now consider a particular example which explains why the above setting is of practical interest .[ [ motivational - example ] ] motivational example + + + + + + + + + + + + + + + + + + + + in the lateral time - dependent displacement of an electricity cable impacted by wind was modeled by the following one - dimensional symbolic second order spde with lvy noise : for ] and , with boundary and initial conditions for small , the output equation is approximately the position of the middle of the cable . 
in , it is shown that transforming this spde in into a first order spde and then discretising it in space , leads to a system of the form ( [ linsysafterdisintr ] ) where .one drawback of the approach above is , that , when the electricity cable is in steady state , the wind has no impact . a more realistic scenario , which models the wind as some form of stochastic input , is the following symbolic equation for ] and , boundary and initial conditions as in ( [ eq : bcic ] ) , and the components of a square integrable mean zero lvy process that takes values in .in this paper , we consider a framework which covers this model .moreover we modify the output in ( [ introbspout ] ) and let so that both the position and velocity of the middle of the string are observed . transformation and discretisation ofthis spde leads to a system of the form ( [ linsysafterdisintr2 ] ) where is an asymptotically stable matrix , i.e. .this paper is set up as follows .section [ sec : balancing ] provides the theoretical tools for balancing linear sdes with additive lvy noise .we explain the theoretical concepts of reachability and observability in this setting and show how this motivates mor using bt and spa .moreover we provide theoretical error bounds for both methods . in section[ sec : wave ] we show how a wave equation driven by lvy noise can be transformed into a first order equation and then reduced to a system of the form ( [ linsysafterdisintr2 ] ) by using a spectral galerkin method . numerical results which support our theory are provided in section [ sec : numerics ] .in balancing related mor was considered for deterministic systems of the form where was assumed to be asymptotically stable , i.e. , , and ) ] , , if it is contained in an open set with \right\}=0 , \end{aligned}\ ] ] else is reachable .the system is called completely reachable if \right\}>0 \end{aligned}\ ] ] for every open set .we refer to , where weak controllability was analyzed for equations with wiener noise .weak controllability turns out to be similar to condition ( [ completereach ] ) . to characterise the degree of reachability of a state ,we introduce finite time reachability gramians ] is the covariance matrix of at time .we replace by to shorten the notation in the proof . using ito s formula in corollary [ iotprodformelmatpro ], we obtain the following for , : \right)_{i , j=1 , \ldots , n},\end{aligned}\ ] ] where is the -th unit vector and we used . inserting the stochastic differential of yields the ito integrals have mean zero , we have =\int_0^t \mathbb e\left[x(s ) x^t(s)\right ] a^t ds+\int_0^t a \mathbb e\left[x(s ) x^t(s)\right ] ds+\left(\mathbb e[e_i^t x , e_j^t x]_t\right)_{i , j=1 , \ldots , n},\end{aligned}\ ] ] where we replaced by .this does not impact the integrals since a cdlg process has at most countably many jumps on a finite time interval ( see ( * ? ? ?* theorem 2.7.1 ) ) .applying corollary [ iotprodformelmatpro ] again , the stochastic differential of is given by : \right)_{i , j=1 , \ldots , n},\end{aligned}\ ] ] taking the expected value , we have ^t=\mathbb e\left([e_i^t bm , e_j^tbm]_t\right)_{i , j=1 , \ldots , n} ] . since the component has the same jumps and the same martingale part as , we know by ( [ decomqucov ] ) that =[e_i^t x , e_j^t x]_t ] , a.s . if and only if meaning that is orthogonal to . 
since is symmetric positive semidefinite , we have and hence \right\}=1,\end{aligned}\ ] ] we observe from ( [ impreachset ] ) that all the states that are not in are not reachable and thus they do not contribute to the system dynamics . as a first step to reduce the system dimension it is necessary to remove all the states that are not in .we will see in the next proposition , that the finite reachability gramians can be replaced by the infinite gramian since their images coincide .this ( infinite ) gramian exists due to the asymptotic stability of .it is easier to work with since it can be computed as the unique solution to satisfies ( [ lyapeqreach ] ) since satisfies ( [ matrixdiffgl ] ) and if due the asymptotic stability of . for the case this gramian was discussed in ( * ? ? ?* section 4.3 ) in the context of balancing for deterministic systems ( [ detsysant ] ) .[ refsameimbla ] the images of the finite reachability gramians , , and the infinite reachability gramian are the same , that is , since and are symmetric positive semidefinite , it is enough to show that their kernels are equal .let .this implies , since is increasing .hence . on the other hand , if , we have consequently , for all ] , where , we obtain the transformed partitioned system }&={\left[\begin{array}{cc}}{a}_{11}&{a}_{12}\\ { a}_{21}&{a}_{22}{\end{array}\right]}{\left[\begin{array}{cc}}{x}_1(t)\\ { x}_2(t){\end{array}\right]}dt+ { \left[\begin{array}{c}}{b}_1\\ { b}_2{\end{array}\right]}dm(t),\label{balrelpart}\\ y(t)&= { \left[\begin{array}{cc}}{c}_1 & { c}_2{\end{array}\right]}{\left[\begin{array}{cc}}{x}_1(t)\\ { x}_2(t){\end{array}\right]}.\label{balrelpartout}\end{aligned}\ ] ] in this system , the difficult to reach and observe states are represented by , which correspond to the smallest hsvs , but of course has to be chosen such that the neglected hsvs are small ( ) .we discuss two methods ( bt and spa ) to neglect leading to a reduced system of the form where , and ( ) .[ [ balanced - truncation ] ] balanced truncation + + + + + + + + + + + + + + + + + + + for bt the second row in ( [ balrelpart ] ) is truncated and the remaining components in the first row and in ( [ balrelpartout ] ) are set to zero .this leads to reduced coefficients which is similar to the deterministic case .the next lemma states that bt preserves asymptotic stability , which is known from the deterministic case , see ( * ? ? ?* theorem 7.9 ) .[ btpresstab ] let the gramians and be positive definite and , then for , i.e. and are asymptotically stable . the above lemma is vital for the error bound analysis in section [ btandspa ] .[ [ singular - perturbation - approximation ] ] singular perturbation approximation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + instead of setting , one assumes .this idea originates from the deterministic case , where it can be observed that are the fast variables meaning that they are in a steady state after a short time . 
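for orientation , the following sketch implements plain square - root balanced truncation with the deterministic lyapunov equations , i.e. it assumes an identity noise covariance so that the right - hand side is b b^t ; for a general square - integrable lévy process that term is replaced by the b q_m b^t expression used above . it is a generic illustration of the projection step , not the authors' implementation , and the spa variant follows the same balancing but builds a different reduced realisation .
....
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def psd_factor(P):
    """Return F with P = F F^T for a symmetric positive semidefinite P."""
    w, V = np.linalg.eigh(P)
    return V * np.sqrt(np.clip(w, 0.0, None))

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation for dx = A x dt + B dM, y = C x, with
    gramians solving A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # reachability gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian
    S, R = psd_factor(P), psd_factor(Q)             # P = S S^T, Q = R R^T
    U, hsv, Vt = np.linalg.svd(R.T @ S)             # Hankel singular values
    scale = np.diag(hsv[:r] ** -0.5)
    W = R @ U[:, :r] @ scale                        # left projection matrix
    V = S @ Vt[:r, :].T @ scale                     # right projection matrix
    return W.T @ A @ V, W.T @ B, C @ V, hsv

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, r = 20, 4
    A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))  # crude asymptotically stable drift
    B = rng.normal(size=(n, 2))
    C = rng.normal(size=(2, n))
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r)
    print("reduced drift stable:", np.linalg.eigvals(Ar).real.max() < 0)
    print("neglected Hankel singular values:", np.round(hsv[r:], 6))
....
the printed neglected hankel singular values are exactly the quantities that enter the error bounds discussed below , which is why they are the natural criterion for choosing the reduced dimension .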
in our framework , the classical derivative of does not exist but we proceed with setting in ( [ balrelpart ] ) .this yields an algebraic constraint }{\left[\begin{array}{cc}}{x}_1(s)\\ { x}_2(s){\end{array}\right]}ds+{b}_2 m(t)=:r(t),\end{aligned}\ ] ] where we assumed zero initial conditions .applying ito s product formula ( [ profriot ] ) to every summand of ( is the component of ) yields .\end{aligned}\ ] ] inserting the differential of and exploiting that the expectation of the ito integrals is zero , gives =\mathbb e \left[\int_0^ta^t(s ) r(s)ds\right]+\mathbb e\left[\int_0^t r^t(s ) a(s ) ds\right ] + \mathbb e \sum_{i=1}^{n - r}[r_i , r_i]_t , \end{aligned}\ ] ] where we set .setting gives ] , and . [ bterrorbound ]let be the output of the reduced order system obtained by bt , then under the assumptions of lemma [ btpresstab ] , we have}\mathbb e \left\|y(t)- y_{bt}(t)\right\|_{\mathbb r^p}\leq \left({\operatorname{tr}}(\sigma_2 ( b_2 \mathcal{q}_m b_2^t+2 p_{g , 2 } a_{21}^t))\right)^{\frac{1}{2}},\end{aligned}\ ] ] where are the last rows of with being the balancing transformation . evaluating the left and right upper block of ( [ partobeq ] ) yields from ( [ computeerrorbound ] ) the error bound has the form since . using the balancing transformation and the partition of in ( [ balancedrels ] ), we obtain .now , the left upper block of ( [ partreacheq ] ) is such that . using the partitions of and } ] , then , since are invertible by lemma [ btpresstab ] , its inverse is given in block form },\end{aligned}\ ] ] where .if we multiply ( [ partobeq ] ) with from the left hand side and select the left and right upper block of this equation , we obtain where and thus furthermore , multiplying ( [ partobeq ] ) with from the left and with from the right , the resulting left upper block of the equation is and thus we define which is the error bound for spa . from the proof of theorem [ bterrorbound ] we know that the following holds by ( [ bzwzw ] ) and the definition of the reachability equation of the rom , we have this leads to we multiply ( [ gemischtegram ] ) with the balancing transformation from the left ( here ) and use the partitions of , from ( [ balancedrels ] ) and the partition of } ] , ) ] and the output operator with ) ] . in this case and for . to approximate the -valued process in ( [ firstordertransf ] ), we construct a sequence of finite dimensional adapted cdlg processes with values in , defined by where we set * for all , * for , * . for the mild solution to ( [ approxsde ] ) ,let be a -semigroup on given by for all .it is generated by such that the mild solution of equation ( [ approxsde ] ) is since is bounded , the -semigroup on is represented by , .we formulate the main result of this section , which uses ideas from and is proved in appendix [ sec : app2 ] .[ th : mildsol ] the mild solution of equation ( [ approxsde ] ) approximates the mild solution of equation ( [ firstordertransf ] ) , i.e. for and .this implies the convergence of the corresponding outputs . 
in the following , we make use of the property that the mild and the strong solution of ( [ approxsde ] )coincide , since we are in finite dimensions .we write the output of the galerkin system as an expression depending on the fourier coefficients of the galerkin solution .the coefficients of are for , where is the -th unit vector in .we set and obtain .the components of satisfy using the fourier series representation of , we obtain hence , the vector of fourier coefficients is given by where with ( ) , and the eigenvalues of , and for .we will often make use of the compact form of the sde in ( [ galerkwavenowspec ] ) which is where and ] with & 0 \end{matrix}\right)^t,\\ \mathcal c h_{2 \ell}&=\left(\begin{matrix } 0 & \frac{1}{\sqrt{2\pi}\ell\epsilon}\left [ \cos \left(\ell \left(\frac{\pi}{2 } -\epsilon \right)\right)-\cos\left(\ell\left(\frac { \pi } { 2 } + \epsilon \right)\right)\right]\end{matrix}\right)^t,\end{aligned}\ ] ] where we assume to be even and .in figure [ fig : wavetime ] we plot the numerical solution to the stochastic damped wave equation for ] where we set , and ( e.g. stochastic inputs ) .the weighting functions for the two inputs are and .the noise processes are and is a compound poisson process , where is a poisson process with parameter equal to , are independent uniformly distributed jumps and is a standard wiener process . and are independent .the plot in figure [ fig : wavetime ] shows a particular realisation of the solution to ( [ inrobspeq ] ) at specific times .we see that the string moves up and down as expected due to the nonzero ( stochastic ) input .we observe that the third snapshot is taken after a jump occured in stochastic process . the corresponding output ,namely both the position and the velocity in the middle of the string , is shown in figure [ fig : position_velocity ] . in the plot for the velocitythe noise generated by the lvy process can be seen .the trajectory of the velocity is impacted by lvy noise with jumps , where the velocity ( e.g. the impact by wind ) is randomly increased or reduced . )-([introbspoutnew ] ) in the phase plane . ]the trajectory for the position of the cable in figure [ fig : position_velocity ] is smoother as it is the integral of the velocity .finally , figure [ fig : paths ] shows the velocity versus the position of the string , for the same sample path , in a phase portrait .the four jumps are clearly visible .we consider the spectral galerkin discretisation of the second order damped wave equation which we discussed in detail in section [ sec : wave ] , and in particular , the example in ( [ inrobspeq])-([introbspoutnew ] ) with two stochastic inputs and two outputs , namely position and velocity of the middle of the string .we set and choose the weighting functions and the noise processes ( ) as in figure [ fig : wavetime ] .we fix the state dimension to and reduce the galerkin solution by bt and spa . for computing the trajectories of the sde we use the euler - maruyama method ( see , e.g. ) . and velocity with . ]figures [ fig : btr6 ] and [ fig : btr24 ] show the logarithmic errors for the position and the velocity of the middle of the string , if mor by bt is applied to the wave equation with stochastic inputs when reduced models of dimension and , respectively , are computed . and velocity with . 
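the trajectories behind such error plots can be generated with a basic euler - maruyama loop ; the sketch below drives a small toy system with one wiener component and one mean - zero compound poisson component , and all parameters ( jump rate , jump distribution , the 2 x 2 matrices ) are illustrative assumptions rather than the cable model used in the experiments .
....
import numpy as np

def euler_maruyama_levy(A, B, T=1.0, n_steps=1000, jump_rate=2.0, seed=0):
    """Euler-Maruyama for dX = A X dt + B dM(t), X(0) = 0, where the first
    noise component is a standard Wiener process and the second a compound
    Poisson process with Uniform(-1, 1) jumps (mean zero, square integrable)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.zeros(A.shape[0])
    path = [X.copy()]
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.normal()
        n_jumps = rng.poisson(jump_rate * dt)
        dJ = rng.uniform(-1.0, 1.0, size=n_jumps).sum()   # jumps in this step
        X = X + A @ X * dt + B @ np.array([dW, dJ])
        path.append(X.copy())
    return np.array(path)

if __name__ == "__main__":
    A = np.array([[0.0, 1.0], [-4.0, -0.5]])   # toy damped oscillator drift
    B = np.array([[0.0, 0.0], [1.0, 1.0]])
    path = euler_maruyama_levy(A, B)
    print(path[-1])
....
running the same noise realisation through the full and the reduced system and differencing the outputs gives the per - trajectory error curves of the kind shown in the figures .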
]the first two plots in each of the figures show the logarithmic mean error for both the position and the velocity .one observation is that the position is generally more accurate than the velocity ( about one order of magnitude here ) , since the trajectories are smoother .moreover , comparing the expected values of the errors of the reduced model of dimension ( first two plots in figure [ fig : btr6 ] ) with the one of dimension ( first two plots in figure [ fig : btr24 ] ) it can be seen that the latter ones are more accurate ( an improvement of about one order of magnitude ) as one would expect .the last two plots in figures [ fig : btr6 ] and [ fig : btr24 ] show the logarithmic errors for position and velocity for one particular trajectory , which is the same as the one for the sample we considered in section [ sec : wave ] . and velocity with . ]figures [ fig : spar6 ] and [ fig : spar24 ] show the logarithmic errors for the position and the velocity of the middle of the string , if mor by spa is applied to the wave equation with stochastic inputs when reduced models of dimension and , respectively , are computed .again , the first two plots show the mean errors while the last two plots show the errors in particular trajectories . and velocity with . ]we observe that the error in the position is smaller than the error in the velocity , and , the error is smaller if a larger dimension of the reduced order model is used .finally , we compare the error bounds for bt ( see theorem [ bterrorbound ] ) and spa ( see theorem [ spaerrorbound ] ) with the worst case mean errors , that is } \mathbb e\left\|y(t)-y_r(t)\right\|_{\mathbb{r}^p } \end{aligned}\ ] ] for both methods in table [ tab : errorbd ] , where is the full output of the original model and the rom output ..error and error bounds for both bt and spa and several dimensions of the reduced order model ( rom ) . [ cols="^,^,^,^,^,^ " , ] first , as expected both mean errors and error bounds are getting smaller the larger the size of the rom .moreover , both error bounds are rather tight and close to the actual error of the rom , e.g. the bounds , which are worst case bounds also provide a good prediction of the true time domain error . we also note that bt performs better than spa , both in actually computed mean errors as well as in terms of the error bounds .we have presented theory for balancing related model order reduction ( mor ) applied to linear stochastic differential equations ( sdes ) with additive lvy noise .in particular we extended the concepts of reachability and observability to stochastic systems and formulated a new reachability gramian .we then showed how balancing related mor which is well known for deterministic systems can be extended to sdes with additive lvy noise , e.g. 
leads to the solution of a lyapunov equation ( with a slightly different right hand side ) .we proved a general error bound for reduced ( asymptotically stable ) systems in this setting and then gave specific bounds for balanced truncation ( bt ) and singular perturbation approximation ( spa ) which depended on the neglected ( small ) hankel singular values of the linear system .we finally applied our theory to a second order damped wave equation , discretised using a spectral galerkin method , and controlled by lvy noise .the numerical results showed that mor can be applied successfully and that errors for both bt and spa are small , and the error bounds tight .let all stochastic processes appearing in this section be defined on a filtered probability space shall be right continuous and complete . ] .we denote the set of all cdlg square integrable -valued martingales with respect to by .let be scalar semimartingales .we set with for .then the ito product formula \end{aligned}\ ] ] for holds , see or for the special case of lvy - type integrals . by (* theorem 4.52 ) , the compensator process ] . since is a contraction semigroup , we have by the representation ( ) and lebesgue s theorem , the bound in ( [ inconcon2 ] ) tends to zero for .for we get which tends to zero for and hence \rightarrow 0\end{aligned}\ ] ] for by lebesgue s theorem .
when solving linear stochastic differential equations numerically , a high order spatial discretisation is usually used . balanced truncation ( bt ) and singular perturbation approximation ( spa ) are well - known projection techniques in the deterministic framework which reduce the order of a control system and hence reduce computational complexity . this work considers both methods when the control is replaced by a noise term . we provide theoretical tools such as stochastic concepts for reachability and observability , which are necessary for balancing related model order reduction of linear stochastic differential equations with additive lévy noise . moreover , we derive error bounds for both bt and spa and provide numerical results for a specific example which support the theory . keywords : model order reduction , balanced truncation , singular perturbation approximation , stochastic systems , lévy process , gramians , lyapunov equations . ams subject classifications : primary 93a15 , 93b40 , 93e03 , 93e30 , 60j75 ; secondary 93a30 , 15a24 .
we consider the quadratic optimization problem over the unit ball which is known as an -norm trust - region subproblem in nonlinear programming and grothendieck problem in combinatorial optimization .applications of ( qpl1(q ) ) can be also found in compressed sensing where is introduced to approximate , the number of nonzero elements of .if is negative or positive semidefinite , ( qpl1(q ) ) is trivial to solve , see .generally , ( qpl1(q ) ) is np - hard , even when the off - diagonal elements of are all nonnegative , see . in the same paper , hsia showed that ( qpl1(q ) ) admits an exact nonconvex semidefinite programming ( sdp ) relaxation , which was firstly proposed as an open problem by pinar and teboulle . very recently, different sdp relaxations for ( qpl1(q ) ) have been studied in .the tightest one is the following doubly nonnegative ( dnn ) relaxation due to bomze et al . : where is the vector with all elements equal to , is the set of symmetric matrices , means that is componentwise nonnegative , stands for that is positive semidefinite , is the standard inner product of and , and .\ ] ] notice that the set of extreme points of is , where is the -th column of the identity matrix .define =[i~-i ] \in \re^{n\times 2n}.\ ] ] then we have consequently , ( qpl1(q ) ) can be equivalently transformed to the following standard quadratic program ( qps ) : now we can see that exactly corresponds to the well - known doubly nonnegative relaxation of ( qps ) .moreover , as mentioned in , can be also derived by applying the lifting procedure to the following homogeneous reformulation of ( qps ) : a natural extension of ( qpl1(q ) ) is it is a relaxation of the sparse principal component analysis ( spca ) problem obtained by replacing the original constraint with ( [ l1:1 ] ) due to the following fact : a well - known sdp relaxation for ( qpl2l1(q ) ) is due to daspremont et al . : recently , xia extended the doubly nonnegative relaxation approach from ( qpl1(q ) ) to ( qpl2l1(q ) ) and obtained the following sdp relaxation : it was proved in that , where denote the optimal value of problem . unfortunately , this equivalence result is incorrect though it is true that .a first counterexample will be given in this paper ( see example 2 below ) to show it is possible .the other extension of ( qpl1(q ) ) is where and .( qplp ) is known as a special case of the grothendieck problem if the diagonal entries of vanish . according to the survey , there is no approximation and hardness results for the grothendieck problem with .though has an exact nonconvex sdp relaxation similar to that of , the computational complexity of is still unknown . since the unit balls ( ) are included in the unit ball , a trivial bound for is where is the largest eigenvalue of . as mentioned by nesterov in the sdp handbook ,no practical sdp bounds of are in sight for .recently , bomze used the hlder inequality to propose the following sdp bound in general , dominates when close to , though lacking a proof . in this paper , based on a new variable - splitting reformulation for the -constrained set, we establish a new sdp relaxation for ( qpl1(q ) ) , which is proved to dominate .we use a small example to show the improvement could be strict . then we extend the new approach to ( qpl2l1(q ) ) and obtain two new sdp relaxations .we can not prove the first new sdp bound dominates , though it was demonstrated by examples .however , under a mild assumption , the second new sdp bound dominates . 
finally , motivated by the model ( qpl2l1(q ) ), we establish a new sdp bound for ( qplp(q ) ) and show it is in general tighter than .the paper is organized as follows . in section 1, we propose a new variable - splitting reformulation for the -constrained set and then a new sdp relaxation for ( qpl1(q ) ) .we show it improves the state - of - the - art sdp - based bound . in section 2 ,we extend the new sdp approach to ( qpl2l1(q ) ) and study the obtained two new sdp relaxations . in section 3 , we establish a new sdp relaxation for ( qplp(q ) ) , which improves the existing upper bounds .conclusions are made in section 4 .in this section , we establish a new sdp relaxation for ( qpl1(q ) ) based on a new variable - splitting reformulation for the -constrained set . for any ,let then we have now we obtain a new variable - splitting reformulation of the -constrained set : it follows that applying the lifting procedure , we obtain the following new doubly nonnegative relaxation of ( qpl1(q ) ) we first compare the qualities of and .[ thm:1 ] .according to the definitions , we have and .it is sufficient to prove the first inequality .since is a feasible solution of , we have suppose .let be an optimal solution of ( ) .since , we have and therefore consequently , .similarly , we can show .now we assume .there is a vector such that and .that is , .it follows that .let be an optimal solution of ( ). then .moreover , since , we have .we conclude that if this is not true , then .define it is trivial to see that is also feasible to ( ) .moreover , we have which contradicts the fact that is a maximizer of ( ) .according to the equality ( [ eq1 ] ) , is also a feasible solution of ( ) .consequently , .the proof is complete .the following small example illustrates that could strictly improve .[ exam1 ] consider the following instance of dimension \ ] ] we modeled this instance by cvx 1.2 ( ) and solved it by sedumi ( ) within cvx .then we obtained that finally , we show that there are some cases for which has no improvement .this `` negative '' result is also interesting in the sense that in case we solve , we can fix ( ) at zeros in advance . [ thm:2 ] suppose . .let be an optimal solution of .suppose there is an index such that .let and define a symmetric matrix where and all other elements are zeros. then it follows that then , is also an optimal solution of .repeat the above procedure until we obtain an optimal solution of , denoted by , satisfying for .notice that is a feasible solution of .therefore , we have . combining this inequality with theorem [ thm:1 ], we can complete the proof .in this section , we extend the above new reformulation approach to ( qpl2l1(q ) ) and obtain two new semidefinite programming relaxations .similar to the reformulation ( [ x - y:1])-([x - y:4 ] ) , we have it follows that introducing , we obtain the following new sdp relaxation for ( qpl2l1(q ) ) : according to the definition , we trivially have : . [ prop ] .both and share the same relaxation : let .we have therefore , can be further relaxed to let be the eigenvalue decomposition of , where and are column - orthogonal . since we can further relax to the following linear programming problem : now it is trivial to verify that the proof is complete .[ cor ] suppose , then we have we are unable to prove , though we failed to have found an example such that .moreover , the following example shows that it is possible . 
as a by - product , we observe from the example , which means that the result ( theorem 3.2 ) is incorrect . notice that it is true that .[ exam2 ] consider the same instance of example [ exam1 ] and let .we modeled this instance by cvx 1.2 ( ) and solved it by sedumi ( ) within cvx .we obtained that thus , in order to theoretically improve , we consider it is trivial to see that however , may be not an upper bound of , which is indicated by the following example .[ exam3 ] consider the same instance of example [ exam1 ] and let .we modeled this instance by cvx 1.2 ( ) and solved it by sedumi ( ) within cvx .we obtained that so , we have to identify when is an upper bound of . [ thm:4 ]suppose we have .we first notice that the maximum eigenvalue problem is a homogeneous trust - region subproblem and hence has no local - non - global maximizer . therefore , suppose there is an optimal solution of , denoted by , satisfying , then also globally solves ( e ) , i.e. , consequently , the assumption ( [ as:2 ] ) implies that taking the transformation ( [ xy:1])-([xy:4 ] ) and then applying the lifting approach , we obtain the sdp relaxation .the proof is complete .the assumption ( [ as:2 ] ) is generally not easy to verify .however , when has a unique maximum eigenvalue , ( [ as:2 ] ) holds if and only if , where is the -normalized eigenvector corresponding to the maximum eigenvalue of .moreover , according to corollary [ cor ] and proposition [ prop ] , the assumption ( [ as:2 ] ) can be replaced by the following easy - to - check sufficient condition this section , we first propose a new sdp relaxation for ( qplp(q ) ) and then show it improves both ( [ b2 ] ) and ( [ bom ] ) . motivated by the hlder inequality ( [ hol ] ) and the model ( qpl2l1(q ) ) , we obtain the following new relaxation for ( qplp(q ) ) : taking the transformation ( [ xy:1])-([xy:4 ] ) and then applying the lifting approach , we obtain the following sdp relaxation for , which is very similar to : according to the definitions , the second inequality is trivial . it is sufficient to prove the first inequality .we first show .let . since has the following relaxation : let be the eigenvalue decomposition of , where and are column - orthogonal . according to ( [ x:1])-([x:3 ] ), we can further relax to the following linear programming problem : it is not difficult to verify that now we prove .notice that where the last inequality follows from theorem [ thm:1 ] .the proof is complete .we randomly generated a symmetric matrix of order using the following matlab scripts : .... rand('state',0 ) ; q = rand(n , n ) ; q = ( q+q')/2 ; .... and then compared the qualities of the three upper bounds , , and .the results were plotted in figure 1 , where the lower bound of qplp(q ) is computed as follows .solve and obtain the optimal solution .let be the unit eigenvectors corresponding to the maximum eigenvalues of and , respectively .then and are two feasible solutions of ( qplp(q ) ) and gives a lower bound of . from figure 1, we can see that for , though and can not dominate each other , both are strictly improved by .[ fig ] , and in dependence of .,scaledwidth=80.0% ]the sdp relaxation has been known to generate high quality bounds for nonconvex quadratic optimization problems . 
In this paper, based on a new variable-splitting characterization of the ℓ1 unit ball, we have established a new semidefinite programming (SDP) relaxation for the quadratic optimization problem over the ℓ1 unit ball (QPL1). We have shown that the newly developed SDP bound dominates the state-of-the-art SDP-based upper bound for (QPL1), and an example shows that the improvement can be strict. We have then extended the new reformulation approach to the relaxation problem of sparse principal component analysis (QPL2L1) and obtained two SDP formulations. Examples suggest that the first SDP bound is in general tighter than the DNN relaxation for (QPL2L1), although we are unable to prove this. Under a mild assumption, the second SDP bound dominates the DNN relaxation. Finally, we have extended our approach to the nonconvex quadratic optimization problem over the ℓp unit ball (QPLp) and shown that the new SDP bound dominates two upper bounds from the recent literature.

Y. Nesterov, Global quadratic optimization via conic relaxation, in Handbook of Semidefinite Programming, H. Wolkowicz, R. Saigal and L. Vandenberghe, eds., Kluwer Academic Publishers, Boston, 363-387, 2000.
In this paper, by improving the variable-splitting approach, we propose a new semidefinite programming (SDP) relaxation for the nonconvex quadratic optimization problem over the ℓ1 unit ball (QPL1). It dominates the state-of-the-art SDP-based bound for (QPL1). As extensions, we apply the new approach to the relaxation problem of sparse principal component analysis and to the nonconvex quadratic optimization problem over the ℓp unit ball, and then show the dominance of the new relaxations.
many of network service provider s pain points can be traced back to a fundamental issue : the lack of network visibility .for example , the network congestion collapse can be avoided in many cases if we know exactly when and where the congestion is happening or even better , if we can precisely predict it well before any impact is made ; sophisticated network attacks can be prevented through stateful and distributed network behavior analysis ; in order to monetize the traffic and provide application - centric service , user flows and their interaction with networks need to be tracked and understood .all these pose vast needs on generalized network data , either passing through networks or generated by network devices .not surprisingly , people start to model the network visibility as a big data problem. traditional algorithms can still apply , but more advanced big data analytics such as machine learning can provide unlimited opportunities to mine values from network data . if we can retrieve any data of interest in real time with direct helps from the data source ( i.e. , the network data plane ) , then most problems can be solved by the comprehensive data analytics at the application plane . therefore , our value proposition is to _ build a unified and general - purpose network data analytics platform with integrated data plane support to provide the omni network visibility_. this is in contrast to the ad - hoc solutions which only deal with one single problem a time with a special means to acquire the relevant data .sdn appears to be an ideal architecture to support the omni network visibility .the logically centralized controller is at the unique vantage point to see any data in networks .however , so far the sdn controller has limited view of network data and states , because the data plane is incapable of providing enough data to sustain all application requirements .therefore , we need to make the data plane programmable so any data can be collected if needed .more importantly , the programmability must be open to the upper layer applications so service provider can directly take advantage of this flexibility for application layer data collection .we further argue that the data plane should also support on - demand and real - time programming to meet the dynamic data needs from various runtime applications ( detailed in sec .[ motiv ] ) .this interactive programming capability is in contrast to the conventional programming with static languages .with such a data plane , we can add network probes anywhere and anytime through a standard interface .these passive probes will not alter the forwarding behavior but add the monitor points which are responsible to collect and preprocess data for applications .we make two major contributions in this paper .first , we devise the dynamic network probe ( dnp ) as a flexible and dynamic means for sdn data plane data collection .second , we show the possibility to build a universal network data analytics platform in which network devices play an integrated role .the first contribution forms the foundation for the second one .data collected from data plane provide input to the sdn control loop .what data to collect is determined by the purpose of the data - consuming application .the most basic application is routing and forwarding decision for which the network topology and link states need to be collected .other applications , such as traffic engineering , network security , network health monitoring , trouble shooting , and fault diagnosis , 
require different type of data . the data are either normal traffic packets that are filtered , sampled , or digested , or metadata generated by network devices to convey network states and status . in either case , data collection is meant to be passive and should not change the network forwarding behavior .the sdn controller analyzes the collected data and then makes decisions accordingly to actively change the network behavior .forwarding information base ( fib ) and access control list ( acl ) table updates are the most notable examples , as well as traffic manager ( tm ) parameters .a few other possible changes are less obvious . for example , some applications keep refining the ways to collect data based on previous observations .the control loop algorithm continuously interacts with the data plane and modify the data source and content .we can also imagine that in some other applications , new network and packet processing functions can be enabled to alter the forwarding behavior at runtime .this paper concerns with the passive data collection only .although the technology discussed in this paper can also be applied to actively modify the network behavior , we leave that topic to future work .ideally , we want to gain the full visibility to know any states anytime anywhere in the entire network data plane . in reality , this is extremely difficult if not impossible .theoretically , any network state can be inferred if all the traffic through it can be seen , so a simple option is to mirror all the raw traffic to servers where data analytical engine is running .however , this brute - force method requires to double the device port count and the traffic bandwidth , and poses enormous computing and storage cost . as a tradeoff, test access port ( tap ) or switch port analyzer ( span ) is used to selectively mirror only a portion of the overall traffic .network packet broker ( npb ) is deployed along with tap or span to process and distribute the raw data to various data analytical tools .there are some other ad - hoc solutions ( e.g. , sflow and everflow ) which can provide sampled and digested packet data and some traffic statistics .meanwhile , network devices also generate various log files to record miscellaneous events in the system .when aggregating all these solutions together , we can gain a relatively comprehensive view of the network .however , the main problem here is the lack of a unified platform to deal with the general data collection problem .moreover , each ad - hoc solution inevitably loses information due to data plane resource limitation which makes the data analytical results suboptimal , so does the follow - up data plane control based on the results .we also note that application s requests on data are often dynamic and realtime .when third party applications run on top of the network operator s controller , their data needs are diversified and unpredictable .even a single application may need to constantly adjust what data to collect ( e.g. , an elephant flow detector continues to narrow down the flow granularities and gather their statistics ) .trying to design an omnipotent system to support all possible runtime data requests is inviable because the resources required are prohibitive ( e.g. 
, even a simple counter per flow is impossible in practice ) .an alternative is to reprogram or reconfigure the data plane device whenever an unsupported data request appears .this is possible thanks to the recently available programmable chips and the trend to open the programmability to service providers .unfortunately , the static programming approach can not meet the realtime requirements due to the latency incurred by the programming and compiling process .the reprogramming process also risks breaking the normal operation of network devices .then a viable solution left to us is : whenever applications request data which is unavailable in the data plane , the data plane can be configured in real time to return the requested data .that is , we do not attempt to make the network data plane provide all data all the time . instead , we only need to ensure that _ any application can acquire any necessary data instantly whenever it actually asks for it ._ this data - on - demand model can support effectively `` omni '' network visibility , but it is still unthinkable with the current stiff and black - boxed data plane .so we first introduce the recent technology advance to ground the feasibility of our vision in sec .[ tech ] and [ pof ] .driven by sdn , network data plane is evolving to become open programmable .this means the network operators are in control of customizing the network device s function and forwarding behavior .several ongoing trends in industry , as shown in figure [ fig_opdp ] , are validating this idea .these trends are shaping new forms of network devices and inspiring innovative ways to use them .the first trend is led by the ocp networking project , which advocates the decoupling of the network operating system and the network device hardware .a common switch abstract interface ( sai ) allows applications to run on heterogeneous substrate devices .however , such devices are built with fixed function asics , which provide limited flexibility for application customization .the second trend is built upon the first one yet makes a big leap . chip and device vendors are working on opening the programmability of the npu , cpu , and fpga - based network devices to network operators .most recently , programmable asics has been proved feasible .high level language such as p4 is developed to make the network device programming easy and fast .now a network device can be incarnated into different functioning boxes depending on the program installed . however , such programming process is considered static .even a minor modification to the existing application requires to recompile the updated source code and reinstall the application .this incurs long deployment latency and may also temporarily break the normal data plane operation .hence , although this generation of trend is still in its infancy , we have seen some of its limitations .open programmable data plane should be stretched further to support runtime interactive programming in order to extend its scope of usability .dynamic application requirements can not be foreseen at design time , and runtime data plane modifications are required to be done in real time ( for agile control loop ) and on demand ( to meet data plane resource constraints ) . 
meanwhile , the data plane devices are capable of doing more complex things such as stateful processing without always resorting to controller for state tracking .this allows network devices to offload a significant portion of the data processing task and only hand off the preprocessed data to the data - requesting applications .we introduced the pof programming model to support this trend as shown in figure [ fig_pof1 ] .we still use static programing with high level languages to define the main data plane processing and forwarding function .but at runtime , whenever an application requires to make some modification to the data plane , we deploy the incremental modification directly through the runtime control channel . the key to make this dynamic and interactive programming work is to maintain a unified interface to devices for both configuration and runtime control , because both programming paths share the same data plane abstraction and use the same back - end adapting and mapping method .the data plane processing and forwarding function is abstracted as the standard intermediate representation ( ir ) .ir contains high - level and abstract objectives that structurize the data plane components and flow logic .these objectives are not deviated too much from the high level language such as p4 .this means for incremental changes , it is possible to provide apis directly at ir level .user can therefore interactively apply data plane changes and avoid the full source - code compiling process . of course, the network devices need to be flexible enough to support the interactive programming .we have implemented an npu - based hardware prototype to support it .the virtual network device running on cpu / gpu can certainly support it easily .asics and fpgas are also possible to support it but do need some architectural innovations which will be briefly discussed in section [ conclude ] .the forwarding application in data plane can be modeled in different ways ( e.g. , forces ) , but so far the most popular abstract model is the match - action table pipeline . in this model ,the streaming packets coming from some physical or logical input ports enter a pipeline . at each pipeline stage, some data extracted form the packet or the metadata is used as key to search a table .the matching entry triggers some associated action .the action may lead the packet to another pipeline stage or an output port .a single network device may contain multiple such pipelines , segregated by logical ports ( e.g. , controller , switch fabric , or black box modules ) .this simple yet expressive model is sufficient to describe arbitrary data plane forwarding applications .we make several enhancements to the basic model in order to support the runtime interactive programming .first , the actions are no longer statically associated with flow table entries as in openflow and p4 . as the chief programming point, actions contain primitive packet processing instructions and control instructions .each custom action can be dynamically loaded into network devices .each flow entry has a pointer field which holds a pointer to an action , and a parameter field which holds the parameters used by the associated action[multiblock footnote omitted ] .this mechanism allows users to change flow actions at runtime .it also allows each action to have different number of instructions , as long as the performance and action storage can sustain the size . 
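A toy model of the flow-entry/action-pointer mechanism just described is sketched below; the class and field names are ours and do not correspond to any real device API.

....
# A toy model (illustrative names, not a real device API) of a match-action
# table whose flow entries hold a pointer to a dynamically loaded action plus
# per-entry parameters, so a flow's behaviour can be changed at runtime by
# downloading a new action and switching the pointer.
class Table:
    def __init__(self):
        self.actions = {}   # action_id -> callable(packet, params)
        self.entries = {}   # match key  -> (action_id, params)

    def load_action(self, action_id, fn):
        """Runtime download of a new action; existing entries are untouched."""
        self.actions[action_id] = fn

    def set_entry(self, key, action_id, params):
        self.entries[key] = (action_id, params)

    def process(self, key, packet):
        action_id, params = self.entries[key]
        return self.actions[action_id](packet, params)

def forward(packet, params):
    packet["out_port"] = params["port"]
    return packet

def forward_and_count(packet, params):
    params["counter"][0] += 1            # a counter drawn from the shared pool
    return forward(packet, params)

table = Table()
table.load_action("fwd", forward)
table.set_entry("10.0.0.0/8", "fwd", {"port": 3})
table.process("10.0.0.0/8", {"dst": "10.1.2.3"})

# Later, an application installs a probe by switching the entry to a new
# action that also counts packets; the data path is never interrupted.
shared_counter = [0]
table.load_action("fwd+cnt", forward_and_count)
table.set_entry("10.0.0.0/8", "fwd+cnt", {"port": 3, "counter": shared_counter})
table.process("10.0.0.0/8", {"dst": "10.1.2.3"})
print("packets counted:", shared_counter[0])
....

The point of the design is that installing a probe amounts to downloading one new action and flipping a pointer, so the data path never has to be recompiled or restarted.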
with this mechanism , users are free to download new actions to change a flow s behavior on the run .even new pipeline stages ( i.e. , new tables ) can be inserted into the pipeline dynamically without interrupting the data path .second , some data plane resources are maintained in a dynamically shared pool .such resources include meters , counters , timers , and global registers and tables .the rationale is multifold : ( 1 ) network devices can not afford to statically allocated counters or meters to all the flows and flow tables . for example , a two - million - entry ip forwarding table alone can consume 64 mb memory for counters . since it is impossible to know a priori how to reasonably allocate these limited resources at design time ,counters and meters must be shared dynamically ; ( 2 ) it is easy to see if a counter or meter can be allocated to multiple flows , the aggregated flow statistics and measurement can be realized with ease .likewise , multiple counters and meters can also be allocated to a single flow so various aspects of the flow can be counted and metered with conditional instructions ; ( 3 ) a global register or table is essentially a persistent storage . when it is assigned to a flow, the flow packet can assess and update its value , therefore some stateful processing can be realized .similarly , a register can be assigned to multiple flows to enable inter - flow information sharing , and a flow can be assigned with multiple registers to hold more stateful data . at last , for better stateful processing support , actions are given the privilege to write tables ( i.e. , insert , modify , or delete table entries ) .this can effectively turn a table into a state table while the search key is no longer a flow signature but a state signature .this idea was originated in and further developed in .figure [ fig_dpmodel ] summarizes the major data plane features that support interactive programming .such data plane is so flexible that the inattentive use of interactive programming may accidentally break the device operation or even paralyze the entire network .however , we will see at least in one scenario we are immune from the negative effects yet can still enjoy the benefits of interactive programming .network probes are passive monitors which are installed at specific forwarding data path locations to collect specific data .dnps are _ dynamically _ deployed and revoked probes by applications at runtime .the customizable dnps can collect simple statistics or conduct more complex data preprocessing .since dnps may require actively modifying the existing data path pipeline beyond simple flow entry manipulation , these operations need to be done through interactive programming process . when a dnp is revoked , the involved shared resources are automatically recycled and returned back to the global resource pool .dnps can be deployed at various data path locations including port , queue , buffer , table , and table entry . when the data plane programmability is extended to cover other components ( e.g. , cpu load , fan speed , gps coordinations , etc . ) , dnps can be deployed to collect corresponding data as well .a few data plane objectives can be composed to form probes .these objectives are counter , meter , timer , timestamp , and register . 
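The sketch below illustrates, in the simplest possible form, how such a dynamically shared pool might behave: counters are allocated to probes on demand, can be shared across flows for aggregate statistics, and are recycled when a probe is revoked. All names and sizes are illustrative.

....
# A minimal sketch (not a device implementation) of a dynamically shared
# counter pool: counters are handed out on demand, may be shared by several
# flows, and return to the free list when the owning probe is revoked.
class CounterPool:
    def __init__(self, size):
        self.values = [0] * size
        self.free = list(range(size))      # indices not yet allocated

    def allocate(self):
        if not self.free:
            raise RuntimeError("counter pool exhausted; deny the new probe")
        return self.free.pop()

    def release(self, idx):
        self.values[idx] = 0
        self.free.append(idx)

pool = CounterPool(size=4)

# One aggregate counter shared by two flows, plus a per-flow byte counter.
agg = pool.allocate()
flow_a_bytes = pool.allocate()

def on_packet(flow, length):
    pool.values[agg] += 1                  # packets over both flows
    if flow == "A":
        pool.values[flow_a_bytes] += length

for flow, length in [("A", 100), ("B", 60), ("A", 40)]:
    on_packet(flow, length)
print(pool.values[agg], pool.values[flow_a_bytes])   # 3 packets, 140 bytes

# Revoking the probe returns its counters to the shared pool.
pool.release(agg)
pool.release(flow_a_bytes)
....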
combining these with the packet filter through flow table entry configuration, one can easily monitor and catch arbitrary states on the data plane .the simplest probe is just a counter .the counter can be configured to count bytes or packets and the counting can be conditional .the more complex probes are modeled as finite state machines ( fsm ) which are configured to capture specific events[multiblock footnote omitted ] .we have shown how stateful processing is supported in section [ pof ] .fsms essentially preprocess the raw stream data and only report the necessary data to controller - side applications .these complex dnps help reduce the device to controller bandwidth consumption and also alleviate the controller s processing load .applications can use pull mode or push mode to access probes and collect data .the normal counter probes are often accessed via pull mode .applications decide what time and how often the counter value is read . on the other hand ,the complex fsm probes are usually accessed in push mode .when the target event is triggered , a report is generated and pushed to the application .below is the pseudo code of a push - mode packet counter : whenever a counter is incremented by a predefined amount , a packet is generated to report the event to the subscribing application . ....cntr[i ] + + ; if(cntr[i ] = = threshold ) { gen_pkt(to_app , flow_id , now ) ; cntr[i ] = 0 ; } .... to install this probe , we first identify the target action . if the action exists , we generate a new action by inserting this piece of code to the code of old action .then we download this new action to the target device .next we issue a command to switch the flow pointer from to . nowthe probe in takes effect and can be deleted if no longer needed .sometimes the target action does not exist which means a new flow entry needs to be installed along with the action .a solid example is that the application wants to gather the statistics of a new flow that does not exist .we need to choose a target table first and then analyze the normal processing action to this flow at the table .we augment with the probe code to get and download to the target device .next we issue commands to insert to and associate to .note that if the flow overlaps with another flow in table , the action of may also need to be modified .timer is a special global resource .a timer can be configured to link to some action . when the time is up , the corresponding action is executed .for example , to get notification when a port load exceeds some threshold , we can set a timer with a fixed time - out interval , and link the timer to an action which reads the counter and generates the report packet if the condition is triggered .this way , the application avoids the need to keep pulling statistics from the data plane .the pseudo code of the action is as follows : .... if(cntr[i ] > = threshold ) { gen_pkt(to_app , port_id , cntr[i ] , now ) ; } cntr[i ] = 0 ; .... with the use of global registers and state tables , more complex fsm probes can be implemented .for example , to monitor the half - open tcp connections , for each syn request , we store the flow signature to a state table. 
then for each ack packet , the state table is checked and the matched entry is removed .the state table can be periodically pulled to acquire the list of half - open connections .the application can also choose to only retrieve the counter of half - open connections .when the counter exceeds some threshold , further measure can be taken to examine if a syn flood attack is going on .the pseudo code is shown below . ....if(tcp ) { if(syn ) { write flow_sig to stb ; cntr[i ] + + ; } else if(ack ) { remove flow_sig from stb ; cntr[i ] -- ; } } .... registers can be considered mini state tables which are good to track a single flow and a few state transitions .for example , to get the duration of a particular flow , when the flow is established , the state and the timestamp are recorded in a register ; when the flow is teared down , the flow duration can be calculated with the old timestamp and the new timestamp .in another example , we want to monitor a queue by setting a low water mark and a high water mark for the fill level . every time when an enqueue or a dequeue event happens ,the queue depth is compared with the marks and a report packet is generated when a mark is crossed .some probes are essentially packet filters which are used to filter out a portion of the traffic and mirrored the traffic to the application or some other target port for further processing .there are two ways to implement a packet filter : use a flow table that matches on the filtering criteria and specify the associated action ; or directly make decision in the action .an example of the former case is to filter all packets with a particular source ip address .an example of the latter case is to filter all tcp fin packets at edge .although we can always use a flow table to filter traffic , sometimes it is more efficient and convenient to directly work on the action . as being programmed by the application ,the filtered traffic can be further processed before being sent .two most common processes are digest and sample , both aiming to reduce the quantity of raw data .the digest process prunes the unnecessary data from the original packet and only pack the useful information in the digest packet .the sample process picks a subset of filtered traffic to send based on some predefined sampling criteria .the two processes can be used jointly to maximize the data reduction effect .an application may need to install multiple dnps in one device or across multiple devices to finish one data analytical task .for example , to measure the latency of any link in a network .we install a dnp on the source node to generate probe packets with timestamp .we install another dnp at the sink node to capture the probe packets and report both the source timestamp and the sink timestamp to the application for link latency calculation .the probe packets are also dropped by the sink dnp .the source dnp can be configured to generate probe packets at any rate .it can also generate just one probe packet per application request .the pseudo code of the sink dnp action is as follows : .... if(is_probe_packet ) { gen_pkt(to_app , old_time , now ) ; drop(this ) ; } .... using the similar idea , we can deploy dnps to measure the end - to - end flow latency or trace exact flow paths. 
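The following toy simulation puts the source and sink probes of the latency example together with the application-side calculation; packet fields, timestamps, and function names are illustrative only.

....
# A toy simulation (field and function names are illustrative) of the link
# latency measurement described above: a source DNP stamps probe packets with
# a timestamp, a sink DNP captures them, reports both timestamps to the
# application, and drops the probe so it never mixes with normal traffic.
import time

reports = []                               # what the application receives

def source_dnp(now):
    """Generate one probe packet carrying the source timestamp."""
    return {"is_probe": True, "old_time": now}

def sink_dnp(packet, now):
    """Capture probe packets, push a report to the application, drop them."""
    if packet.get("is_probe"):
        reports.append((packet["old_time"], now))
        return None                        # drop(this)
    return packet                          # normal traffic is untouched

# One probe per application request, as in the text.
pkt = source_dnp(time.monotonic())
time.sleep(0.002)                          # stand-in for link traversal
sink_dnp(pkt, time.monotonic())

for old, new in reports:
    print("measured link latency: %.3f ms" % ((new - old) * 1e3))
....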
the information can be piggybacked on packets of normal traffic or on generated probe packets .since potentially we may have many such tasks but each of such tasks may not be constantly needed and each consumes some network resources , making them dynamic is no doubt more efficient . in summary, dnp is a versatile tool to prepare and generate just - in - time data for data analytical applications .in the past , network data analytics is considered a separate function from networks .they consume raw data extracted from networks through ad hoc protocols and interfaces . with the open programmable data plane, we expect a paradigm shift that makes the data plane be an active component of the data analytical solution .the programmable in - network computing is efficient and flexible to offload the data preprocessing through interactive data plane programming . a universal network data analytical platform built on top of thisenables a tight and agile sdn control loop .while dnp is a passive data plane data collection mechanism , we need to provide a declarative interface for applications to use the target - specific dnps for data analytics .a proposed dynamic networking data analytical system is illustrated in figure [ fig_platform ] .an application translates its data requirements into some dynamic transactional queries .the queries are then compiled into a set of dnps targeting a subset of data plane devices and the instructions for data post - processing after data are collected from the data plane . after the dnps are deployed , each dnp conducts in - network data preprocessing and feeds the preprocessed data to the collector .the collector finishes the data post - processing and presents the results to the data - requesting application. a query can be either continuous or one - shot .the continuous query may require the application to continuously refine the existing dnps or deploy new dnps .when an application revokes its queries , the idle dnp resource is released .since one dnp may be subscribed by multiple applications , the runtime system needs to keep track of the active dnps . as our future work, we aim to build a dnp - based network data analytics platform which can address multiple business requirements of network service providers , including qoe measurement , security enforcement , customer care , and network optimization .we have built a hardware - based dnp prototype based on huawei ne40e device .the device line card is equipped with a 200gbps npu . the pof interface protocol , essentially an extension of openflow 1.4 , is used as the device programming interface .the preliminary results are promising .we can deploy an arbitrary counter probe and start to collect results in less than -ms without interrupting the normal service and impacting the forwarding performance .in contrast , to achieve the same effect with the static programming approach , we need to edit and recompile the source code of the entire design , delete the old device configuration , and download the new configuration .the compiling process consumes about second and the normal service is interrupted by about second for new program download .dnp reduces the deployment latency by 40 times .note that this is only for a small forwarding application with 3 flow tables and 10 instruction blocks . 
for a larger forwarding application ,the advantage of dnp is even more prominent , because deploying a dnp has a fixed cost but compiling and downloading a larger design consume more time .we also evaluated the dnp s performance impact to the normal forwarding throughput on our platform by keeping inserting more flow counters and observing the achievable forwarding throughput .the results are shown in figure [ fig_eval ] .the factors that affect the performance include the additional instructions and memory accesses incurred by the dnps .our analysis shows the extra memory accesses is the dominant factor .one counter operation needs one read and one write to an sram block reserved for counters .our platfom s counter memory can sustain at most 425 m accesses per second . the memory bandwidth and latencyinteract with the limited number of cores and threads , which eventually drags down the throughput .our evaluation shows that our platform can support line speed forwarding with up to 14k flow counters .when more complex dnps are applied , we expect the performance impact will be noticeable with fewer dnps .the results also serve as an evidence why dynamic rather than static probes are needed .interested readers may use the open source software provided at pof website to conduct the similar experiments on a software target .many technique challenges need to be addressed to realize dnp and the universal network data analytics platform on general sdn data plane .we list a few here and also provide our initial thoughts on potential solutions .\a ) allowing applications to modify the data plane has security and safety risks ( e.g. , dos attack ) .the counter measure is to supply a standard and safe api to segregate applications from the runtime system and provide applications limited accessibility to the data plane .each api can be easily compiled and mapped to standard dnps . an sql - like query language which adapts to the stream processing systemmight be feasible for the applications .\b ) when multiple correlated dnps are deployed across multiple network devices or function blocks , or when multiple applications request the same dnps , the deployment consistency needs to be guaranteed for correctness .this requires a robust runtime compiling and management system which keeps track of the subscription to dnps and controls the dnp execution time and order .\c ) the performance impact of dnps must be evaluated before deployment to avoid unintentionally reducing the forwarding throughput .fortunately , the resource consumption and performance impact of standard dnps can be accurately profiled in advance .a device is usually over provisioned and is capable of absorbing extra functions up to a limit .moreover , programmable data plane allows users to tailor their forwarding application to the bare bones so more resources can be reserved for probes . the runtime system needs to evaluate the resulting throughput performance before committing a dnp .if it is unacceptable , either some old dnps need to be revoked or the new request must be denied .\d ) while dnp is relatively easy to be implemented in software - based platform ( e.g. 
, npu and cpu ) , it is harder in asic - based programmable chips .architectural and algorithmic innovations are needed to support a more flexible pipeline which allows new pipeline stage , new tables , and new custom actions to be inserted at runtime through hitless in - service updates .an architecture with shared memory and flexible processor cores might be viable to meet these requirements . alternatively , dnps can be implemented using an `` out - of - band '' fashion .that is , some reserved pipeline stages are dedicated for dnps .relevant data and/or packets are configured to be passed to these stages for processing and counting .mainly working on fixed function network devices , sflow , planck , flexam , and everflow are all packet - based mirroring and sampling techniques , through setting flow filters or relying on particular sampling policies .such techniques can only provide partial data plane visibility with information loss .netsight aggressively collects the complete packet history in order to achieve full network visibility with the significant storage and computing cost at control plane . some other work ( e.g. , tpp , int , and flowradar ) builds on programmable network devices ,therefore non - packet data can be retrieved from data plane through programming means .tpp revives the idea of active network but keeps it simple .it allows packets to carry a tiny program in header so the program instructions can be executed by network devices and data collected along the forwarding path .int realizes the similar idea by using p4 as the high level programming language .flowradar can maintain full flow statistics with succinct data structure .this group of techniques falls into the static programming category in which the custom functions are predefined at design time . to collect data that were not supported by the current design ,one has to reprogram the data plane .some techniques are introduced to enhance the data plane data processing capability .opensketch proposes a set of data plane primitives to facilitate the network measurement .openstate devises a stateful data plane programming abstraction which can be used to implement fsms .insp provides a generic data plane packet generation api which can be used to preprocess and encapsulate collected data from data plane .dpt suggests to augment a timestamp header to all packets for various data analytical applications .interactive control on what data to collect based on network dynamics is discussed in some work ( e.g. , tpp , dream , and mozart ) . 
however , due to the lack of interactive programming capability , new data can only be collected through selecting existing probes or configuring new flow table entries .gigascope and path query provide sql - like languages for applications to initiate interactive queries for network states and data .frenetic queries manipulate flow entries with the assumption that each flow entry has its counters and rely on the controller - side runtime system to aggreagte the data collected .the high - level and network - wide queries can be compiled into dnps to collect preprocessed data direclty through data plane programming .while the network data plane is naturally a data streaming system , a data stream management system ( dsms ) would rely on dnps to realize in - network sampling and sketching techniques .based on ebpf , io - visor is a kernel i / o and networking infrastructure which can be used to implement virtual switch .it supports dynamic tracing by installing virtual probes converted from high level programs at runtime .this is inline with the idea of dnp .dnp is enabled by the most recent technology advances : open programmable data plane and interactive data plane programming .it takes advantage of the data plane processing capability and provides real - time and on - demand network visibility at low cost and high performance .we believe dnp is a solid stepping stone to a network data analytical platform which can help service providers to gain deeper insight and mine more values from their networks .
Effective SDN control relies on the network's data-collecting capability as well as on the quality and timeliness of the collected data. As the open programmable data plane becomes a reality, we further enhance it with support for runtime interactive programming in order to cope with application dynamics, optimize data plane resource allocation, and reduce control-plane processing pressure. Based on these latest technologies, we propose dynamic network probes (DNPs) as a means to support real-time and on-demand network visibility. DNPs serve as an important building block of an integrated networking data analytics platform in which the network data plane is an active component for in-network computing. In this paper, we describe the types of DNPs and their role in the big picture. We have implemented an NPU-based hardware prototype to demonstrate the feasibility and efficiency of DNPs, and we lay out the research challenges and future work needed to realize omni network visibility based on DNPs.
planar cellular structures formed by tessellation , tiling , or subdivision of plane into contiguous and nonoverlapping cells have always generated interest among scientists in general and physicist in particular because cellular structures are ubiquitous in nature .examples include acicular texture in martensite growth , tessellated pavement on ocean shores , agricultural land division according to ownership , grain texture in polycrystals , cell texture in biology , soap froths and so on .for instance , voronoi lattice formed by partitioning of a plane into convex polygons or apollonian packing generated by tiling a plane into contiguous and non - overlapping disks has found widespread applications .there are some theoretical models , both random and deterministic , developed either to directly mimic these structures or to serve as a tool on which one can study various physical problems .however , cells in most of the existing structures do not have sides which share with more than one side of other cells . in reality, the sides of a cell can share a part or a whole side of another cell . as a result, the number of neighbours of a cell can be higher than the number of sides it has .moreover , cellular structure may emerge through evolution where cells can be of different sizes and have different number of neighbours since nature favours these properties as a matter of rule rather than exception .a lattice with such properties can be of great interest as it can mimic disordered medium on which one can study percolation or random walk like problems . in this article, we propose a weighted planar stochastic lattice ( wpsl ) as a space - filling cellular structure where annealed coordination number disorder and size disorder are introduced in a natural way .the definition of the model is trivially simple .it starts with an initiator , say a square of unit area , and a generator that divides it randomly into four blocks .the generator thereafter is sequentially applied over and over again to only one of the available blocks picked preferentially with respect to their areas .it results in the partitioning of the square into ever smaller mutually exclusive rectangular blocks .a snapshot of the wpsl at late stage ( figure 1 ) provides an awe - inspiring perspective on the emergence of an intriguing and rich pattern of blocks .we intend to investigate its topological and geometrical properties in an attempt to find some order in this seemingly disordered lattice .if the blocks of the wpsl are regarded as isolated fragments then the model can also describe the fragmentation of a object by random sequential nucleation of seeds from which two orthogonal cracks parallel to the sides of the parent object are grown until intercepted by existing cracks .in reality , fragments produced in fracture of solids by propagation of interacting cracks is a formidable mathematical problem .the model in question can , however , be considered as the minimum model which should be capable of capturing the essential features of the underlying mechanism .the wpsl can also describe the martensite formation as we find its definition remarkably similar to the model proposed by rao _et al . _ which is also reflected in the similarity between the figure 1 and figure 2 of .yet another application , perhaps a little exotic , is the random search tree problem in computer science .searching for an order in the disorder is always an attractive proposition in physics . 
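A compact simulation sketch of the construction just described is given below: one block is picked with probability proportional to its area and split into four by a uniformly random interior point. Variable names are ours, and only the block geometry is tracked; the neighbour bookkeeping needed for the dual network studied later is omitted.

....
# A compact sketch of the WPSL construction: start from the unit square,
# repeatedly pick one existing block with probability proportional to its
# area, and divide it into four smaller blocks through a uniformly random
# point inside it.
import random

def wpsl(steps, seed=0):
    rng = random.Random(seed)
    blocks = [(0.0, 0.0, 1.0, 1.0)]               # (x, y, width, height)
    for _ in range(steps):
        areas = [w * h for (_, _, w, h) in blocks]
        i = rng.choices(range(len(blocks)), weights=areas)[0]
        x, y, w, h = blocks.pop(i)
        px = x + rng.random() * w                  # random interior point
        py = y + rng.random() * h
        blocks += [                                # four mutually exclusive blocks
            (x, y, px - x, py - y),
            (px, y, x + w - px, py - y),
            (x, py, px - x, y + h - py),
            (px, py, x + w - px, y + h - py),
        ]
    return blocks

blocks = wpsl(steps=2000)
print(len(blocks), "blocks, total area =",
      sum(w * h for _, _, w, h in blocks))
....

After t steps the lattice contains 1 + 3t blocks, and the total area remains exactly one, which is the simplest of the conserved quantities discussed later in the paper.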
to this end, we invoke the concept of complex network topology to quantify the coordination number disorder and the idea of multifractality to quantify the size disorder of the blocks in wpsl .it is interesting to note that the dual of the wpsl ( dwpsl ) obtained by replacing each block with a node at its center and common border between blocks with an edge joining the two corresponding vertices emerges as a network .the area of the respective blocks is assigned to the corresponding nodes to characterize them as their fitness parameter .nodes in the dwpsl , therefore , are characterized by their respective fitness parameter and the corresponding degree defined as the number of links a node has . for a decade , there has been a surge of interest in finding the degree distribution triggered by the work of a .-barabasi and his co - workers who have revolutionized the notion of the network theory by recognizing the fact that real networks are not static rather grow by addition of new nodes establishing links preferentially , known as the preferential attachment ( pa ) rule , to the nodes that are already well connected . incorporating boththe ingredients , growth and the pa rule , barabasi and albert ( ba ) presented a simple theoretical model and showed that such network self - organizes into a power - law degree distribution with .the phenomenal success of the ba model lies in the fact that it can capture , at least qualitatively , the key features of many real life networks .interestingly , we find that the dwpsl has all the ingredients of the ba model and its degree distribution follows heavy - tailed power - law but with exponent revealing that the coordination number of the wpsl is scale - free in character .in addition to characterizing the blocks of the wpsl by the coordination number , they can also be characterized by their respective length and width .we then find that the dynamics of the wpsl is governed by infinitely many conservation laws , namely the quantity remains independent of time where blocks are labbeled by index .for instance , total area is one obvious conserved quantity obtained by setting , sum of the cubic power of the length ( or width ) of all the existing blocks is a non - trivial conserved quantity obtained by setting ( or ) .interestingly , we find that when the block is populated with the fraction of the measure then the distribution of the population in the wpsl emerges as multifractal indicating further development towards gaining deeper insight into the complex nature of the wpsl we proposed .multifractal analysis was initially proposed to treat turbulence but later successfully applied in a wide range of exciting field of research . recently though it has received a renewed interest as it has been found that the wild fluctuations of the wave functions in the vicinity of the anderson and the quantum hall transition can be best quantified by using a multifractal analysis .the organization of this paper is as follows . in section 2, we give its exact algorithm of the model . in section 3 various structural topological properties of the wpsl and its dualare discussed in order to quantify the annealed coordination number disorder . in section 4we discuss the geometric properties of the wpsl in an attempt to quantify the annealed size disorder .finally , section 5 gives a short summary of our results .perhaps an exact algorithm can provide a better description of the model than the mere definition . 
in step one ,the generator divides the initiator , say a square of unit area , randomly into four smaller blocks .we then label the four newly created blocks by their respective areas and in a clockwise fashion starting from the upper left block ( see figure 2 ) . in each stepthereafter only one block is picked preferentially with respect to their respective area ( which we also refer as the fitness parameter ) and then it is divided randomly into four blocks . in general, the step of the algorithm can be described as follows .( i ) subdivide the interval ] , ] each of which represents the blocks labelled by their areas respectively .( ii ) generate a random number from the interval ] and $ ] respectively and hence the point mimics a random point chosen in the block .( v ) draw two perpendicular lines through the point parallel to the sides of the block in order to divide it into four smaller blocks .the label is now redundant and hence it can be reused .( vi ) label the four newly created blocks according to their areas , , and respectively in a clockwise fashion starting from the upper left corner . (vii ) increase time by one unit and repeat the steps ( i ) - ( vi ) _ ad infinitum_. with exactly neighbours as a function of time ., width=321,height=207 ]we first focus our analysis on the blocks of the wpsl and their coordination numbers .note that for square lattice , the deterministic counterpart of the wpsl , the coordination number is a constant equal to .however , the coordination number in the wpsl is neither a constant nor has a typical mean value , rather the coordination number that each block assumes in the wpsl is random .moreover , it is allowed to evolve with time and hence the coordination number disorder in the wpsl can be regarded as of annealed type . defining each step of the algorithm as one time unit and imposing periodic boundary condition in the simulation, we find that the number of blocks which have coordination number ( or neighbours ) continue to grow linearly with time ( see figure 3 ) . on the other hand , the number of total blocks at time in the lattice also grow linearly with time which we can write in the asymptotic regime .the ratio of the two quantities that describes the fraction of the total blocks which have coordination number is .it implies that becomes a global property since we find it independent of time and size of the lattice .we now take wpsl of fixed size or time and look into its structural properties . for instance, we want to find out what fraction of the total blocks of the wpsl of a given size has coordination number . for thiswe collect data for as a function of and obtain the coordination number distribution function where subscript indicates fixed time .interestingly , the same wpsl can be interpreted as a network if the blocks of the lattice are regarded as nodes and the common borders between blocks as their links which is topologically identical to the dual of the wpsl ( dwpsl ) . for the dwpsl network where data points represent average of independent realizations .the line have slope exactly equal to revealing power - law degree distribution with exponent ., width=321,height=207 ] is shown using using the same data as of figure 4 . the dotted line with slope equal to is drawn to guide our eyes ., width=321,height=207 ] in fact , the data for the degree distribution of the dwpsl network is exactly the same as the coordination number distribution function of the wpsl i.e. 
, .the plot of vs in figure 4 using data obtained after ensemble average over independent realizations clearly suggests that fitting a straight line to data is possible .this implies that the degree distribution decays obeying power - law however , note that figure 4 has heavy or fat - tail , benchmark of the scale - free network , which represents highly connected _ hub _ nodes .the presence of messy tail - end complicates the process of fitting the data into power - law forms , estimating its exponent , and identifying the range over which power - law holds .one way of reducing the noise at the tail - end of the degree distribution is to plot cumulative distribution .we therefore plot vs in figure 5 using the same data of figure 4 and find that the heavy tail smooths out naturally where no data is obscured .the straight line fit of figure 5 has a slope which indicates that the degree distribution ( figure 4 ) decays following power - law with exponent .we find it worthwhile to mention as a passing note that the mean coordination number reaches asymptotically to a constant value equal to .we thus find that in the large - size limit the wpsl develops some order in the sense that its annealed coordination number disorder or the degree distribution of its dual is scale - free in character .this is in sharp contrast to the quenched coordination number disorder found in the voronoi lattice where it is almost impossible to find cells which have significantly higher or fewer neighbours than the mean value .in fact , it has been shown that the degree distribution of the dual of the voronoi lattice is gaussian .the square lattice , on the other hand , is self - dual and its degree distribution is .further , it is interesting to point out that the exponent is significantly higher than usually found in most real - life network which is typically .this suggests that in addition to the pa rule the network in question has to obey some constraints .for instance , nodes in the wpsl are spatially embedded in euclidean space , links gained by the incoming nodes are constrained by the spatial location and the fitting parameter of the nodes .owing to its unique dynamics this was not unexpected . perhaps , it is noteworthy to mention that the the degree distribution of the electric power grid , whose nodes like wpsl are also embedded in the spatial position , is shown to exhibit power - law but with exponent .the power - law degree distribution has been found in many seemingly unrelated real life networks .it implies that there must exists some common underlying mechanisms for which disparate systems behave in such a remarkably similar fashion .barabasi and albert argued that the growth and the pa rule are the main essence behind the emergence of such power - law .indeed , the dwpsl network too grows with time but in sharp contrast to the ba model where network grows by addition of one single node with edges per unit time , the dwpsl network grows by addition of a group of three nodes which are already linked by two edges .it also differs in the way incoming nodes establish links with the existing nodes . to understand the growth mechanism of the dwpsl networklet us look into the step of the algorithm .first a node , say it is labeled as , is picked from the nodes preferentially with respect to the fitness parameter of a node ( i.e. , according to their respective areas ) .secondly , connect the node with two new nodes and in order to establish their links with the existing network . 
at the same time , at least two or more links of with other nodes are removed ( though the exact number depends on the number of neighbours already has ) in favour of linking them among the three incoming nodes in a self - organised fashion . in the process ,the degree of the node will either decrease ( may turn into a node with marginally low degree in the case it is highly connected node ) or at best remain the same but will never increase .it , therefore , may appear that the pa rule is not followed here .a closer look into the dynamics , however , reveals otherwise .it is interesting to note that an existing nodes during the process gain links only if one of its neighbour is picked not itself .it implies that the higher the links ( or degree ) a node has , the higher its chance of gaining more links since they can be reached in a larger number of ways .it essentially embodies the intuitive idea of pa rule .therefore , the dwpsl network can be seen to follow preferential attachment rule but in disguise .we again focus on the blocks of the wpsl but this time we characterize them by their length and width instead of the number of neighbours they have .then the evolution of their distribution function can be described by the following kinetic equation incorporating the -tuple millen transform in the kinetic equation yields iterating it to get all the derivatives of and then substituting them into the taylor series expansion of about one can write its solution in terms of generalized hypergeometric function where for symmetry reason and ^{{{1}\over{2}}}.\ ] ] one can immediately see that ( i ) is the total number of blocks and ( ii ) is the total area of all the blocks which is obviously a conserved quantity .the behaviour of in the long time limit is it implies that the system is in fact governed by several conservation laws namely , are independent of time for all which has been confirmed by numerical simulation ( see figure 6 ) .vs for are drawn using data collected from one realization ., width=321,height=207 ] we now focus on the distribution function that describes the concentration of blocks of length at time regardless of the size of their widths .then the moment of is defined as appreciating the fact that and using equation ( [ eq : aminus ] ) one can immediately write that note that had we focus on instead of we would have exactly the same result since their moments are identical , , due to symmetry reason .we find that the quantity and hence or is a conserved quantity .however , yet anotherinteresting fact is that although remains a constant against time in every independent realization , their exact numerical value fluctuates sample to sample ( see figure 7 ) .it clearly indicates lack of self - averaging or wild fluctuation .nonetheless , during each realization we can use as a measure to populate the block with the fraction of the total population .the corresponding `` partition function '' then is whose solution can be written immediately from equation ( [ eq : qmoment ] ) to give vs for four different realizations shows that the numerical value is different at every independent realization ., width=321,height=207 ] we find it instructive to express in terms of square root of the mean area as it gives the weighted number of squares needed to cover the measure which scales as where the mass exponent the non - linear nature of suggests that an infinite hierarchy of exponents is required to specify how the moments of the probabilities scales with .note that is the hausdorff - 
besicovitch ( h - d ) dimension of the wpsl since the bare number and follows from the conservation laws ( or normalization of the probabilities ) ., width=321,height=207 ] we now perform the legendre transformation of by using the lipschitz - hlder exponent as an independent variable to obtain the new function substituting it in equation ( [ weightednumber ] ) we find that since it is the dominant value of the integral obtained by the extremal conditions. we can infer from this that the number of squares needed to cover the psl with in the range to scales as where is the number of times the subdivided regions indexed by .it implies that a spectrum of spatially intertwined fractal dimensions are needed to characterize the measure which is always concave in character ( see figure 8) .it implies that the size disorder of the blocks are multifractal in character since the measure is related to size of the blocks .that is , the distribution of in wpsl can be subdivided into a union of fractal subsets each with fractal dimension in which the measure scales as .note that is always concave in character ( see figure 8) with a single maximum at correspond to the dimension of the wpsl with empty blocks . on the other hand , we find that the entropy associated with the partition of the measure on the support ( wpsl ) by using the relation in the definition of . then a few steps of algebraic manipulation reveals that exhibits scaling where the exponent obtained from it is interesting to note that is related to the generalized dimension , also related to the rnyi entropy in the information theory , given by ={{\tau(q)}\over{1-q}},\ ] ] which is often used in the multifractal formalism as it can also provide insightful interpretation .for instance , is the dimension of the support , is the rnyi information dimension and is known as the correlation dimension .we have proposed and studied a weighted planar stochastic lattice ( wpsl ) which has annealed coordination number and the block size disorder . we have shown that the coordination number disorder is scale - free in character since the degree distribution of its dual ( dwpsl ) , which is topologically identical to the network obtained by considering blocks of the wpsl as nodes and the common border between blocks as links , exhibits power - law .however , the novelty of this network is that it grows by addition of a group of already linked nodes which then establish links with the existing nodes following pa rule though in disguise in the sense that existing nodes gain links only if one of their neighbour is picked not itself . 
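The Legendre-transform step and the generalized (Rényi) dimensions discussed above can be illustrated numerically. The sketch below is a minimal example: it assumes only a vector of box probabilities at a single small scale, uses a binomial cascade as a test measure, and adopts the convention sum_i p_i^q ~ eps^tau(q) with D_q = tau(q)/(q-1), which may differ by a sign from the convention used in the text.

```python
import numpy as np

def multifractal_spectrum(p, eps, qs):
    """Single-scale estimate of the mass exponent tau(q), the generalized
    (Renyi) dimensions D_q and the f(alpha) spectrum via a numerical
    Legendre transform."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    tau = np.array([np.log((p ** q).sum()) / np.log(eps) for q in qs])
    with np.errstate(divide="ignore", invalid="ignore"):
        D = tau / (qs - 1.0)                  # undefined at q = 1 (information dimension needs a limit)
    alpha = np.gradient(tau, qs)              # alpha(q) = d tau / d q  (Lipschitz-Hoelder exponent)
    f = qs * alpha - tau                      # Legendre transform, the concave f(alpha) spectrum
    return tau, D, alpha, f

# toy usage: a binomial multifractal measure on 2**10 dyadic intervals
m, levels = 0.3, 10
p = np.array([1.0])
for _ in range(levels):
    p = np.concatenate([m * p, (1 - m) * p])
qs = np.linspace(-5, 5, 101)
tau, D, alpha, f = multifractal_spectrum(p, eps=2.0 ** -levels, qs=qs)
print(D[np.argmin(np.abs(qs))])               # D_0, the dimension of the support (close to 1 here)
```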
besides, we have shown that if the blocks of the wpsl are characterized by their respective length and width then we find remains a constant regardless of the size of the lattice .however , the numerical values of the conserved quantities except varies from sample to sample revealing absence of self - averaging - an indication of wild fluctuation .we have shown that if the blocks are occupied with a fraction of the measure equal to cubic power of their respective length or width then its distribution on the wpsl is multifractal nature .such multifractal lattice with scale - free coordination disorder can be of great interest as it has the potential to mimic disordered medium on which one can study various physical phenomena like percolation and random walk problems etc .nip acknowledges support from the bose centre for advanced study and research in natural sciences .rao m , sengupta s , and sahu h k 1995 phys .* 75 * , 2164 atkinson h v 1998 acta metall , * 36 * 469 j. c. m. mombach j c m , vasconcellos m a z , and de almeida r m c j. 1990 phys .d * 23 * , 600 weaire d and rivier n , 1984 contemp . phys . * 25 * , 59 okabe a , boots b , sugihara k , and chiu s n 2000 _ spatial tessellations - concepts and applications of voronoi diagrams _ ( chicester : wiley ) delaney g w , hutzler s and aste t 2008 phys .* 101 * 120602 hassan m k and rodgers g j 1996 phys .a * 218 * , 207 krapivsky p l and ben - naim e 1994 phys . rev .e * 50 * 3502 krapivsky p l and ben - naim e 1996 phys .* 76 * 3234 majumdar s n , dean d s and krapivsky p l 2005 pramana - j. phys . * 64 * , 1187 barabasi a l and r. albert r 1999 science * 286 * , 509 albert r and barabasi a l 2002 rev . mod . phys . * 74 * , 47 mandelbrot b b 1982 _ fractal geometry of nature _ ( new york : w. h. freeman ) evers f and mirlin a d 2008 rev .* 80 * , 1355 m. e. j. newman , siam review * 45 * , 167 ( 2003 ) .de oliveira m m , alves s g , ferreira s c , and dickman r 2008 phys . rev .e * 78 * , 031133 luke y l 1969 _ the special functions and their approximations i _ ( new york : academic press ) feder j 1988 fractals ( new york : plenum , new york ) hentschel h g e and procaccia i 1983 physica * 8d * , 435 sporting j , weickert j 1999 ieee trans .theory * 45 * , 1051
we propose a weighted planar stochastic lattice ( wpsl ) formed by the random sequential partition of a plane into contiguous and non - overlapping blocks and find that it evolves following several non - trivial conservation laws , namely is independent of time , where and are the length and width of the block . its dual , on the other hand , obtained by replacing each block with a node at its center and each common border between blocks with an edge joining the two nodes , emerges as a network with a power - law degree distribution , revealing scale - free coordination number disorder , since also describes the fraction of blocks having neighbours . to quantify the size disorder , we show that if the block is populated with then its distribution in the wpsl exhibits multifractality .
with the advent of the internet , many machine learning applications are faced with very large and inherently high - dimensional datasets , resulting in challenges in scaling up training algorithms and storing the data . especially in the context of search and machine translation , corpus sizes used in industrial practicehave long exceeded the main memory capacity of single machine .for example , experimented with a dataset with potentially 16 trillion ( ) unique features . discusses training sets with ( on average ) items and distinct features , requiring novel algorithmic approaches and architectures . as a consequence ,there has been a renewed emphasis on scaling up machine learning techniques by using massively parallel architectures ; however , methods relying solely on parallelism can be expensive ( both with regards to hardware requirements and energy costs ) and often induce significant additional communication and data distribution overhead . this work approaches the challenges posed by large datasets by leveraging techniques from the area of _ similarity search _ , where similar increase in dataset sizes has made the storage and computational requirements for computing exact distances prohibitive , thus making data representations that allow compact storage and efficient approximate distance computation necessary .the method of -bit minwise hashing is a very recent progress for efficiently ( in both time and space ) computing _ resemblances _ among extremely high - dimensional ( e.g. , ) binary vectors . in this paper , we show that -bit minwise hashing can be seamlessly integrated with linear support vector machine ( svm ) and logistic regression solvers . in , the authors addressed a critically important problem of training linear svm when the data can not fit in memory . in this paper, our work also addresses the same problem using a very different approach . in the context of search , a standard procedure to represent documents ( e.g. , web pages ) is to use -shingles ( i.e. , contiguous words ) , where can be as large as 5 ( or 7 ) in several studies .this procedure can generate datasets of extremely high dimensions .for example , suppose we only consider common english words .using may require the size of dictionary to be .in practice , often suffices , as the number of available documents may not be large enough to exhaust the dictionary . 
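As a small illustration of the w-shingle representation just described, the function below extracts the set of contiguous word w-grams of a document; only presence/absence is kept, matching the binary vectors used throughout the paper. The whitespace tokenization is a simplification of what a production shingling pipeline would do.

```python
def shingles(text, w=3):
    """Return the set of w-shingles (contiguous word w-grams) of a document.

    Only presence/absence is kept, i.e. the document is represented as a
    binary vector indexed by shingles; tokenization is a naive lowercase
    whitespace split for the sake of the example."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

doc = "the quick brown fox jumps over the lazy dog"
print(sorted(shingles(doc, w=3)))
```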
for -shingle data , normally only abscence / presence ( 0/1 ) information is used , as it is known that word frequency distributions within documents approximately follow a power - law , meaning that most single terms occur rarely , thereby making a -shingle unlikely to occur more than once in a document .interestingly , even when the data are not too high - dimensional , empirical studies achieved good performance with binary - quantized data .when the data can fit in memory , linear svm is often extremely efficient after the data are loaded into the memory .it is however often the case that , for very large datasets , the data loading time dominates the computing time for training the svm .a much more severe problem arises when the data can not fit in memory .this situation can be very common in practice .the publicly available _ webspam _ dataset needs about 24 gb disk space ( in libsvm input data format ) , which exceeds the memory capacity of many desktop pcs .note that _ webspam _ , which contains only 350,000 documents represented by 3-shingles , is still a small dataset compared to the industry applications .we propose a solution which leverages _ b - bit minwise hashing_. our approach assume the data vectors are binary , very high - dimensional , and relatively sparse , which is generally true of text documents represented via shingles .we apply -bit minwise hashing to obtain a compact representation of the original data . in order to use the technique for efficient learning, we have to address several issues : * we need to prove that the matrices generated by -bit minwise hashing are indeed positive definite , which will provide the solid foundation for our proposed solution . *if we use -bit minwise hashing to estimate the resemblance , which is nonlinear , how can we effectively convert this nonlinear problem into a linear problem ? * compared to other hashing techniques such as random projections , count - min ( cm ) sketch , or vowpal wabbit ( vw ) , does our approach exhibits advantages ?it turns out that our proof in the next section that -bit hashing matrices are positive definite naturally provides the construction for converting the otherwise nonlinear svm problem into linear svm .+ proposed solving the memory bottleneck by partitioning the data into blocks , which are repeatedly loaded into memory as their approach updates the model coefficients .however , the computational bottleneck is still at the memory because loading the data blocks for many iterations consumes a large number of disk i / os .clearly , one should note that our method is not really a competitor of the approach in .in fact , both approaches may work together to solve extremely large problems ._ minwise hashing _ has been successfully applied to a very wide range of real - world problems especially in the context of search , for efficiently computing set similarities .minwise hashing mainly works well with binary data , which can be viewed either as 0/1 vectors or as sets .given two sets , , a widely used ( normalized ) measure of similarity is the _ resemblance _ : in this method , one applies a random permutation on and .the collision probability is simply one can repeat the permutation times : , , ... , to estimate without bias , as the common practice of minwise hashing is to store each hashed value , e.g. 
, and , using 64 bits .the storage ( and computational ) cost will be prohibitive in truly large - scale ( industry ) applications .b - bit minwise hashing _ provides a strikingly simple solution to this ( storage and computational ) problem by storing only the lowest bits ( instead of 64 bits ) of each hashed value . for convenience ,we define the minimum values under : and , and define lowest bit of , and lowest bit of .[the_basic ] assume is large .^{2^b-1}}{1-\left[1-r_1\right]^{2^b}},\hspace{1 in } a_{2,b } = \frac{r_2\left[1-r_2\right]^{2^b-1}}{1-\left[1-r_2\right]^{2^b}}.\box\end{aligned}\ ] ] this ( approximate ) formula ( [ eqn_basic ] ) is remarkably accurate , even for very small .some numerical comparisons with the exact probabilities are provided in appendix [ app_basic_error ] .we can then estimate ( and ) from independent permutations : , , ... , , ^ 2 } = \frac{1}{k}\frac{\left[c_{1,b}+(1-c_{2,b})r\right]\left[1-c_{1,b}-(1-c_{2,b})r\right]}{\left[1-c_{2,b}\right]^2}\end{aligned}\ ] ] we will show that we can apply -bit hashing for learning without explicitly estimating from ( [ eqn_basic ] ) .this section proves some theoretical properties of matrices generated by resemblance , minwise hashing , or -bit minwise hashing , which are all positive definite matrices .our proof not only provides a solid theoretical foundation for using -bit hashing in learning , but also illustrates our idea behind the construction for integrating -bit hashing with linear learning algorithms .+ * definition * : a symmetric matrix satisfying , for all real vectors is called _ positive definite ( pd)_. note that here we do not differentiate pd from _ nonnegative definite_. [ thm_pd ] consider sets , , ... , .apply one permutation to each set and define .the following three matrices are all pd . 1 . the _ resemblance matrix _ , whose -th entry is the resemblance between set and set : 2 .the _ minwise hashing matrix _ : 3 . the _ b - bit minwise hashing matrix _ : , where is the -th lowest bit of .consequently , consider independent permutations and denote the b - bit minwise hashing matrix generated by the -th permutation .then the summation is also pd . + * proof : * a matrix is pd if it can be written as an inner product . because is the inner product of two d - dim vectors .thus , is pd .similarly , the b - bit minwise hashing matrix is pd because the resemblance matrix is pd because and is the -th element of the pd matrix .note that the expectation is a linear operation . our proof that the -bit minwise hashing matrix is pd provides us with a simple strategy to expand a nonlinear ( resemblance ) kernel into a linear ( inner product ) kernel .after concatenating the vectors resulting from ( [ eqn_m_b ] ) , the new ( binary ) data vector after the expansion will be of dimension with exactly ones .linear algorithms such as linear svm and logistic regression have become very powerful and extremely popular .representative software packages include svm , pegasos , bottou s sgd svm , and liblinear .given a dataset , , , the -regularized linear svm solves the following optimization problem : and the -regularized logistic regression solves a similar problem : here is an important penalty parameter .since our purpose is to demonstrate the effectiveness of our proposed scheme using -bit hashing , we simply provide results for a wide range of values and assume that the best performance is achievable if we conduct cross - validations . 
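The following sketch illustrates plain minwise hashing and its b-bit truncation on two toy sets. Random affine hash functions over a prime field stand in for the random permutations (a standard practical substitute, also mentioned later in the text); the plain estimator is the collision fraction of the minimum hash values, while the b-bit variant keeps only the lowest b bits. The bias correction through the constants of the theorem above is omitted here because their exact expressions are partly garbled in the extracted text, so only the raw b-bit collision fraction is reported.

```python
import numpy as np

rng = np.random.default_rng(2)
PRIME = (1 << 31) - 1          # Mersenne prime used for the affine hashes

def min_hashes(s, a, b):
    """Minimum of k random affine hashes h_i(x) = (a_i x + b_i) mod PRIME over
    the set s; random hash functions stand in for random permutations."""
    x = np.fromiter(s, dtype=np.int64)
    return np.array([((ai * x + bi) % PRIME).min() for ai, bi in zip(a, b)])

# two sparse binary vectors viewed as sets of nonzero coordinates
S1 = set(rng.choice(10_000, size=600, replace=False).tolist())
S2 = set(list(S1)[:400]) | set(rng.choice(np.arange(10_000, 12_000), size=200, replace=False).tolist())
true_R = len(S1 & S2) / len(S1 | S2)            # the resemblance R (0.5 by construction)

k, bbits = 500, 2
a = rng.integers(1, PRIME, size=k, dtype=np.int64)
b = rng.integers(0, PRIME, size=k, dtype=np.int64)
h1, h2 = min_hashes(S1, a, b), min_hashes(S2, a, b)

R_hat = (h1 == h2).mean()                       # plain minwise estimate of R
mask = (1 << bbits) - 1                         # keep only the lowest b bits of each hashed value
Pb_hat = ((h1 & mask) == (h2 & mask)).mean()    # raw b-bit collision fraction, still to be bias-corrected
print(true_R, R_hat, Pb_hat)
```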
in our approach , we apply independent random permutations on each feature vector and store the lowest bits of each hashed value .this way , we obtain a new dataset which can be stored using merely bits . at run - time , we expand each new data point into a -length vector .+ for example , suppose and the hashed values are originally , whose binary digits are .consider .then the binary digits are stored as ( which corresponds to in decimals ) . at run - time , we need to expand them into a vector of length , to be , which will be the new feature vector fed to a solver : clearly , this expansion is directly inspired by the proof that the -bit minwise hashing matrix is pd in theorem [ thm_pd ] .note that the total storage cost is still just bits and each new data vector ( of length ) has exactly 1 s .also , note that in this procedure we actually do not explicitly estimate the resemblance using ( [ eqn_basic ] ) .our experimental settings follow the work in very closely . the authors of conducted experiments on three datasets , of which the _ webspam _ dataset is public and reasonably high - dimensional ( , ) .therefore , our experiments focus on _ webspam_. following , we randomly selected of samples for testing and used the remaining samples for training .we chose liblinear as the tool to demonstrate the effectiveness of our algorithm .all experiments were conducted on workstations with xeon(r ) cpu ( w5590.33ghz ) and 48 gb ram , under windows 7 system .thus , in our case , the original data ( about 24 gb in libsvm format ) fit in memory . in applications for which the data do not fit in memory, we expect that -bit hashing will be even more substantially advantageous , because the hashed data are relatively very small .in fact , our experimental results will show that for this dataset , using and can achieve the same testing accuracy as using the original data .the effective storage for the reduced dataset ( with 350k examples , using and ) would be merely about 70 mb .we implemented a new resemblance kernel function and tried to use libsvm to train the _webspam _ dataset .we waited for * * over one week * * but libsvm still had not output any results .fortunately , using -bit minswise hashing to estimate the resemblance kernels , we were able to obtain some results .for example , with and , the training time of libsvm ranged from 1938 seconds ( ) to 13253 seconds ( ) .in particular , when , the test accuracies essentially matched the best test results given by liblinear on the original _ webspam _ data . therefore , there is a significant benefit of _ data reduction _ provided by -bit minwise hashing , for training nonlinear svm .this experiment also demonstrates that it is very important ( and fortunate ) that we are able to transform this nonlinear problem into a linear problem .since there is an important tuning parameter in linear svm and logistic regression , we conducted our extensive experiments for a wide range of values ( from to ) with fine spacings in ] is given by ( [ eqn_var_b ] ) and is the constant defined in theorem [ the_basic ] . + * proof * : the proof is quite straightforward , by following the conditional expectation formula : , and the conditional variance formula .recall , originally we estimate the resemblance by } ] , the additional term ^ 2}$ ] in ( [ eqn_var_b - bit+vw ] ) can be relatively large if is not large enough . 
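A minimal version of the expansion step just described is given below. The concrete hashed values of the paper's worked example are not recoverable from the extraction, so toy values are used; the particular ordering of the 2^b positions inside each of the k groups is likewise an arbitrary but consistent choice. The commented lines indicate how the expanded vectors could be fed to a liblinear-style solver (scikit-learn's LinearSVC wraps liblinear), which is an assumption about tooling rather than the authors' exact setup.

```python
import numpy as np

def expand_bbit(hashes, bbits):
    """Expand the k b-bit hashed values of one data point into a binary vector
    of length k * 2**b with exactly k ones (the one-hot encoding suggested by
    the positive-definiteness proof).  A dense vector is used for clarity; in
    practice one would emit the k nonzero indices directly in a sparse format
    such as the libsvm input format."""
    k = len(hashes)
    width = 1 << bbits
    v = np.zeros(k * width, dtype=np.int8)
    v[np.arange(k) * width + (np.asarray(hashes) & (width - 1))] = 1
    return v

# toy hashed values for k = 3, b = 2 (lowest two bits are 1, 0 and 2)
print(expand_bbit([5, 8, 14], bbits=2))

# hypothetical end-to-end use with a liblinear-backed solver; X_hashed would be
# an (n, k) array of hashed values and y the labels
# from sklearn.svm import LinearSVC
# X = np.vstack([expand_bbit(row, bbits=2) for row in X_hashed])
# clf = LinearSVC(C=1.0).fit(X, y)
```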
therefore , we should choose ( to reduce the additional variance ) and ( otherwise there is no need to apply this vw step ) .if , then may be a good trade - off , because .+ figure [ fig_16-bit+vw ] provides an empirical study to verify this intuition .basically , as , using vw on top of 16-bit hashing achieves the same accuracies at using 16-bit hashing directly and reduces the training time quite noticeably .[ fig_16-bit+vw ] we also experimented with combining 8-bit hashing with vw .we found that we need to achieve similar accuracies , i.e. , the additional vw step did not bring more improvement ( without hurting accuracies ) in terms of training speed when .this is understandable from the analysis of the variance in lemma [ lem_b - bit+vw ] .minwise hashing has been widely used in ( search ) industry and -bit minwise hashing requires only very minimal ( if any ) modifications ( by doing less work ) .thus , we expect -bit minwise hashing will be adopted in practice .it is also well - understood in practice that we can use ( good ) hashing functions to very efficiently simulate permutations . in many real - world scenarios ,the preprocessing step is not critical because it requires only one scan of the data , which can be conducted off - line ( or on the data - collection stage , or at the same time as n - grams are generated ) , and it is trivially parallelizable .in fact , because -bit minwise hashing can substantially reduce the memory consumption , it may be now affordable to store considerably more examples in the memory ( after -bit hashing ) than before , to avoid ( or minimize ) disk ios . once the hashed data have been generated , they can be used and re - used for many tasks such as supervised learning , clustering , duplicate detections , near - neighbor search , etc .for example , a learning task may need to re - use the same ( hashed ) dataset to perform many cross - validations and parameter tuning ( e.g. , for experimenting with many values in svm ) .+ nevertheless , there might be situations in which the preprocessing time can be an issue .for example , when a new unprocessed document ( i.e. n - grams are not yet available ) arrives and a particular application requires an immediate response from the learning algorithm , then the preprocessing cost might ( or might not ) be an issue .firstly , generating n - grams will take some time . secondly ,if during the session a disk io occurs , then the io cost will typically mask the cost of preprocessing for -bit minwise hashing .note that the preprocessing cost for the vw algorithm can be substantially lower .thus , if the time for pre - processing is indeed a concern ( while the storage cost or test accuracies are not as much ) , one may want to consider using vw ( or _ very sparse random projections _ ) for those applications .as data sizes continue to grow faster than the memory and computational power , machine - learning tasks in industrial practice are increasingly faced with training datasets that exceed the resources on a single server . a number of approaches have been proposed that address this by either scaling out the training process or partitioning the data , but both solutions can be expensive . 
in this paper , we propose a compact representation of sparse , binary datasets based on -bit minwise hashing .we show that the -bit minwise hashing estimators are positive definite kernels and can be naturally integrated with learning algorithms such as svm and logistic regression , leading to dramatic improvements in training time and/or resource requirements .we also compare -bit minwise hashing with the vowpal wabbit ( vw ) algorithm , which has the same variances as random projections .our theoretical and empirical comparisons illustrate that usually -bit minwise hashing is significantly more accurate ( at the same storage ) than vw for binary data .interestingly , -bit minwise hashing can be combined with vw to achieve further improvements in terms of training speed when is large ( e.g. , ) .10 dimitris achlioptas .database - friendly random projections : with binary coins . , 66(4):671687 , 2003 .alexandr andoni and piotr indyk .near - optimal hashing algorithms for approximate nearest neighbor in high dimensions . in _ commun .volume 51 , pages 117122 , 2008 .harald baayen . , volume 18 of _ text , speech and language technology_. kulver academic publishers , 2001 .michael bendersky and w. bruce croft .finding text reuse on the web . in _ wsdm _ , pages 262271 , barcelona , spain , 2009 .leon bottou .available at http://leon.bottou.org/projects/sgd .andrei z. broder . on the resemblance and containment of documents . in _ the compression and complexity of sequences _ , pages 2129 , positano , italy , 1997 .andrei z. broder , steven c. glassman , mark s. manasse , and geoffrey zweig .syntactic clustering of the web . in _www _ , pages 1157 1166 , santa clara , ca , 1997 .gregory buehrer and kumar chellapilla .a scalable pattern mining approach to web graph compression with communities . in _wsdm _ , pages 95106 , stanford , ca , 2008 .olivier chapelle , patrick haffner , and vladimir n. vapnik .support vector machines for histogram - based image classification ., 10(5):10551064 , 1999 .ludmila cherkasova , kave eshghi , charles b. morrey iii , joseph tucek , and alistair c. veitch . applying syntactic similarity algorithms for enterprise information management . in _kdd _ , pages 10871096 , paris , france , 2009 .flavio chierichetti , ravi kumar , silvio lattanzi , michael mitzenmacher , alessandro panconesi , and prabhakar raghavan . on compressing social networks .in _ kdd _ , pages 219228 , paris , france , 2009 .graham cormode and s. muthukrishnan .an improved data stream summary : the count - min sketch and its applications ., 55(1):5875 , 2005 .yon dourisboure , filippo geraci , and marco pellegrini .extraction and classification of dense implicit communities in the web graph ., 3(2):136 , 2009 .rong - en fan , kai - wei chang , cho - jui hsieh , xiang - rui wang , and chih - jen lin .liblinear : a library for large linear classification ., 9:18711874 , 2008 .dennis fetterly , mark manasse , marc najork , and janet l. wiener .a large - scale study of the evolution of web pages . in _ www _ , pages 669678 , budapest , hungary , 2003 .george forman , kave eshghi , and jaap suermondt .efficient detection of large - scale redundancy in enterprise file systems ., 43(1):8491 , 2009 . sreenivas gollapudi and aneesh sharma . an axiomatic approach for result diversification . in _ www _ , pages 381390 , madrid , spain , 2009 .matthias hein and olivier bousquet .hilbertian metrics and positive definite kernels on probability measures . 
in _ aistats _ , pages 136143 , barbados , 2005 .cho - jui hsieh , kai - wei chang , chih - jen lin , s. sathiya keerthi , and s. sundararajan . a dual coordinate descent method for large - scale linear svm . in _ proceedings of the 25th international conference on machine learning _ ,icml , pages 408415 , 2008 .yugang jiang , chongwah ngo , and jun yang .towards optimal bag - of - features for object categorization and semantic video retrieval . in _ civr _ , pages 494501 , amsterdam , netherlands , 2007 .nitin jindal and bing liu .opinion spam and analysis . in _pages 219230 , palo alto , california , usa , 2008 .thorsten joachims . training linear svms in linear time . in _ kdd _ , pages 217226 , pittsburgh , pa , 2006 .konstantinos kalpakis and shilang tang .collaborative data gathering in wireless sensor networks using measurement co - occurrence ., 31(10):19791992 , 2008 .ping li , trevor j. hastie , and kenneth w. church . very sparse random projections . in _kdd _ , pages 287296 , philadelphia , pa , 2006 .ping li and arnd christian knig . theory and applications b - bit minwise hashing . in _ commun .acm _ , to appear .ping li and arnd christian .b - bit minwise hashing . in _ www _ , pages 671680 , raleigh , nc , 2010 .ping li , arnd christian , and wenhao gui .b - bit minwise hashing for estimating three - way similarities . in _ nips _ , vancouver , bc , 2010 .gurmeet singh manku , arvind jain , and anish das sarma . etecting near - duplicates for web - crawling . in _ www _ , banff , alberta , canada , 2007 .marc najork , sreenivas gollapudi , and rina panigrahy .less is more : sampling the neighborhood graph makes salsa better and faster . in _ wsdm _ , pages 242251 ,barcelona , spain , 2009 .shai shalev - shwartz , yoram singer , and nathan srebro .pegasos : primal estimated sub - gradient solver for svm . in _ icml _ , pages 807814 , corvalis , oregon , 2007 .qinfeng shi , james petterson , gideon dror , john langford , alex smola , and s.v.n . vishwanathan .hash kernels for structured data ., 10:26152637 , 2009 .simon tong .lessons learned developing a practical large scale machine learning system .available at http://googleresearch.blogspot.com/2010/04/lessons-learned-developing-practical.html , 2008 .tanguy urvoy , emmanuel chauveau , pascal filoche , and thomas lavergne .tracking web spam with html style similarities ., 2(1):128 , 2008 .kilian weinberger , anirban dasgupta , john langford , alex smola , and josh attenberg .feature hashing for large scale multitask learning . in _ icml _ , pages 11131120 , 2009 .hsiang - fu yu , cho - jui hsieh , kai - wei chang , and chih - jen lin .large linear classification when data can not fit in memory . in _ kdd _ , pages 833842 , 2010 .note that the only assumption needed in the proof of theorem [ the_basic ] is that is large , which is virtually always satisfied in practice .interestingly , ( [ eqn_basic ] ) is remarkably accurate even for very small .figure [ fig_approximateerror ] shows that when , the absolute error caused by using ( [ eqn_basic ] ) is .the exact probability , which has no closed - form , can be computed by exhaustive enumerations for small .the vw algorithm provides a bias - corrected version of the count - min ( cm ) sketch algorithm .the key step in cm is to independently and uniformly hash elements of the data vectors to .that is with equal probabilities , where . 
for convenience ,we introduce the following indicator function : thus , we can denote the cm `` samples '' after the hashing step by and estimate the inner product by , whose expectation and variance can be shown to be \end{aligned}\ ] ] from the definition of , we can easily infer its moments , for example , the proof of the mean ( [ eqn_mean_cm ] ) is simple : = \sum_{i=1}^d u_{1,i}u_{2,i } + \frac{1}{k}\sum_{i\neq j } u_{1,i}u_{2,j}.\end{aligned}\ ] ] the variance ( [ eqn_var_cm ] is more complicated : the following expansions are helpful : ^ 2 = & \sum_{i=1}^d a_i^2b_i^2 + \sum_{i\neq j } a_i^2b_j^2 + 2a_i^2b_ib_j+2b_i^2a_ia_j+2a_ib_ia_jb_j\\\notag + & \sum_{i\neq j\neq c } a_i^2b_jb_c+b_i^2a_ja_c + 4a_ib_ia_jb_c+\sum_{i\neq j\neq c\neq t}a_ib_ja_cb_t\\\notag \left[\sum_{i\neq j } a_i b_j\right]^2 = & \sum_{i\neq j } a_i^2b_j^2+a_ib_ia_jb_j + \sum_{i\neq j\neq c } a_i^2b_jb_c+b_i^2a_ja_c + 2a_ib_ia_jb_c+\sum_{i\neq j\neq c\neq t}a_ib_ja_cb_t\\\notag \sum_{i=1}^da_ib_i\sum_{i\neq j } a_i b_j = & \sum_{i\neq j } a_i^2b_ib_j+b_i^2a_ia_j+\sum_{i\neq j\neq c } a_i^2b_jb_c\end{aligned}\ ] ] which , combined with the moments of , yield \\\notag = & ( k-1)\left[\frac{1}{k}\sum_{i\neq j } u_{1,i}u_{2,i}u_{1,j}u_{2,j}+\frac{2}{k^2}\sum_{i\neq j \neq c } u_{1,i}u_{2,i}u_{1,j}u_{2,c}+\frac{1}{k^3}\sum_{i\neq j\neq c\neq t } u_{1,i}u_{2,j}u_{1,c}u_{2,t}\right]\end{aligned}\ ] ] therefore , \end{aligned}\ ] ] the nice approach proposed in the vw paper is to pre - element - wise - multiply the data vectors with a random vector before taking the hashing operation .we denote the two resultant vectors ( samples ) by and respectively : where with equal probabilities . here, we provide a more general scheme by sampling from a sub - gaussian distribution with parameter and which include normal ( i.e. , ) and the distribution on with equal probabilities ( i.e. , ) as special cases .let .the goal is to show \end{aligned}\ ] ] we can use the previous results and the conditional expectation and variance formulas : , , , . as , we need to compute thus , by examining the expectation ( [ eqn_mean_cm ] ) , the bias of cm can be easily removed , because \end{aligned}\ ] ] is unbiased with variance ,\end{aligned}\ ] ] which is essentially the same as the variance of vw .we compare vw ( and random projections ) with -bit minwise hashing for the task of estimating inner products on binary data . with binary data ,i.e. , , we have , , .the variance ( [ eqn_var_vw ] ) ( by using ) becomes we can compare this variance with the variance of -bit minwise hashing . because the variance ( [ eqn_var_b ] ) is for estimating the resemblance , we need to convert it into the variance for estimating the inner product using the relation : for -bit minwise hashing , each sample is stored using only bits . for vw ( and random projections ) , we assume each sample is stored using 32 bits ( instead of 64 bits ) for two reasons : ( i ) for binary data , it would be very unlikely for the hashed value to be close to , even when ; ( ii ) unlike -bit minwise hashing , which requires exact bit - matching in the estimation stage , random projections only need to compute the inner products for which it would suffice to store hashed values as ( double precision ) real numbers . 
if , then -bit minwise hashing is more accurate than binary random projections .equivalently , when , in order to achieve the same level of accuracy ( variance ) , -bit minwise hashing needs smaller storage space than random projections .there are two issues we need to elaborate on : 1 . here, we assume the purpose of using vw is for _ data reduction_. that is , is small compared to the number of non - zeros ( i.e. , , ) .we do not consider the case when is taken to be extremely large for the benefits of _ compact indexing _ without achieving _data reduction_. 2 .because we assume is small , we need to represent the sample with enough precision .that is why we assume each sample of vw is stored using 32 bits .in fact , since the ratio is usually very large ( e.g. , ) by using 32 bits for each vw sample , it will remain to be very large ( e.g. , ) even if we only need to store each vw sample using 16 bits . without loss of generality, we can assume ( hence ) .figures [ fig_gb8 ] to [ fig_gb1 ] display the ratios ( [ eqn_g_vw ] ) for , respectively . in order to achieve high learning accuracies , -bit minwise hashing requires ( or even 8) . in each figure , we plot for and full ranges of and .we can see that is much larger than one ( usually 10 to 100 ) , indicating the very substantial advantage of -bit minwise hashing over random projections .note that the comparisons are essentially independent of .this is because in the variance of binary random projection ( [ eqn_g_vw ] ) the term is negligible compared to in binary data as is very large . to generate the plots, we used ( although practically should be much larger ) .+ * conclusion * : our theoretical analysis has illustrated the substantial improvements of -bit minwise hashing over the vw algorithm and random projections in binary data , often by 10- to -fold .we feel such a large performance difference should be noted by researchers and practitioners in large - scale machine learning .
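To make the comparison concrete, here is a small sketch of the VW-style estimator analyzed in this appendix: coordinates are hashed uniformly into k buckets and pre-multiplied by independent random signs, and the inner product of the two sketches is an unbiased estimate of the original inner product. Dimensions, sparsity levels and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

def vw_sketch(u, bins, signs, k):
    """Signed count-min (VW / feature-hashing) sketch: coordinate i is added to
    bucket bins[i] with sign signs[i]; the sketch is the vector of bucket sums."""
    w = np.zeros(k)
    np.add.at(w, bins, signs * u)
    return w

D, k = 100_000, 4096
u1 = (rng.random(D) < 0.01).astype(float)            # two sparse binary vectors
u2 = (rng.random(D) < 0.01).astype(float)
u2[u1 == 1] = rng.random(int(u1.sum())) < 0.5        # force some overlapping support

bins = rng.integers(0, k, size=D)                    # shared uniform bucket assignment
signs = rng.choice([-1.0, 1.0], size=D)              # shared random signs (the bias-removing step)
w1 = vw_sketch(u1, bins, signs, k)
w2 = vw_sketch(u2, bins, signs, k)
print(float(u1 @ u2), float(w1 @ w2))                # true inner product vs unbiased sketch estimate
```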
in this paper , we first demonstrate that -bit minwise hashing , whose estimators are positive definite kernels , can be naturally integrated with learning algorithms such as svm and logistic regression . we adopt a simple scheme to transform the nonlinear ( resemblance ) kernel into a linear ( inner product ) kernel , and hence large - scale problems can be solved extremely efficiently . our method provides a simple and effective solution to large - scale learning in massive and extremely high - dimensional datasets , especially when the data do not fit in memory . we then compare -bit minwise hashing with the vowpal wabbit ( vw ) algorithm ( which is related to the count - min ( cm ) sketch ) . interestingly , vw has the same variances as random projections . our theoretical and empirical comparisons illustrate that -bit minwise hashing is usually significantly more accurate ( at the same storage ) than vw ( and random projections ) on binary data . furthermore , -bit minwise hashing can be combined with vw to achieve further improvements in terms of training speed , especially when is large .
initially introduced in as a discrete approximation of langevin dynamics , the model of repeated quantum interactions has found since many applications ( quantum trajectories , stochastic control , etc . ) . in this workwe generalize this model by allowing _random _ interactions at each time step .our main focus is the long - time behavior of the reduced dynamics .our viewpoint is that of quantum open systems , where a `` small '' system is in interaction with an inaccessible environment ( or an auxiliary system ) .we are interested in the reduced dynamics of the small system , which is described by the action of quantum channels .when repeating such interactions , under some mild conditions on the spectrum of the quantum channel , we show that the successive states of the small system converge to the invariant density matrix of the channel .these considerations motivated us to consider random invariant states , and we introduce a new probability measure on the set of density matrices .there exists extensive literature on what is a `` typical '' density matrix .there are two general categories of such probability measures on : measures that come from metrics with statistical significance and the so - called `` induced measures '' , where density matrices are obtained as partial traces of larger , random pure states .our construction from section [ sec : fixed ] falls into the second category , since our model involves an open system in interaction with a chain of `` auxiliary '' systems .next , we introduce two models of random quantum channels . in the first model ,we allow for the states of the auxiliary system to be random . in the second one ,the unitary matrices acting on the coupled system are assumed random , distributed along the haar invariant probability on the unitary group , and independent between different interactions .since the ( random ) state of the system fluctuates , almost sure convergence does not hold , and we state results in the ergodic sense .the article is structured as follows .the section [ sec : rep_int ] is devoted to presenting the model of quantum repeated interactions and its description via quantum channels .section [ sec : spectral ] contains some general facts about the spectra of completely positive maps , as well as some related tools from matrix analysis .next , in section [ sec : fixed ] we study our first model , where the interaction unitary is a fixed , deterministic matrix .we prove that , under some assumptions on the spectrum of the quantum channel , the state of the system converges to the invariant state of the channel .it is at this time that we introduce the new ensemble of random density matrices , by transporting the unitary haar measure via the application which maps a channel to its invariant state .the final two sections are devoted to introducing two models of random quantum channels , one where the interaction unitary is constant and the auxiliary states are i.i.d .density matrices ( sec . 
[sec : random_env ] ) and another where the interaction unitaries are independent and haar distributed ( sec .[ sec : iid_unitaries ] ) .we introduce now some notation and recall some basic facts and terminology from quantum information theory .we write for the set of self - adjoint complex matrices and for the set of _ density matrices _ ( or states ) , = 1\} ] which verifies = \operatorname{tr}[a ( x \otimes \operatorname{i}_{\mathcal{k } } ) ] , \quad \forall x \in { \mathcal{b}}({\mathcal{h}}).\ ] ] we shall also extensively use the _ haar _ ( or uniform ) measure on the unitary group ; it is the unique probability measure which is invariant by left and right multiplication by unitary elements : this introductory section we give a description of the physical model we shall use in the rest of the paper : _ repeated quantum interactions_. the setting , a system interacting repeatedly with `` independent '' copies of an environment , was introduced by s. attal and y. pautrat in where it was shown that in the continuous limit ( when the time between interactions approaches zero ) , the dynamics is governed by a quantum stochastic differential equation . a different model , where after each interaction an indirect quantum measurement of the system is performed ,was considered by the second named author in and shown to converge in the limit to the so - called stochastic schrdinger equations . here ,we are concerned only with the discrete setting and with the limit of a large number of interactions .the study of random quantum trajectories is postponed to a later paper .consider a quantum system described by a complex hilbert space state .in realistic physical models , is usually a quantum system with relatively few degrees of freedom and it represents the object of interest of our study ; we shall refer to it as the _ small system_. consider also another quantum system which interacts with the initial small system .we shall call the _ environment _ and we denote by its hilbert state space . in this workwe consider finite dimensional spaces and .we shall eventually be interested in _ repeated _ interactions between and independent copies of , but let us start with the easier task of describing a single interaction between the `` small '' system and the environment .assume that the initial state of the system is a product state , where and are the respective states of the small system and the environment .the coupled system undergoes an unitary evolution and is the global state after the interaction .the unitary operator comes from a hamiltonian where the operators and are the free hamiltonians of the systems and respectively and represents the interaction hamiltonian. we shall be interested in the situation where , otherwise there is no coupling and the system and the environment undergo separate dynamics . in this general case , the evolution unitary operator is given by where is the interaction time .hence , the state of the coupled system after one interaction is given by since one is interested only in the dynamics of the `` small '' system , after taking the partial trace we obtain the final state of , .\ ] ] we now move on to describe successive interactions between and a chain of independent copies of . 
in order to do this , consider the countable tensor product where is the -th copy of the environment ( ) .this setting can be interpreted in two different ways : globally , as an evolution on infinite dimensional countable tensor product , or by discarding the environment , as a discrete evolution on .since we are interested only in the evolution of the `` small '' system , the latter approach is the better choice . from eq .( [ eq : single_interaction ] ) , we obtain the recurrence relation ,\ ] ] where are the successive states of the system at times and , and and are the interaction unitary and respectively the state of the auxiliary system for the -th interaction .note that at this stage we work in a general setting , without making any assumptions on the sequences and .we introduce now a more parsimonious description of repeated quantum interactions , via quantum channels .recall that a linear map is called -positive if the extended map is positive . is called _ completely positive _ if it is -positive for all ( in fact suffices ) and _ trace preserving _ if = \operatorname{tr}[x] ] . written as a block matrix in the basis defined in eq .( [ eq : product_basis ] ) , the matrix is diagonal , with diagonal blocks given by .writing in the same fashion and taking the partial trace , we obtain = \sum_{i , j=1}^{d'}b_j u_{ij } x u_{ij}^ * = \sum_{i , j=1}^{d'}(\sqrt{b_j } u_{ij } ) x ( \sqrt{b_j } u_{ij})^*,\ ] ] where are the blocks of the interaction unitary .one recognizes a kraus decomposition for , where the kraus elements are rescaled versions of the blocks of the stinespring matrix .moreover , if is a rank one projector then all the s are zero except one , hence the kraus decomposition we obtained has elements .since we shall be interested in repeated applications of quantum channels , it is natural that spectral properties of these maps should play an important role in what follows .one should note that most results of this section can be generalized to infinite dimensional hilbert spaces .the next lemma gathers some basic facts about quantum channels .since quantum channels preserve the compact convex set of density matrices , the first affirmation follows from the fixed point theorem of markov - kakutani . the second and the third assertions are trivial ( see for further results on norms of quantum channels ) , andthe last one is a consequence of 2-positivity .[ lem : channel ] let a quantum channel. then 1 . has at least one invariant element , which is a density matrix ; 2 . has trace operator norm of 1 ; 3 . has spectral radius of 1 ; 4 . satisfies the schwarz inequality if one looks at a channel as an operator in the hilbert space endowed with the hilbert - schmidt scalar product , then one can introduce , the _ dual map _ of .it is defined by the relation = \operatorname{tr}[\psi(x ) y ] , \quad \forall x , y \in { \mathcal{m}}_d({\mathbb{c}}).\ ] ] from kraus decomposition , one can obtain a kraus decomposition for the dual channel , .note that the trace preserving condition for , reads now .hence , the dual of a quantum channel is a unital ( not necessarily trace - preserving ) completely positive linear map .using this idea , one can see that the partial trace operation is the dual of the tensoring operation , .we now introduce some particular classes of positive maps which are known to have interesting spectral properties .let be a positive linear map . is called _ strictly positive _ ( or positivity improving ) if for all . 
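Returning to the repeated-interaction recurrence above, the map rho -> Tr_E[U (rho ⊗ beta) U*] is easy to iterate numerically. The sketch below fixes a Haar-random interaction unitary and a diagonal environment state (both chosen arbitrarily for illustration) and iterates the induced quantum channel until the state stabilizes; for Haar-almost every U the limit is the unique invariant state, as established later in the text.

```python
import numpy as np

rng = np.random.default_rng(4)
d, dE = 3, 2                                    # illustrative dimensions of the small system and one environment copy

def haar_unitary(n):
    """Haar-distributed unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(d * dE)                        # one fixed interaction unitary
beta = np.diag([0.7, 0.3]).astype(complex)      # arbitrary state of each environment copy

def channel(rho):
    """Phi(rho) = Tr_E[ U (rho tensor beta) U* ], the reduced one-step dynamics."""
    big = (U @ np.kron(rho, beta) @ U.conj().T).reshape(d, dE, d, dE)
    return np.trace(big, axis1=1, axis2=3)      # partial trace over the environment factor

rho = np.eye(d, dtype=complex) / d              # any initial state
for _ in range(500):
    rho = channel(rho)
print(np.round(rho, 4))                         # numerically, the invariant state of Phi
print(np.allclose(channel(rho), rho, atol=1e-8))
```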
is called _ irreducible _ if there is no projector such that for some .[ eg : unitary_conj ] let be a fixed unitary and consider the channel , .it is easy to check that the spectrum of is the set since maps pure states ( i.e. rank - one projectors ) to pure states , it neither irreducible , nor strictly positive . obviously , a strictly positive map is irreducible .in fact , the following characterization of irreducibility is known . a positive linear map is irreducible if and only if the map is strictly positive . irreducible unital maps which satisfy the schwarz inequality have very nice peripheral spectra .the proof of the following important result can be found in one of , in more general settings .if is a unital , irreducible map on which satisfies the schwarz inequality , then the set of peripheral ( i.e. modulus one ) eigenvalues is a ( possibly trivial ) subgroup of the unit circle .moreover , every peripheral eigenvalue is simple and the corresponding eigenspaces are spanned by unitary elements of .irreducible ( and , in particular , strictly positive ) quantum channels have desirable spectral properties , hence the interest one has for these classes of maps .as we shall see in section [ sec : fixed ] , irreducible maps are in certain sense generic . on the other hand ,the strict positivity condition is rather restrictive and not suitable for the considerations on this work .next , we develop these ideas , giving criteria for irreducibility and for strict positivity .let us start by analyzing strict positivity .subspaces of product spaces with high entanglement have received recently great attention . in this direction ,applications to the additivity conjecture are the most notable ones .the results in these papers , which rely on probability theory techniques deal with von neumann entropy .when one looks at the rank , projective algebraic geometry comes into play .indeed , possible states of the coupled system are modeled by the projective space .this space contains the _ product states _, as a subset called _ the segre variety_. the following lemma , a textbook result in algebraic geometry , is obtained by computing the dimension of the segre variety ( see ) .[ lem : entangled_subspace ] the maximum dimension of a subspace which does not contain any non - zero product elements is . as a rather simple consequence of this lemma , we obtain a necessary condition for strict positivity .let be strictly positive quantum map .then the choi rank of is at least .let be a minimal kraus decomposition of a strictly positive channel .for all , has full rank , and thus , for all non - zero , = \sum_{i=1}^k { \left| { \langle y ,l_i x\rangle}\right|}^2 > 0.\ ] ] hence , for all non - zero , there exist an such that \neq 0 ] is the submatrix of with rows indexed by and columns indexed by .the next result of is an easy consequence of the fact that if are eigenvalues of with linear independent vectors , then is an eigenvalue of with corresponding eigenvector .[ prop : shemesh_gen ] let be two complex matrices . 
if and have a common invariant subspace of dimension ( for ) , then their -th wedge powers have a common eigenvector , and hence ( we put ) \neq \{0\},\ ] ] or , equivalently , ^ * \cdot [ ( a^{\wedge k})^i,(b^{\wedge k})^j ] = 0.\ ] ] the preceding conditions turn out to be sufficient under more stringent assumptions on the matrices and ( see for further details ) .the main point of the two preceding results is that there exists an universal polynomial ] , the set is either equal to the whole set or it has haar measure 0 .we start by noticing that the real algebraic set is irreducible .this follows from the connectedness of ( in the usual topology ) and from the fact that irreducible components of a linear algebraic group are disjoint ( , 7.3 ) .the set is the intersection of the irreducible variety with the variety of zeros of the polynomial . if , then ; otherwise , the dimension of is strictly smaller than , the real dimension of .since the haar measure is just the integration of an invariant differential form , it has a density in local coordinates ( , ch .5 ) and hence in this case .[ thm : haar_1_unique ] let be a fixed density matrix of size . if is a random unitary matrix distributed along the haar invariant probability on , then almost surely .the proof goes in two steps .first , we show that is almost surely irreducible and then we conclude by a simple probabilistic argument .let us start by applying lemma [ lem : unitary_poly ] to show that a random quantum channel is almost surely irreducible .to this end , using eq .( [ eq : kraus_from_u ] ) , we obtain a set of kraus operators for which are sub - matrices of .consider two such kraus operators ( choose such that and take , ) . using proposition [ prop : irred_lat ] , to show irreducibility it suffices to see that and do not have a non - trivial common invariant subspace .let be the dimension of a potentially invariant common subspace of and .by the criterion in proposition [ prop : shemesh_gen ] , there exists a polynomial in the entries of and ( and thus in the entries of ) such that if is non - zero , then and do not share a -dimensional invariant space .note that can not be identically zero : for two small enough matrices without common invariant subspaces , one can build a unitary matrix such that , . by the lemma [ lem : unitary_poly ] , -almost all unitary matrices kraus operators and that do not have any -dimensional invariant subspaces in common .since the intersection of finitely many full measure sets has still measure one , almost all quantum channels are irreducible .consider now a random channel which we can assume irreducible .since the peripheral spectrum of an irreducible channel is a multiplicative subgroup of the unit circle , it suffices to show that for all element of the finite set , with haar probability one , is not an eigenvalue of .we use the same trick as earlier .consider such a complex number and introduce the polynomial ] does not depend on the eigenvectors of , but only on the eigenvalue vector .2 . for all unitary matrix , and have the same distribution ( we say that the measure is _ unitarily invariant _ ) .there exists a probability measure on the probability simplex such that if is a diagonal matrix sampled from and is an independent haar unitary on , then has distribution .in other words , the distribution of a random density matrix is determined by the distribution of its eigenvalue vector . 
to prove the first assertion, we show that for all , replacing with does not change the distribution of . to see this, note that by the invariance of the haar probability measure , the random matrices and have the same distribution .it follows that the same holds for the random channels and and thus for their invariant states .the second affirmation is proved in the same manner ( this time using a fixed unitary acting on ) and the third one is a trivial consequence of the second .in the previous section we considered repeated _identical _ quantum interactions of a system with a chain of identical environment systems .we now introduce classical randomness in our model by considering random states on the environment . in this model ,the unitary describing the interaction is a fixed deterministic matrix .the -th interaction between the small system and the environment is given by the following relation : ,\ ] ] where is a sequence of independent identically distributed random density matrices .notice that , since is constant , we use the shorthand notation .we are interested , as usual , in the limit . in this case however , the ( random ) channels do not have in general a common invariant state , so one has to look at ergodic limits .we use here the machinery developed by l. bruneau , a. joye and m. merkli in ( see for additional results in this direction ) . for the sake of completeness ,let us state their main result .[ thm : bjm ] let be a sequence of i.i.d. random contractions of with the following properties : 1 .there exists a constant vector such that for ( almost all ) ; 2 . .then the ( deterministic ) matrix ] is the rank - one spectral projector of ] is a quantum channel with an unique invariant state and , -almost surely , (\rho_0 ) = \theta , \quad \forall \rho_0 \in { \mathcal{m}^{1 , + } } _ d({\mathbb{c}}).\ ] ] let us start by introducing some notation .let , for some initial state , (\rho_0),\ ] ] and consider the dual operators which are , as described earlier , the adjoints of with respect to the hilbert - schmidt scalar product on .then , for a self - adjoint observable , one has = \operatorname{tr}\left [ \rho_0 \frac 1 n \sum_{n=1}^n ( \psi_1 \circ \cdots \circ \psi_n)(a ) \right].\ ] ] it is easy to see that the random operators satisfy the hypotheses of theorem [ thm : bjm ] on the hilbert space endowed with the hilbert - schmidt scalar product .indeed , the spectrum of is the complex conjugate of the spectrum of , hence is a contraction ( with respect to the hilbert - schmidt norm ) .moreover , with non - zero probability , is the unique invariant state of . from the theorem [ thm : bjm ] ,one obtains the existence of a non - random element such that , -almost surely , plugging this into eq .( [ eq : mu_n_a ] ) , one gets = \operatorname{tr}[\rho_0 { | \operatorname{i}_d \rangle \langle \theta |}a ] = { \langle \theta , a\rangle}_{\text{hs } } \operatorname{tr}[\rho_0 \operatorname{i}_d ] = \operatorname{tr}[\theta^ * a].\ ] ] since the set of density matrices is ( weakly ) closed , and . 
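The ensemble of random invariant states just defined can be sampled directly: draw a Haar unitary, form the channel with a fixed (here pure) environment state, and take the eigenvector of its d^2 x d^2 matrix representation associated with the eigenvalue 1. The dimensions and the pure beta below are illustrative choices, and the eigenvector route is merely a numerical shortcut equivalent to iterating the channel.

```python
import numpy as np

rng = np.random.default_rng(5)
d, dE = 2, 2
beta = np.diag([1.0, 0.0]).astype(complex)      # pure environment state, as in the induced construction

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def invariant_state(U):
    """Fixed point of Phi_U(rho) = Tr_E[U (rho tensor beta) U*], obtained as the
    eigenvector of the d^2 x d^2 matrix of Phi_U for the eigenvalue 1."""
    def phi(rho):
        big = (U @ np.kron(rho, beta) @ U.conj().T).reshape(d, dE, d, dE)
        return np.trace(big, axis1=1, axis2=3)
    M = np.column_stack([phi(E.reshape(d, d)).ravel() for E in np.eye(d * d, dtype=complex)])
    vals, vecs = np.linalg.eig(M)
    rho = vecs[:, np.argmin(np.abs(vals - 1))].reshape(d, d)
    rho = (rho + rho.conj().T) / 2              # remove numerical anti-Hermitian noise
    return rho / np.trace(rho).real

# a few samples from the ensemble of random invariant states
samples = [invariant_state(haar_unitary(d * dE)) for _ in range(5)]
print([np.round(np.linalg.eigvalsh(s), 3) for s in samples])
```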
the fact that is the _unique _ invariant state of ] and one gets the following corollary .let be a sequence of i.i.d .random density matrices and consider the repeated quantum interaction scheme with constant interaction unitary .assume that , with non - zero probability , the induced quantum channel has an unique invariant state .then , -almost surely , for all initial states , one has (\rho_0 ) = \theta,\ ] ] where is the unique invariant state of the deterministic channel } ] , then is the `` chaotic '' state .we now consider a rather different framework from the one studied in sections [ sec : fixed ] and [ sec : random_env ] .we shall assume that the interaction unitaries acting on the coupled system are random independent and identically distributed ( i.i.d . ) according to the unique invariant ( haar ) probability measure on the group .this is a rather non - conventional model from a physical point of view , but it permits to relax hypothesis on the successive states of the environment and to obtain an ergodic - type convergence result .as before , we start with a fixed state .the -th interaction is given by , where is a ( possibly random ) sequence of density matrices on and is a sequence of i.i.d .haar unitaries of independent of the sequence .note that we make no assumption on the joint distribution of the sequence ; in particular , the environment states can be correlated or they can have non - identical probability distributions .the state of the system after interactions is given by the forward iteration of the applications : since we made no assumption on the successive states of the environment , the sequence is not a markov chain in general .indeed , the density matrices were not supposed independent , hence ( and thus ) may depend not only on the present randomness , but also on past randomness , such as , , etc .although the sequence lacks markovianity , it has the following important invariance property .[ lem : u - iid - same - law ] let be a sequence of i.i.d .haar unitaries independent of the family and consider the sequence of successive states defined in eq .( [ eq : cocycle ] ) .then the sequences and have the same distribution .consider a i.i.d .sequence of -distributed unitaries independent from the s and the s appearing in eq .( [ eq : cocycle ] ) . to simplify notation, we put . we also introduce the following sequence of ( random ) unitary matrices : a simple calculation shows that it follows that , in order to conclude , it suffices to show that the family is i.i.d . 
and -distributed ( it is obviously independent of the s ) .we start by proving that , at fixed , is -distributed .since the families and are independent , one can consider realizations of these random variables on different probability space and .for a positive measurable function , one has ( we put ) & = { \mathbb{e}}[f((v_n \otimes \operatorname{i } ) u_n ( v_{n-1}^ { * } \otimes \operatorname{i } ) ) ] = \\ & = \int f((v_n(\omega^2_{n } ) \otimes \operatorname{i } ) u_n(\omega^1_{n } ) ( v_{n-1}^{*}(\omega^2_{n-1 } ) \otimes \operatorname{i } ) ) d{\mathbb{p}}(\omega^2_{n})d{\mathbb{p}}(\omega^1_{n})d{\mathbb{p}}(\omega^2_{n-1})\\ & = \int \left ( \int f((v_n(\omega^2_{n } ) \otimes \operatorname{i } ) u_n(\omega^1_{n } ) ( v_{n-1}^{*}(\omega^2_{n-1 } ) \otimes \operatorname{i } ) ) d{\mathbb{p}}(\omega^1_{n})\right ) d{\mathbb{p}}(\omega^2_{n})d{\mathbb{p}}(\omega^2_{n-1})\\ & \stackrel{(*)}{= } \int { \mathbb{e}}[f(u_n ) ] d{\mathbb{p}}(\omega^2_{n})d{\mathbb{p}}(\omega^2_{n-1 } ) = { \mathbb{e}}[f ( u_n)],\end{aligned}\ ] ] where we used in the fact that the haar probability on is invariant by left and right multiplication with constant unitaries .we now claim that the r.v . are independent . for some positive measurable functions ,one has & = { \mathbb{e}}\left[\prod_{k=1}^n f_k((v_k \otimes \operatorname{i } ) u_k ( v_{k-1}^ { * } \otimes \operatorname{i}))\right ] = \\ & = \int \prod_{k=1}^n f_k((v_k(\omega^2_{k } ) \otimes \operatorname{i } ) u_k(\omega^1_{k } ) ( v_{k-1}^{*}(\omega^2_{k-1 } ) \otimes \operatorname{i } ) ) \prod_{k=1}^n d{\mathbb{p}}(\omega^1_{k})d{\mathbb{p}}(\omega^2_{k})\\ & = \int \prod_{k=1}^n \left ( \int f_k((v_k(\omega^2_{k } ) \otimes \operatorname{i } ) u_k(\omega^1_{k } ) ( v_{k-1}^{*}(\omega^2_{k-1 } ) \otimes \operatorname{i } ) ) d{\mathbb{p}}(\omega^1_{k } ) \right ) \prod_{k=1}^n d{\mathbb{p}}(\omega^2_{k } ) \\ & \stackrel{(**)}{= } \int { \mathbb{e}}[f_k(u_k ) ] \prod_{k=1}^n d{\mathbb{p}}(\omega^2_{k } ) = \prod_{k=1}^n { \mathbb{e}}[f_k(u_k ) ] \stackrel{(***)}{= } \prod_{k=1}^n { \mathbb{e}}[f_k(\tilde u_k)].\end{aligned}\ ] ] again , we used in the equality the invariance of the -dimensional haar measure and in the fact that and have the same distribution .we conclude from the above result that although the successive states of the small system are random density matrices that can be correlated in a very general way , their joint probability distribution is invariant by independent unitary basis changes .in other words , the correlations manifest only at the level of the spectrum , the matrices being independently rotated by random haar unitaries .the ergodic convergence result in such a case is established in the following proposition .let be a sequence of random density matrices ( we make no assumption whatsoever on their distribution ) and a sequence of i.i.d .haar unitaries independent of . 
then , almost surely , since both sides of the previous equation are self - adjoint matrices , it suffices to show that for any self - adjoint operator we have = \operatorname{tr}[a]/d ] .using the fact that = \delta_{i , i ' } \delta_{j , j ' } \frac 1 d,\ ] ] one can easily check that that the random variables have mean /d ] is ^ 2 $ ] ) and that they are not correlated ( , if ) .it is a classical result in probability theory that in this case the ( strong ) law of large numbers holds and thus , almost surely , = \frac{\operatorname{tr}[a]}{d}.\ ] ] putting the previous proposition and lemma [ lem : u - iid - same - law ] together , one obtains the main result of this section , an ergodic - mean convergence result for the sequence of states of the `` small '' system .
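To make the statement above concrete, the following minimal numerical sketch iterates the repeated interaction scheme with i.i.d. Haar interaction unitaries and checks that the ergodic (Cesaro) mean of the system states approaches the chaotic state I/d. The helper names (`haar_unitary`, `partial_trace_env`, `random_pure_state`), the chosen dimensions, and the use of independent pure environment states are illustrative assumptions made here; as in the proposition, the limit should not depend on how the environment states are chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Sample a Haar-distributed unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))   # fix the column phases

def random_pure_state(k):
    """A random pure density matrix on C^k (one admissible choice of environment state)."""
    psi = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def partial_trace_env(rho_full, d, k):
    """Trace out the k-dimensional environment factor of a state on C^d (x) C^k."""
    return np.einsum('ikjk->ij', rho_full.reshape(d, k, d, k))

d, k, n_steps = 2, 2, 2000
rho = np.diag([1.0, 0.0]).astype(complex)        # arbitrary initial state of the small system
cesaro = np.zeros((d, d), dtype=complex)

for n in range(1, n_steps + 1):
    u = haar_unitary(d * k)                      # i.i.d. Haar interaction unitary
    beta = random_pure_state(k)                  # environment state (its law is irrelevant for the limit)
    rho = partial_trace_env(u @ np.kron(rho, beta) @ u.conj().T, d, k)
    cesaro += (rho - cesaro) / n                 # running ergodic mean (1/n) sum_k rho_k

print(np.linalg.norm(cesaro - np.eye(d) / d))    # should be small: the mean approaches I/d
```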
We consider a generalized model of repeated quantum interactions, in which a small system interacts in a random way with a sequence of independent quantum systems. Two types of randomness are studied in detail. One is provided by considering Haar-distributed unitaries to describe each interaction between the small system and a fresh copy of the environment; the other involves random quantum states describing each copy. In the limit of a large number of interactions, we present convergence results for the asymptotic state of the small system. This is achieved by studying spectral properties of (random) quantum channels which guarantee the existence of unique invariant states. Finally, this allows us to introduce a new physically motivated ensemble of random density matrices called the _asymptotic induced ensemble_.
in 2004 , etheridge and fleischmann introduced a stochastic spatial model of two interacting populations known as the symbiotic branching model , parametrized by a parameter ] , that is , the unique gaussian process with covariance structure & = & ( t_1\wedge t_2 ) \ell(a_1\cap a_2),\\ \mathbb{e } [ w^2_{t_1}(a_1)w^2_{t_2}(a_2 ) ] & = & ( t_1\wedge t_2 ) \ell(a_1\cap a_2),\\ \mathbb{e } [ w^1_{t_1}(a_1)w^2_{t_2}(a_2 ) ] & = & \varrho ( t_1\wedge t_2 ) \ell(a_1\cap a_2),\end{aligned}\ ] ] where denotes lebesgue measure , and .note that we work with a white noise in the sense of walsh .solutions of this model have been considered rigorously in the framework of the corresponding martingale problem in theorem 4 of , which states that , under suitable conditions on the initial conditions , a solution exists for all i = j n \neq m i = j n = m ] the cross - variation of two martingales .this is to avoid confusion with which will be defined to be the sum ( resp . , integral ) of the product of and .finally , the nonspatial symbiotic branching model is defined by the stochastic differential equations again , the noises are correlated with =\varrho t ] can be divided into disjoint subsets corresponding to different regimes .the focus of is second moment properties . in the discretesetting , but with a more general setup , growth of second moments is analyzed in detail .a moment duality is used to reduce the problem to moment generating functions and laplace transforms of local times of discrete - space markov processes . a precise analysis of those is used to derive intermittency and aging results which show that different regimes occur for , and .in contrast to , the present paper is not restricted to second moment properties .the aim is to understand the pathwise behavior of symbiotic branching processes better .[ rem : migration ] in this paper , we restrict ourselves to the simplest setups which already provide the full variety of results . for the discrete spatial model we thus restrict ourselves to the discrete laplacian instead of allowing more general transitions .this is not necessary ; see or for a construction of solutions and main properties for more general underlying migration mechanisms in the case .furthermore , we mainly restrict ourselves to homogeneous initial conditions and remark where results hold more generally . here, for nonnegative real numbers we denote by the constant functions . the paper is organized as follows :our main results are presented in section [ subsec : smr ] . before proving the results ,we collect basic properties of the symbiotic branching models and discuss the dualities that we need .this is carried out in section [ sec : bpd ] .the final sections are devoted to the proofs . 
in section [ sec : comnvlaw ] , proofs of the longtime convergence in law are given , and in section [ sec : moments ] we discuss the longtime behavior of moments .finally , in section [ sec : wavespeed ] we show how to use the results of section [ sec : moments ] to strengthen the main result of .before stating the main results , we briefly recall from that the state space of is given by pairs of tempered functions , that is , pairs of functions contained in where , and we think of as being topologized by the metric given in , equation ( 13 ) , yielding a polish space .the state space for is similar .it was not discussed in and so we present details in section [ sec : bpd ] .we begin with a result , generalizing theorem 1.5 of , on the longtime behavior of the laws of symbiotic branching processes in the recurrent case .[ prop : convlaw ] suppose is a spatial symbiotic branching process in the recurrent case with , and initial conditions .let and be two brownian motions with covariance =\varrho t,\qquad t\ge0,\ ] ] and initial conditions .further , let be the first exit time of the correlated brownian motions from the upper right quadrant . then , weakly in , \rightarrow p^{u , v}[(\bar b^1_{\tau } , \bar b^2_{\tau})\in\cdot]\ ] ] as . here, denotes the pair of constant functions on , respectively , ( ) taking the values of the stopped brownian motions .in particular , the proposition shows ultimate extinction of one species in law .[ all ] for simplicity , proposition [ prop : convlaw ] is formulated for constant initial conditions even though the result holds more generally .theorem 1.5 of ( the case ) was extended in to nondeterministic initial conditions : for fixed let be the set of probability measures on such that and \ , d\nu ( a , b ) = 0 \qquad\mbox{for all } x \in{\mathbb{r}}.\ ] ] here , denotes the transition semigroup of brownian motion ( the definition for the discrete case is similar ) .the proof of can also be applied to and , thus , proposition [ prop : convlaw ] holds in the same way for initial distributions .the restriction to arises from our method of proof which exploits a self - duality of the process which gives no information for .let us briefly discuss the behavior of the limiting distributions in the boundary cases which are well known in the literature and fit neatly into our result .first , suppose is a solution of the stepping stone model ( see example [ ex1 ] ) and ] hitting before , which is , and hence matches ( [ eq : shigalimit ] ) .second , let be a solution of the parabolic anderson model with brownian potential ( see example [ ex2 ] ) and constant initial condition . in it was shown that as discussed above , when viewed as a symbiotic branching process with , this implies from the viewpoint of two perfectly positive - correlated brownian motions , we obtain the same result since they simple move on the diagonal dissecting the upper right quadrant until they eventually get absorbed in the origin , that is , almost surely . to summarize, we have seen that the weak longtime behavior ( in the recurrent case ) of the classical models connected to symbiotic branching is appropriately described by correlated brownian motions hitting the boundary of the upper right quadrant .in contrast to extinction in law , the almost - sure behavior is very different . 
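As a rough numerical illustration of the picture just summarized, the sketch below integrates the nonspatial symbiotic branching equations with a crude Euler scheme (both coordinates driven by sqrt(kappa*u*v) times correlated Brownian motions) and checks two features of the limit: on most paths one of the two types is driven close to extinction, while the empirical means stay near their initial values, consistent with the martingale property and with the description of the limit as a pair of correlated Brownian motions stopped on exiting the upper right quadrant. The parameter values, the time horizon and the clipping at zero are ad hoc choices made for the illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, kappa = -0.5, 1.0                    # illustrative correlation and branching rate
u0, v0 = 1.0, 2.0                         # asymmetric start, so the two extinction events differ
n_paths, n_steps, dt = 10000, 5000, 0.01

u = np.full(n_paths, u0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    sig = np.sqrt(kappa * u * v * dt)     # common diffusion coefficient sqrt(kappa*u*v)
    u = np.maximum(u + sig * z1, 0.0)     # Euler step, clipped at 0 where the noise vanishes
    v = np.maximum(v + sig * z2, 0.0)

print("mean u, mean v at T       :", u.mean(), v.mean())                 # stay close to u0, v0
print("P[min(u,v) < 0.01] at T   :", (np.minimum(u, v) < 0.01).mean())   # one type near extinction
print("P[type u dominates]       :", (u > v).mean())
```

The last frequency can be compared with the probability that a correlated Brownian motion started at (u0, v0) leaves the quadrant through the axis where the second coordinate vanishes, which is how the limit law is described above.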
in the recurrent case for the mutually catalytic branching model , cox and klenke showed that , almost surely , there is no longtime local extinction of any type , but in fact the locally predominant type changes infinitely often .it is not hard to see that the same is true for symbiotic branching with .we do not give a proof since it follows from proposition [ prop : convlaw ] along the same lines as in .[ prop : pb ] let , and suppose is a spatial symbiotic branching process in the recurrent case with initial distribution . then, for all and bounded , = 1,\ ] ] respectively , for bounded , = 1.\ ] ] again , as in remark [ all ] , the result holds for random initial conditions of the class .note that proposition [ prop : pb ] depends strongly on the spatial structure since in the nonspatial model almost sure convergence holds ( see proposition [ prop : sconv ] ) .in the second moments of symbiotic branching processes are analyzed .this particular case admits a detailed study since a moment duality ( see lemma [ la : mdual ] ) has a particularly simple structure which allows one to reduce the study of the moments to that of moment generating functions and laplace transforms of local times .here we are interested in the behavior of moments as tends to infinity .the two available dualities ( self - duality and moment duality ) are combined in two steps .first , a self - duality argument combined with an equivalence between bounded moments of the exit time distribution and of the exit point distribution for correlated brownian motions stopped on exiting the first quadrant is used to understand the effect of .it turns out that for any there are critical values , independent of , dividing regimes in which the moments ] and ] , ] .note that the theorem provides information about all positive real moments , not just integer moments . in the area below the critical curve in figure [ fig : cl ] ,the moments remain bounded . on and above the critical curve , in the recurrent case, the moments grow to infinity .[ shift ] for the curve could be extended with . in terms of the previous theoremthis makes sense since for , symbiotic branching processes with initial conditions are bounded by .this is justified by a simple observation : for initial conditions symbiotic branching processes with are solutions of the stepping stone model and , hence , bounded by .uniqueness in law of solutions implies that solutions with initial conditions are equal in law to solutions times solutions with initial conditions . with this first understanding of the effect of on moments, we may discuss integer moments for the discrete - space model in more detail .let us first recall some known results for solutions of the parabolic anderson model ( see example [ ex2 ] ) where only the parameter appears .using it s lemma , one sees that ] , , for any , for any there is a critical such that if . combined with theorem [ thm : mc ] , parts ( ii ) and ( iii ) emphasize the `` criticality '' of the critical curve : for , moments stay bounded , for moments grow subexponentially fast to infinity , and for moments grow exponentially fast if is large enough . 
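The mechanism behind the critical curve can be probed numerically. As described above, boundedness of the moments is tied to finiteness of the corresponding moments of the exit point of correlated Brownian motions stopped on leaving the first quadrant, so the sketch below simply estimates those exit-point moments by Monte Carlo for a few values of the correlation and of the exponent p. Monte Carlo cannot certify that a moment is infinite, of course; the point is only the qualitative trend that high moments become unstable, and do so for smaller p as the correlation increases. Step size, horizon, starting point and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def exit_points(rho, n_paths=10000, dt=0.02, horizon=100.0, start=(1.0, 1.0)):
    """Correlated planar Brownian motion started in the open quadrant, frozen at its first exit."""
    b1 = np.full(n_paths, start[0])
    b2 = np.full(n_paths, start[1])
    alive = np.ones(n_paths, dtype=bool)
    step = np.sqrt(dt)
    for _ in range(int(horizon / dt)):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        b1 = np.where(alive, b1 + step * z1, b1)
        b2 = np.where(alive, b2 + step * z2, b2)
        alive &= (b1 > 0) & (b2 > 0)
    # the few paths still alive at the horizon are simply truncated
    return np.maximum(b1, 0.0), np.maximum(b2, 0.0)

for rho in (-0.8, 0.0, 0.8):
    x, _ = exit_points(rho)
    moments = [np.mean(x**p) for p in (1, 2, 4, 8)]
    print(f"rho = {rho:+.1f}   E[(B1_tau)^p] for p = 1, 2, 4, 8 :", np.round(moments, 2))
```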
as discussed above , for the parabolic anderson modelit is natural that in the transient case perturbing the critical case does not immediately yield exponential growth , whereas perturbing the recurrent case does immediately lead to exponential growth .it is clear that in the transient case the gap in ( iii ) of theorem [ thm :im ] is really necessary : for small moments of the parabolic anderson model are bounded . since moments of symbiotic branching are dominated by moments of the parabolic anderson model ( see lemma [ la : mdual ] ) , for small moments are bounded for all .in the case there seems to be no reason why exponential growth should fail .unfortunately , in this case there is no moment duality and hence the most useful tool to analyze exponential growth is not available .[ conj:1 ] in the recurrent case the moment diagram for symbiotic branching ( figure [ fig : cl ] ) describes the moments as follows : pairs below the critical curve correspond precisely to bounded moments , pairs at the critical curve correspond to moments which grow subexponentially fast to infinity and pairs above to the critical curve correspond to exponentially growing moments . a deeper understanding of the lyapunov exponents as functions of remains mainly open ( for an upper bound see proposition [ up ] ) . for second moments[ ] this is carried out in .it is shown that exponential growth holds for and arbitrary in the recurrent case , whereas only for in the transient case . here denotes the green function of the simple random walk .the exponential ( and subexponential ) growth rates were analyzed in detail by tauberian theorems .a direct application of theorem [ thm : im ] is so - called intermittency of solutions .one says a spatial system with lyapunov exponents is -intermittent if intermittent systems concentrate on few peaks with extremely high intensity ( see ) .the results above show that as tends to , solutions ( at least for large ) are -intermittent for tending to infinity .this holds since for fixed , the moments are bounded if lies below the critical curve .increasing ( and if necessary ) there is a first such that the lyapunov exponent is positive .intermittency for higher exponents suggests that the effect gets weaker .this is to be expected since for solutions with homogeneous initial conditions are bounded and , hence , solutions do not produce high peaks at all . making this effect more precise ,in particular combined with the effect of proposition [ prop : convlaw ] , is an interesting task for the future .let us conclude with a direct application of the moment bounds . here, we will be concerned with an improved upper bound on the speed of the propagation of the interface of continuous - space symbiotic branching processes which served to some extent as the motivation for this work . to explain this, we need to introduce the notion of the interface of continuous - space symbiotic branching processes introduced in .[ def : ifc ] the interface at time of a solution of the symbiotic branching model with ] there exists a constant and a finite random - time so that almost surely for all .\ ] ] heuristically , due to the scaling property of the symbiotic branching model ( lemma 8 of ) one expects that the interface should move with a square - root speed . indeed , with the help of theorem [ thm : mc ]one can strengthen their result , at least for sufficiently small , to obtain almost square - root speed .[ cor : wavespeed ] suppose is a solution of with and . 
then there is a constant and a finite random - time such that almost surely \ ] ] for all .the restriction to is probably not necessary and only caused by the technique of the proof . though is rather close to , the result is interesting .it shows that sub - linear speed of propagation is not restricted to situations in which solutions are uniformly bounded as they are for . the proof is based on the proof of for linear speed which carries over the proof of for the stepping stone model to nonbounded processes .we are able to strengthen the result by using a better moment bound which is needed to circumvent the lack of uniform boundedness .[ r ] we believe that , at least for , the speed of propagation should be at most , for some suitable constant , that is , for all greater than some , .\ ] ] however , it seems unclear how to obtain such a refinement of theroem [ cor : wavespeed ] based on our moment results and the method of ( resp . , ) .as subexponential bounds of higher moments can not be avoided ( see the proof of the fluctuation term estimate lemma [ la : ma ] ) , our results on the behavior of higher moments show that at present , in light of conjecture [ conj:1 ] , one can only hope for stronger results for very small . to overcome this limitation , new methods need to be employed .the authors think that a possible approach could be based on the scaling property ( lemma 8 of ) and recent results by klenke and oeler .recall that the scaling property states that if is a solution to , then is a solution to ( with suitably transformed initial states ) .in other words , a diffusive time space rescaling leads to the original model with a suitably increased branching rate .klenke and oeler show that , at least for the mutually catalytic model in discrete space , a nontrivial limiting process as exists .this limit is called `` infinite rate mutually catalytic branching process '' ( see also for a further discusion ) . in particular , in corollary 1.2 of they claim that , under suitable assumptions , a nontrivial interface for the limiting process exists , which would in turn predict a square - root speed of propagation in our case .however , to make this approach rigorous is beyond the scope of the present paper .note that our results give only limited information about the shape of the interface .for the case , that is , with locally constant total population size , it is shown in that there exists a unique stationary interface law , which may therefore be interpreted as a `` stationary wave '' whose position fluctuates at the boundaries , according to , like a brownian motion , hence explaining the square - root speed ( note that for both results , suitable bounds on fourth mixed moments are required ) . however , for , the population sizes of the interface are expected to fluctuate significantly and it seems unclear how this affects the shape and speed of the interface , in particular the formation of a `` stationary wave . ''the significance of fourth mixed moments might even lead to a phase - transition in .this gives rise to many interesting open questions .in this section we review the setting and properties of the discrete - space model , whereas for continuous - space we refer to . note that instead of using the state space of tempered functions alternatively we may use a suitable liggett spitzer space .as the results are only presented for the discrete laplacian this does not play a crucial role . 
for a discussion of the mutually catalytic branching model in the liggett spitzer space see . for functions we abbreviate . with the space of pairs of tempered sequencesis defined by the space of continuous paths is denoted by similarly , the space of pairs of rapidly decreasing sequences is defined by and the corresponding path space by weak solutions are defined as in for . in much the same way as for theorems 1.1 and 2.2 of , we obtain existence and the green - function representation . [prop : bp1 ] suppose ( resp . , ) , ] . then , for any , , ={\mathbb{e}}\bigl[(u_0,v_0)^{l_t}e^{\kappa(l_t^=+\varrho l_t^{\neq } ) } \bigr],\ ] ] where the dual process behaves as explained above .note that for homogeneous initial conditions , the first factor in the expectation of the right - hand side equals . in the special case , lemma [ la : mdual ]was already stated in , reproved in and used to analyze the lyapunov exponents of the parabolic anderson model . for , the difficulty of the dual process is based on the two stochastic effects : on the one hand , one has to deal with collision times of random walks which were analyzed in ; additionally , particles have colors either or which change dynamically .[ rdual ] similar dualities hold for and . for continuous - space ,the random walks are replaced by brownian motions and the collision times of the random walks by collision local times of the brownian motions ( see section 4.1 in ) .the simplest case is the nonspatial symbiotic branching model where the particles stay at the same site and local times are replaced by real times ( see theorem 3.2 of or proposition a5 of ) .mytnik introduced a self - duality for the continuous - space mutually catalytic branching model to obtain uniqueness of solutions of the corresponding martingale problem .this can be extended to symbiotic branching models for as shown in proposition 5 of .the discrete - space self - duality for was proved in theorem 2.4 of .we first need more spaces of sequences : and in the sequel , the space and its subspaces will be used for .the duality function for maps to via with this definition the generalized mytnik duality states : [ la : sduality ] for , , and let be a solution of and be a solution of .then the following holds : \\ & & \qquad={\mathbb{e}}^{\tilde{u}_0,\tilde{v}_0 } [ h(u_0+v_0,u_0-v_0,\tilde{u}_t+\tilde v_t,\tilde u_t-\tilde{v}_t ) ] .\end{aligned}\ ] ] analogously , the self - duality relation holds for the nonspatial model with duality function mapping to .in this section we discuss weak longtime convergence of symbiotic branching models and prove proposition [ prop : convlaw ] .we proceed in two steps : first , we prove convergence in law to some limit law following the proof of for .second , to characterize the limit law for the spatial models , we reduce the problem to the nonspatial model . [ prop : wconv ] let and a solution of either or with initial conditions .then , as , the law of converges weakly on to some limit .the proof is only given for the discrete spatial case and the continuous case is completely analogous .let us first recall the strategy of for which can also be applied with the generalized self - duality required here .convergence of in follows from convergence of in . 
using lemma 2.3(c ) of , it suffices to show convergence of ] ( see lemma 2.3(b ) of ) .hence , it suffices to show convergence of ={\mathbb{e}}^{{\mathbf{u}},{\mathbf{v } } } \bigl[e^{-\sqrt { 1-\varrho}\langle u_t+v_t,\phi\rangle+i\sqrt{1+\varrho}\langle u_t - v_t,\psi\rangle } \bigr],\hspace*{-32pt}\ ] ] for all .note that the technical condition of lemma 2.3(c ) of is fullfilled since due to proposition [ prop : bp1 ] =(u+v)\langle { \mathbf{1}},p_t \phi_{-\lambda}\rangle < c<\infty.\ ] ] to ensure convergence of ( [ 456 ] ) we employ the generalized mytnik self - duality of lemma [ la : sduality ] with : \nonumber\\ & & \qquad={\mathbb{e}}^{u_0,v_0 } \bigl[e^{-\sqrt{1-\varrho}\langle u_t+v_t,\tilde { u}_0+\tilde{v}_0\rangle + i\sqrt{1+\varrho}\langle u_t - v_t,\tilde{u}_0-\tilde{v}_0\rangle } \bigr ] \nonumber\\[-8pt]\\[-8pt ] & & \qquad={\mathbb{e}}^{\tilde{u}_0,\tilde{v}_0 } \bigl[e^{-\sqrt{1-\varrho}\langle u_0+v_0 , \tilde{u}_t+\tilde{v}_t\rangle+i\sqrt{1+\varrho}\langle u_0-v_0 , \tilde{u}_t-\tilde{v}_t\rangle } \bigr]\nonumber\\ & & \qquad={\mathbb{e}}^{\tilde{u}_0,\tilde{v}_0 } \bigl[e^{-\sqrt{1-\varrho}(u+v)\langle \mathbf{1 } , \tilde{u}_t+\tilde{v}_t\rangle+i\sqrt{1+\varrho}(u - v)\langle \mathbf{1 } , \tilde{u}_t-\tilde{v}_t\rangle } \bigr].\nonumber\end{aligned}\ ] ] by assumption , have compact support and hence by proposition [ prop : tmmart ] the total - mass processes and are nonnegative martingales . by the martingale convergence theorem and converge almost surely to finite limits denoted by , .finally , the dominated convergence theorem implies convergence of the right - hand side of ( [ eq : dualconv ] ) to .\ ] ] combining the above , we have proved convergence of ,\ ] ] which ensures weak convergence of in to some limit which is uniquely determined by ( [ eq : elimit ] ) .again , as in remark [ all ] , the previous proposition can be proved for nondeterministic initial conditions as in .the rest of this section is devoted to identifying the limit in the recurrent case . before completing the proof of theorem [ prop : convlaw ]we discuss a version of knight s extension of the dubins schwarz theorem ( see , 3.4.16 ) for nonorthogonal continuous local martingales .[ la : eds ] let and be continuous local martingales with almost surely .assume further that , for , = [ n_{\cdot},n_{\cdot}]_t \quad\mbox{and}\quad [ m_{\cdot},n_{\cdot}]_t=\varrho [ m_{\cdot},m_{\cdot}]_t\qquad \mbox{a.s.},\ ] ] where ] a.s . , then is a pair of brownian motions with covariances =\varrho t ] the situation becomes slightly more delicate but one can use a local version of lemma [ la : eds ] . indeed , define , for , where the time - change is given in ( [ eq : timeshift ] ) and define analogously for ( recall that = [ n_{\cdot},n_{\cdot}]_t ] .thus , by lemma [ la : eds ] , reasoning as in ( [ eq : invtime ] ) , , where are brownian motions started in , with covariance =\varrho t ] . as argued in the proof of proposition [ prop : sconv ] , is a nonnegative martingale and due to the same arguments satisfies {t}^{p/2 } ] \leq e^{1,1}[\tau^{p/2}]<\infty ] .using fatou s lemma and almost sure convergence of to , the proof for the nonspatial case is finished with \geq{\mathbb{e}}^{1,1}[u_{\infty } ^p]=e^{1,1}[(b_{\tau})^p]=\infty.\ ] ] again , this lower bound is independent of ._ step _ 2 .the proof for is started by reducing the moments for homogeneous initial conditions to finite initial conditions . 
indeed , employing lemma [ la : sduality ] with , where denotes the indicator function of site , gives & = & { \mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } \bigl[e^{-\sqrt{1-\varrho}\langle u_{t}+v_{t},\phi + \psi \rangle } \bigr]\\ & = & { \mathbb{e}}^{\phi,\psi } \bigl[e^{-\sqrt{1-\varrho}\langle{\mathbf{1}}+{\mathbf{1}},\tilde { u}_{t}+\tilde{v}_{t}\rangle } \bigr]\\ & = & { \mathbb{e}}^{{\mathbf{1}}_{k},{\mathbf{1}}_{k } } \bigl[e^{-\sqrt{1-\varrho}\theta\langle{\mathbf{1 } } , \tilde { u}_{t}+\tilde{v}_{t}\rangle } \bigr],\end{aligned}\ ] ] where we used the argument of remark [ shift ] .note that , due to our choice of initial conditions , the complex part of the self - duality vanishes . since the above is a laplace transform identity , we have and hence = { \mathbb{e}}^{\mathbf{1}_k,\mathbf{1}_k } [ ( \langle{\mathbf{1 } } , \tilde{u}_t \rangle+ \langle{\mathbf{1 } } , \tilde{v}_t \rangle ) ^p ] .\ ] ] we are now prepared to finish the proof of the theorem for the discrete case . `` . ''suppose .let , which due to lemma [ prop : tmmart ] is a square - integrable martingale with quadratic variation = [ \langle{\mathbf{1 } } , \tilde{u}_{\cdot}\rangle]_t+ [ \langle{\mathbf{1 } } , \tilde{v}_{\cdot}\rangle]_t+2 [ \langle{\mathbf{1 } } , \tilde { u}_{\cdot } \rangle,\langle{\mathbf{1 } } , \tilde{v}_{\cdot } \rangle]_t=(2 + 2\varrho ) [ \langle{\mathbf{1 } } , \tilde{u}_{\cdot } \rangle]_t.\ ] ] to apply the burkholder gundy inequality , we switch again from to , which is a martingale null at zero. hence , ={\mathbb{e}}^{\mathbf{1}_k , \mathbf{1}_k } [ ( \bar{m}_t+m_0)^p ] \leq c_p+c_p{\mathbb{e}}^{\mathbf{1}_k , \mathbf{1}_k } [ \bar{m}_t^p ] .\ ] ] then we get from ( [ 23 ] ) and the burkholder davis gundy inequality & \leq & c_p+c_p{\mathbb{e}}^{\mathbf{1}_k , \mathbf{1}_k } [ \bar{m}_t^p ] \\ &\leq & c_p+c_p{\mathbb{e}}^{\mathbf{1}_k,\mathbf{1}_k } \bigl [ \sup_{0\leq s\leq t } \bar m_s^p \bigr]\\ & \leq & c_p+c'_p { \mathbb{e}}^{\mathbf{1}_k , \mathbf{1}_k } [ [ \bar m_{\cdot } ] _ t^{p/2 } ] \\ & = & c_p+ c'_p(2 + 2\varrho)^{p/2 } { \mathbb{e}}^{\mathbf{1}_k,\mathbf{1}_k } [ [ \langle { \mathbf{1 } } , \tilde{u}_{\cdot } \rangle]_t^{p/2 } ] \end{aligned}\ ] ] for some constants independent of and . 
as in the proof of theorem [ prop : convlaw ] , the random time change which makes the pair of total masses a pair of correlated brownian motions is bounded by , that is , \leq\tau ] diverges .equation ( [ 23 ] ) now shows that ] as can be seen as follows : & \leq&{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}\bigl[(2u_t(k))^p { \mathbf{1}}_{\{u_t(k)\geq v_t(k)\}}\bigr]\\ & & { } + { \mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}\bigl[(2v_t(k))^p { \mathbf{1}}_{\{u_t(k)<v_t(k)\}}\bigr]\\ & \leq&2^p{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}[u_t(k)^p]+ 2^p{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}[v_t(k)^p]\\ & = & 2^{p+1}{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}[u_t(k)^p],\end{aligned}\ ] ] where we used lemma [ la : mdual ] to see that ={\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}[v_t(k)^p] ] independently of and .this can be done as before : and are random time - changed correlated brownian motions with initial conditions for all .using , as before , the auxiliary martingale we obtain ( as in the discrete case ) with the help of the burkholder davis gundy inequality & = & \lim_{\varepsilon\rightarrow0}{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ \langle u_t+v_t , p_{\varepsilon}\rangle^{p } ] \\ & = & \lim_{\varepsilon\rightarrow0 } { \mathbb{e}}^{p_{\varepsilon},p_{\varepsilon } } [ \langle{\mathbf{1}},\tilde { u}_t+\tilde{v}_t \rangle^p ] \\ & \leq & c_p+c_p\lim_{\varepsilon\rightarrow0 } { \mathbb{e}}^ { p_{\varepsilon},p_{\varepsilon } } [ \bar{m}_t^p ]\\ & \leq & c_p+c'_p\lim_{\varepsilon\rightarrow0 } { \mathbb{e}}^{p_{\varepsilon } , p_{\varepsilon } } [ [ \bar{m}_{\cdot } ] ^{p/2}_{t } ] \\ & \leq & c_p+ c'_p(2 + 2\varrho)^{p/2}e^{1,1}[\tau^{p/2}].\end{aligned}\ ] ] the positive constants are independent of and , whereas and the random time change ] holds for all and since . for right - hand side is finite by theorem [ thm : theo2 ] and independent of . since \leq{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ ( u_t(x)+v_t(x))^p ] ] is independent of recurrence / transience .we now study the `` criticality '' of the critical curve in more detail . as a preliminary result ( mixed ) moments of the nonspatial model are analyzed .the idea is to combine three different techniques : the martingale argument which led to theorem [ thm : mc ] for ] , and finally moment equations which yield exponential increase / decrease for all mixed moments ] grows to a finite constant if , * * ] grows exponentially fast if . 
* for all , and : * * ] neither grows exponentially fast nor decreases exponentially fast if , * * ] for strictly smaller than , this is bounded for all and .hence , for all mixed moments decrease exponentially fast proving the first part of ( 2 ) .note that since is a submartingale , the moment ] at the critical point .hence , the second part of ( 1 ) is proven and combined with ( [ df ] ) so is the upper bound of the second part of ( 2 ) ._ step _ 3 .a direct application of it s lemma and fubini s theorem yields =1+\kappa\pmatrix{n\cr2}\int_0^t{\mathbb{e}}^{1,1}[u_s^{n-1}v_s ] \,ds.\ ] ] since we already know from the martingale arguments that ] can not decrease exponentially fast proving the lower bound of part two of ( 2 ) .furthermore , with the same arguments as above , for , this leads to ={\mathbb{e}}\bigl[e^{\kappa(l_t^=+\varrho(n ) l_t^{\neq } ) } e^{\kappa(\varrho-\varrho(n))l_t^{\neq } } \bigr]\geq{\mathbb{e}}\bigl[e^{\kappa ( l_t^=+\varrho(n ) l_t^{\neq } ) } \bigr]e^{\kappa(\varrho-\varrho(n))t}.\ ] ] since the first factor of the right - hand side equals ] grows exponentially fast in , this implies exponential growth of ] , where the dual process starts with particles of the same color all placed at site . by the tower property and the strong markov property ,we obtain ={\mathbb{e}}^{n_0 } \bigl[e^{\kappa(l_{t}^=+\varrho l_{t}^{\neq})}{\mathbb{e}}^{n_t } \bigl[e^{\kappa(l_{s}^=+\varrho l_{s}^{\neq } ) } \bigr ] \bigr].\ ] ] we are done if we can show that \leq{\mathbb{e}}^{n_0 } \bigl[e^{\kappa(l_{s}^=+\varrho l_{s}^{\neq } ) } \bigr]\ ] ] for any given initial configuration of the dual process consisting of particles .the general initial conditions of the dual process consist of particles of one color and particles of the other color ( ) distributed arbitrarily in space at positions . using the duality relation of lemma [ la : mdual ] , we obtain &=&{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1}}}[u_s(k_1 ) \cdot\cdot\cdot u_s(k_{n^1})v_s(k_{n^1 + 1})\cdot\cdot\cdot v_s(k_{n^1+n^2})]\\ & \leq&{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ u_s(k)^{n } ] = { \mathbb{e}}^{n_0 } \bigl[e^{\kappa ( l_{s}^=+\varrho l_{s}^{\neq } ) } \bigr],\end{aligned}\ ] ] where , in the penultimate step , we have used the generalized hlder inequality . having established existence of the lyapunov exponents, we now turn to the more interesting question of positivity .the boundedness for in theorem [ thm : mc ] immediately implies that in this case .now suppose , that is , lies on critical curve .we use the perturbation argument which we already used for the nonspatial case combined with lemma [ la : mdual ] and theorem [ thm : mc ] to prove that in this case moments only grow subexponentially fast .this implies that the lyapunov exponents are zero .again we switch from ] , where the dual process is started with all particles at the same site and the same color .since moments below the critical curve are bounded , we can proceed as for the nonspatial model . 
for any , we get \geq{\mathbb{e}}\bigl[e^{\kappa(l_t^=+\varrho l_t^{\neq } ) } \bigr]e^{-\kappa \varepsilon{n\choose2}t } \\ & \geq&{\mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ u_t(k)^n ] e^{-\kappa\varepsilon{n\choose 2}t},\end{aligned}\ ] ] where we estimated the collision time of particles of different colors by the collision time of all particles which is bounded from above by .since on the right - hand side is arbitrary , can not be positive .finally , we assume .the idea is to reduce the problem to the nonspatial case which we already discussed in proposition [ 0 ] . actually , we prove more than stated in the theorem since we also show that mixed moments d=1 d=2 d\geq3 ] .we now use hlder s inequality and theorem [ thm : mc ] to reduce the mixed moment to the first moment : \\ & & \qquad={\mathbb{e}}^{1_{{\mathbb{r}}^-},1_{{\mathbb{r}}^+ } } [ u_t(x)^{1/2}u_t(x)^{n-1/2}v_t(x)^n ] \\ & & \qquad\leq({\mathbb{e}}^{1_{{\mathbb{r}}^-},1_{{\mathbb{r}}^+ } } [ u_t(x ) ] ) ^{1/2 } ( { \mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ u_t(x)^{4n-1 } ] ) ^{({2n-1})/({8n-2})}\\ & & \qquad\quad{}\times ( { \mathbb{e}}^{{\mathbf{1}},{\mathbf{1 } } } [ v_t(x)^{4n-1 } ] ) ^{{n}/({4n-1})}.\end{aligned}\ ] ] this follows from the generalized hlder inequality with exponents and .the first factor yields the heat flow and theorem [ thm : mc ] shows that the latter two factors are bounded by constants for .we now strengthen the estimate of lemma 23 of of the stochastic part of the convolution representation of solutions of corollary 20 of . [la : ma ] for there is a constant such that for , , the following estimate holds : the proof is along the same lines of replacing only in ( 116 ) the weaker ( exponentially growing ) moment bound of by our stronger ( bounded ) moment bound . in the following we sketch the arguments to show where the moments appear . before performingthe `` dyadic grid technique , '' increments of need to be estimated .first , by definition \\ & & \qquad={\mathbb{e}}^{1_{{\mathbb{r}}^-},1_{{\mathbb{r}}^+ } } \biggl [ \biggl|\int_0^t\int_{{\mathbb{r}}}\bigl(p_{t - s}(b - a)-p_{t'-s}(b - a')\bigr)m(ds , db ) \biggr|^{2q } \biggr],\end{aligned}\ ] ] which by burkholder davis gundy and hlder s inequality gives the upper bound ^ 2 \,db \,ds \biggr|^{q-1}\\ & & \qquad{}\times\int_0^t\int_{{\mathbb{r}}}[p_{t - s}(b - a)-p_{t'-s}(b - a')]^2 { \mathbb{e}}^{1_{{\mathbb{r}}^-},1_{{\mathbb{r}}^+ } } [ ( u_s(b)v_s(b))^q ] \,db \,ds.\end{aligned}\ ] ] using lemma [ la : m ] and classical heat kernel estimates we can derive ( see the calculation on pages 153 , 154 of ) the upper bound \\ & & \qquad\leq c_2\bigl((|t'-t|^{1/2}+|a'-a|)\wedge t^{1/2}\bigr)^{q-1 } \bigl(\sqrt { tp_t1_{{\mathbb{r}}^-}(a)}+\sqrt{t'p_{t'}1_{{\mathbb{r}}^-}(a ' ) } \bigr).\end{aligned}\ ] ] this upper bound corresponds to ( 119 ) of where they have an additional exponentially growing factor coming from their moment bound .the dyadic grid technique can now be carried out as in , choosing , without carrying along their exponential factor .hence , we may delete the exponential term from their final estimate ( 110 ) .note that the necessity of comes from our choice and lemma [ la : m ] .the following lemma corresponds to proposition 24 of . 
if then , for some constants , the following estimate holds for and : all we need to do is to argue that proposition 24 of is valid for instead of .we perform the same decomposition and note that the estimates of step 2 of are already given for if is large enough .the only trouble occurs in their step 3 . up to the estimate ( 154 ), this step works for but here their ( weaker ) lemma 23 produces an exponential in .more precisely , they need to justify which is only valid for . as our lemma[ la : ma ] avoids the exponential on the left - hand side the estimate holds for with suitably chosen and .the significant distinction of the previous lemma to the result of is that the inequality is not only valid for but for . at this pointone might hope to obtain a square - root upper bound for the growth of the interface but this fails in the final step in which we validate ( [ abc ] ) : which is finite for large enough .this work is part of the ph.d .thesis of the second author who would like to thank the students and faculty from tu berlin for many discussions .the authors would like to express their gratitude to an anonymous referee for a very careful reading of the manuscript and for pointing out and correcting an error in an earlier version of theorem [ cor : wavespeed ] .
In this paper we introduce a critical curve separating the asymptotic behavior of the moments of the symbiotic branching model, introduced by Etheridge and Fleischmann [_Stochastic Process. Appl._ *114* (2004) 127-160], into two regimes. Using arguments based on two different dualities and a classical result of Spitzer [_Trans. Amer. Math. Soc._ *87* (1958) 187-197] on the exit time of a planar Brownian motion from a wedge, we prove that the correlation parameter governing the model provides regimes of bounded and exponentially growing moments separated by subexponential growth. The moments turn out to be closely linked to the limiting distribution as time tends to infinity. The limiting distribution can be derived by a self-duality argument extending a result of Dawson and Perkins [_Ann. Probab._ *26* (1998) 1088-1138] for the mutually catalytic branching model. As an application, we show how a moment bound improves the result of Etheridge and Fleischmann [_Stochastic Process. Appl._ *114* (2004) 127-160] on the speed of the propagation of the interface of the symbiotic branching model.
correlation functions are important structural descriptors that arise in a variety of disciplines such as solid state physics , signal processing , computer vision , statistical physics , geostatistics , and materials science .many techniques for structural characterization of materials over a wide range of length scales provide data in the form of correlation functions including , but not limited to , scattering methods .other examples are absorption spectroscopy , energy transfer analysis , nuclear magnetic resonance , and also grey - scale image analysis . moreover , in the case of _ in situ _ studies with a nanometer resolution , correlation functions are often the only data available experimentally . despite the widespread use of correlation functions ,the nature of the structural information they contain remains an open area of active research .the central question of the present paper can be put as follows : how accurately is a microstructure known when the only data available is a two - point correlation function ?we shall focus our analysis on two - phase microstructures and the two - point correlation function , which is defined to be the probability that two random points at a distance from one another both belong to the same phase .two - point statistics are not sufficient to determine unambiguously a microstructure .the specification of a given two - point function is equivalent to defining a macrostate of the system , the degeneracy of which can be expressed as a configurational entropy . in particular , all space transformations that preserve distances - translation , rigid rotation and inversion - lead to microstructures with identical two - point statistics . following previous work, we will call such degeneracies _ trivial _ .the focus of the present paper is on _ non - trivially _ degenerate microstructures , which can not be obtained from each other through any of the aforementioned trivial transformations .examples of non - trivially degenerate microstructures are poisson polyhedra tesselations of three - dimensional space and debye random media , which although having distinct microstructures , have identical .non - trivial degeneracy is not limited to infinite systems .examples of finite point patterns having the same two - point statistics have been given as early as 1939 , and patterson coined the word `` homometric '' to qualify them .very recently , general equations have been derived that can in principle be solved analytically to obtain homometric microstructures . in the context of crystallography ,the degeneracy of the structural information contained in correlation functions is referred to as the _ phase problem_. the phase problem , however , is not universally applicable .a spectacular counterexample is the so - called direct method of crystallography , for which hauptman and karle received the 1985 nobel prize for chemistry . 
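Since the radial two-point function is the central quantity in what follows, a short self-contained sketch of how it can be computed for a digitized two-phase image may be useful. The function below evaluates the probability that two pixels at distance r are both black (the phase-specific version used in the remainder of the paper) for a periodic binary image, by taking the autocorrelation with FFTs and then averaging over all lag vectors whose length rounds to the same pixel distance. The rounding convention and the helper name `radial_s2` are choices made here for illustration, not prescriptions from the text.

```python
import numpy as np

def radial_s2(img):
    """Radially averaged two-point probability S2(r) of a periodic binary image."""
    f = np.fft.fftn(img.astype(float))
    auto = np.fft.ifftn(f * np.conj(f)).real / img.size        # P(both black | lag vector)
    # distances compatible with the periodic grid
    ax = [np.minimum(np.arange(s), s - np.arange(s)) for s in img.shape]
    r = np.round(np.sqrt(ax[0][:, None]**2 + ax[1][None, :]**2)).astype(int)
    return np.bincount(r.ravel(), weights=auto.ravel()) / np.bincount(r.ravel())

# example: a single disk in a periodic box
L, R = 128, 12
y, x = np.ogrid[:L, :L]
disk = (x - L // 2)**2 + (y - L // 2)**2 <= R**2
s2 = radial_s2(disk)
print("volume fraction:", disk.mean(), "  S2(0):", s2[0])      # S2(0) equals the volume fraction
```

The value at the origin reproduces the volume fraction of the black phase, and the initial decay of S2 encodes the amount of interface, two facts that are used repeatedly below.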
in the field of computer vision, it has been shown that finite textures are completely characterized by their orientation - dependent correlation functions .many theoretical examples of microstructures with a low degeneracy can be accurately reconstructed from their correlation function alone .all these examples have in common that they incorporate orientation information .the focus of the present work is on radial correlation functions in which orientation information is averaged out .this simplification is relevant to many experimental contexts , notably small - angle scattering , where the correlation function is generally rotationally averaged through the measurement of powder scattering patterns , as well as to isotropic disordered systems in general .the understanding of the structural information in radial correlation functions has been considerably advanced through the use of reconstruction algorithms , which aim at producing microstructures with a specified correlation function via the minimization of a prescribed energy functional . in the case of a reconstruction based on two - point correlation functions ,a natural choice for the energy functional is ^ 2 \ , \ ] ] where is the target two - point correlation function , is the correlation function of the microstructure , i.e. , the configuration being optimized , and the sum is over all measurable distances .this definition of the energy is equivalent to a norm-2 error : it is non - negative and it vanishes only for those configurations that satisfy . in this context , the question of the degeneracy associated with a given correlation function is equivalent to determining the number of microstructures having zero energy , i.e. , the _ ground - state degeneracy _ of the energy functional .the minimization of eq .( [ eq : definition_e ] ) is generally done by discretizing the microstructure on a grid with periodic boundary conditions , and by using either a steepest descent or a simulated annealing algorithm . in the case of a two - phase microstructure , which can be thought of as an image with black and white pixels , the simulated annealing reconstruction proceeds as follows .starting from any configuration , with value of the energy functional eq .( [ eq : definition_e ] ) , a black pixel is chosen randomly and moved to any available white position .the function is updated and the new energy is calculated .the move is accepted with probability where is a temperature parameter .all energy - decreasing moves are therefore accepted but some energy - increasing moves are accepted as well , depending on the chosen temperature .the latter moves are necessary to ensure that the entire configuration space be explored in principle , and that the system is not trapped in a local minimum of .simulated annealing algorithms consist in starting at a high temperature , and progressively decreasing the temperature until convergence is reached ( ) .this type of approach has been widely used for microstructure reconstruction , in the context of both applications and theoretical investigations .the latter include generalizations to other types of statistical microstructure descriptors besides , most notably to higher - order correlation functions as well as to cluster correlation functions . 
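A compact, self-contained version of such a reconstruction might look as follows. It anneals a random configuration with the pixel-moving Metropolis rule described above, using the squared difference between radial two-point functions as the energy. The grid size, the target (a single disk), the temperature schedule and the number of moves are ad hoc choices for illustration, and recomputing the full correlation function after every trial move is the simplest, not the fastest, way of updating the energy.

```python
import numpy as np

rng = np.random.default_rng(3)

def radial_s2(img):
    """Radially averaged two-point function of a periodic binary image (FFT autocorrelation)."""
    f = np.fft.fftn(img.astype(float))
    auto = np.fft.ifftn(f * np.conj(f)).real / img.size
    ax = [np.minimum(np.arange(s), s - np.arange(s)) for s in img.shape]
    r = np.round(np.sqrt(ax[0][:, None]**2 + ax[1][None, :]**2)).astype(int)
    return np.bincount(r.ravel(), weights=auto.ravel()) / np.bincount(r.ravel())

def energy(img, target_s2):
    return np.sum((radial_s2(img) - target_s2)**2)

# target: a single disk in a 32x32 periodic box
L, R = 32, 6
y, x = np.ogrid[:L, :L]
target_img = (x - L // 2)**2 + (y - L // 2)**2 <= R**2
target_s2 = radial_s2(target_img)

# start from a random configuration with the same number of black pixels
img = np.zeros(L * L, dtype=bool)
img[rng.choice(L * L, target_img.sum(), replace=False)] = True
img = img.reshape(L, L)

e = energy(img, target_s2)
temp = 1e-4
for sweep in range(300):
    for _ in range(100):
        blacks = np.flatnonzero(img)
        whites = np.flatnonzero(~img)
        i, j = rng.choice(blacks), rng.choice(whites)
        trial = img.copy()
        trial.flat[i], trial.flat[j] = False, True              # move one black pixel to a white site
        e_new = energy(trial, target_s2)
        if e_new <= e or rng.random() < np.exp((e - e_new) / temp):
            img, e = trial, e_new
    temp *= 0.95                                                # simple geometric annealing schedule
print("final energy:", e)
```

The energy should drop by orders of magnitude during the run; whether the final configuration is a translated copy of the disk or merely a configuration whose correlation function closely matches the target depends on the schedule and, more fundamentally, on the degeneracy question studied in the next sections.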
) and reconstructed ( ) correlation functions are indistinguishable on the scale of the figure .the size of the grid is pixels with .these examples strongly suggest that the two - point function of a single sphere under periodic boundary conditions is only trivially degenerate through translation , but that the two - point degeneracy of a poisson point process has a large non - trivial contribution.,width=264 ] examples of reconstructions of two - phase microstructures under periodic boundary conditions are given in fig .[ fig : rec_examples ] . in the case of the single disk ,the reconstructed microstructure is almost identical to the target , except for a translation ( top portion of fig .[ fig : rec_examples ] ) . in the case of the reconstruction of the hard disks ( middle portion of fig .[ fig : rec_examples ] ) , the characteristic size of the disks as well as the average distance between them is recovered .however , an exact reconstruction of the target configuration is not possible ; spurious objects are also formed through the partial merging of neighboring disks . in the case of a realization of a poisson point process( randomly coloring a pixel black according to a prescribed volume fraction ) , the reconstructed and the target microstructures might look superficially similar because they both appear to be random distributions of black pixels ( bottom portion of fig .[ fig : rec_examples ] ) .however , the two microstructures have very little in common if one is interested in the exact configurations of the pixels , although an excellent match is obtained between and .this illustrates the concept of non - trivial degeneracy .in a recent letter , we presented a general theoretical framework for estimating quantitatively the structural degeneracy corresponding to any specified correlation function .this was achieved by mapping the problem to the estimation of a ground - state degeneracy through the use of eq .( [ eq : definition_e ] ) . herewe provide a more comprehensive presentation of the methodology and analyses , including a quantitative characterization of the energy landscape associated with the reconstruction as well as a detailed derivation of the degeneracy metric .moreover , we show that our results can be expressed in terms of the _ information content _ of the two - point correlation functions .although the present work focuses on two - dimensional media in euclidean space , our procedure can be applied in any space dimension and generalized to non - euclidean spaces ( e.g. , compact and hyperbolic spaces ) .the remainder of the paper is organized as follows . in sec .ii , we discuss the degeneracy of the two - point statistics for a variety of microstructures that are used as benchmarks throughout the rest of the paper .we consider successively small systems - for which all the configurations can be enumerated - intermediate systems - for which the degeneracy can be determined via a monte carlo method we presented elsewhere - and large systems for which neither of the aforementioned two methods apply and one needs to use the reconstruction method . in sec .iii , we devise an analytical method , based on a random walk in configuration space , to characterize the energy landscape associated with reconstruction .in particular , we determine a characteristic energy profile for the basin of each ground state . 
in sec .iv , we show that the ground - state degeneracy of reconstruction problems is related to the roughness of the energy landscape .we introduce a roughness metric that can be calculated from alone , and we show definitively that it is correlated with the microstructure degeneracy . in sec .v , the degeneracy is expressed in terms of the _ information content _ of , and a formula is proposed relating the roughness metric to this information content .the practical usefulness of our results is discussed .the present paper is restricted to two - phase digitized microstructures , which can be thought of as images with black pixels and white pixels .however , our analysis can be easily generalized to multiphase microstructures .we shall first consider the very small microstructures of fig .[ fig : degenerate ] with .they will be analyzed in some detail and will serve as a benchmark for analytical methods applicable to larger and more complex microstructures . , , , , , with the total number of pixels in the grid ( see text ) .systems c to e have a non - trivial contribution to their degeneracy.,width=283 ] for any finite microstructure it is always possible to refer to the pixels through a linear index , to , independently of the actual dimensionality .a finite microstructure is therefore completely characterized by a -dimensional vector , with components equal to when point is a black pixel , and otherwise .the two - point correlation function of the black phase is defined as the probability that two random pixels at a distance from one another are both black .this can be written formally as where takes the value if the distance between pixels and is , and otherwise .the quantity is defined as . in eq .( [ eq : definition_p ] ) the double sum counts the pairs of black pixels separated by a distance , and the pre - factor normalizes that count by the total number of pixel pairs at a distance from one another .the periodic boundary conditions are incorporated in the definition of the operator .we assume in the rest of the paper that the discretizing grid is uniform in the sense that is independent of .the use of a discrete pixel grid is equivalent to a quantizer " problem , in which every point of the microstructure is quantized to the centroid of its closest pixel .the distances between pairs of points are therefore approximated by distances that are compatible with the grid .a square grid is used throughout the present paper . for finite - size systems ,the quantization naturally introduces some grid - specific artifacts .however , the quantization error decreases and becomes zero in the limit of infinitely large microstructures .the two - point correlation functions of the microstructures of fig .[ fig : degenerate ] are given in table [ tab1 ] under the form .the quantity is equal to the number of pairs of points at distance from one another .note that although configurations and are different , they have identical two - point characteristics .the same applies to and , as well as to and .a complete enumeration of all microstructures with shows that there is no other configuration with the same .lccccccccc & 1 & & 2 & & & 3 & & & + & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 2 & 0 & 1 & 0 & 4 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 1 & 0 & 2 & 0 & 0 & 1 & 1 & 0 & 2 & 1 & 0 & 1 & 0 & 0 configuration in fig .[ fig : degenerate ] is uniquely defined by its two - point function , and therefore is only trivially degenerate . 
on grids with pointsthe total number of translations is ; the number of rotations is , or , depending on the rotational symmetry of the configuration ; and the number of mirror configurations is or , depending on its chirality . due to the symmetry and chirality of configuration a, only translation contributes to its degeneracy , which is therefore . in the case of configuration , the two possible orientations contribute an extra factor 2 , i.e. . configurations and are the `` kite & trapezoid '' examples discussed in refs . , which are non - trivially degenerate . in this case , , where the factor 2 is the non - trivial contribution , and the factor 4 accounts for the possible orientations .configurations and are also non - trivially degenerate .configuration is , however , chiral so it has to be counted twice .this leads to .finally , non - trivially degenerate configurations and are both chiral .this leads to .the complete enumeration of degenerate microstructures is intractable for systems even barely larger than those represented in fig . [fig : degenerate ] . in the present section ,we discuss a monte carlo ( mc ) algorithm for estimating , which we introduced previously .it can be applied to larger systems .the approach is based on a general mc algorithm for estimating the density of states ( dos ) developed by wang and landau and further improved and analyzed by others .the algorithm has been applied to a host of problems in condensed matter physics , in biophysics , and in logic .the dos is defined as the number of states having energy equal to .the logarithm of is equal to the entropy calculated in the microcanonical ensemble associated with eq .( [ eq : definition_e ] ) .the ground - state degeneracy is the value taken by for .a canonical monte carlo simulation with transition probability given by eq . ( [ eq : metropolis ] ) leads the system to visit any energy with a probability .the algorithm of wang and landau is based on the observation that a transition probability of the form would lead the system to visit all energies with equal probability .the density of states is , however , unknown so the algorithm is iterative .the starting value is set to for all energies , and the system is let evolve according to eq .( [ eq : wang_landau ] ) , while updating an histogram .each time an energy is visited the corresponding bin is updated , , and the estimated density of states is updated according to where is a numerical factor larger than 1 .the evolution continues according to eq .( [ eq : wang_landau ] ) with the updated value of .the evolution is stopped when a flat histogram is obtained . at this point , the histogram is reset to , is reduced to a value closer to 1 , and the evolution starts over again .the entire procedure is repeated until becomes lower than a prescribed accuracy .algorithmic details are provided in the supplementary material .the accuracy of the mc algorithm was tested by applying it to the microstructures of fig .[ fig : degenerate ] .the results are plotted in fig .[ fig : dos_4points ] in the form of cumulative dos the mc algorithm provides only to within an unknown multiplicative constant , which is determined by imposing to be equal to the total number of configurations .the latter is equal to the number of different ways in which black pixels can be chosen among a total of possible pixels , i.e. the cumulative dos plotted in fig .[ fig : dos_4points ] satisfies and . 
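For readers who wish to experiment with the procedure just described, here is a minimal Wang-Landau sketch for the same toy setting (four black pixels on a 4x4 periodic grid). To keep the energy histogram exact it uses integer pair-distance counts rather than normalized probabilities, which only rescales the energy; the flatness criterion, the refinement schedule and the stopping threshold are simplified stand-ins for the choices detailed in the supplementary material of the original work. The last lines normalize the estimated density of states so that it sums to the total number of configurations and read off a (stochastic) estimate of the ground-state degeneracy, which can be cross-checked against the exhaustive enumeration above.

```python
import numpy as np
from itertools import combinations
from math import comb, log, exp

rng = np.random.default_rng(4)
n, n1 = 4, 4
sites = [(i, j) for i in range(n) for j in range(n)]

def pair_counts(pixels):
    """Integer pair counts keyed by squared periodic distance (proportional to the two-point function)."""
    c = {}
    for (i1, j1), (i2, j2) in combinations(pixels, 2):
        di = min(abs(i1 - i2), n - abs(i1 - i2))
        dj = min(abs(j1 - j2), n - abs(j1 - j2))
        c[di * di + dj * dj] = c.get(di * di + dj * dj, 0) + 1
    return c

def energy(pix, target):
    pc = pair_counts(pix)
    return sum((pc.get(k, 0) - target.get(k, 0))**2 for k in set(pc) | set(target))

target = pair_counts([(0, 0), (0, 1), (1, 0), (2, 2)])        # pair counts of an arbitrary 4-pixel target

state = list(rng.choice(len(sites), size=n1, replace=False))
e = energy([sites[k] for k in state], target)
ln_g, hist, ln_f = {}, {}, 1.0
while ln_f > 1e-3:                                            # refinement loop
    for _ in range(10000):
        trial = state.copy()
        free = [k for k in range(len(sites)) if k not in state]
        trial[rng.integers(n1)] = free[rng.integers(len(free))]    # move one black pixel
        e_new = energy([sites[k] for k in trial], target)
        # Wang-Landau acceptance: min(1, g(E_old)/g(E_new))
        if log(rng.random()) < ln_g.get(e, 0.0) - ln_g.get(e_new, 0.0):
            state, e = trial, e_new
        ln_g[e] = ln_g.get(e, 0.0) + ln_f                     # update the running estimate of ln g(E)
        hist[e] = hist.get(e, 0) + 1
    if min(hist.values()) > 0.8 * np.mean(list(hist.values())):   # crude flat-histogram test
        hist, ln_f = {}, ln_f / 2.0                           # reset the histogram and refine f

# normalize so that the density of states sums to C(16, 4) = 1820
ln_norm = log(comb(len(sites), n1)) - np.logaddexp.reduce(list(ln_g.values()))
print("estimated ground-state degeneracy:", exp(ln_g.get(0, float('-inf')) + ln_norm))
```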
) , c ( ) , and e ( ) of fig .[ fig : degenerate].,width=264 ] three independent mc estimations have been calculated for each microstructure in fig .[ fig : degenerate ] , yielding three independent estimates of .the results are : for configuration a compared to the exact value ; for configuration b compared to ; for configuration c compared to ; for configuration d compared to ; and for configuration compared to . the exact values are those calculated in sec .[ sec : small ] with .the agreement with the mc estimates is excellent .figure [ fig : dos_13points ] shows mc estimates of the density of states for larger microstructures , with on a grid .the microstructures are qualitatively similar to those of fig .[ fig : rec_examples ] , namely a single disk , hard disks , and a poisson point process , all under periodic boundary conditions . in the case of a single disk ,the mc estimation provides the value , corresponding to the possible translations .this confirms that the disk is only trivially degenerate .by contrast , the value found for the poisson point process is , which is orders of magnitude larger than any possible trivial contribution from translation and rotation . in the case of the hard disks , we find . in the case of the disk ( ) , hard disks ( ) , and the poisson point process().,width=264 ] the mc algorithm does not converge for systems larger than about pixels . with larger systemsthe criterion for flat histograms is rarely reached , even with as many as mc steps . moreover , when flat histograms are indeed obtained , the estimated value of is much smaller than 1 , which shows that the algorithm explores only a small fraction of the complete configuration space .these numerical difficulties are consistent with previous observations that flat - histogram algorithms have a convergence time that increases exponentially with system size .it is therefore difficult to estimate the -degeneracy of systems as large as the one shown in fig .[ fig : rec_examples ] , except in the particular case where the microstructure is only trivially degenerate .it has to be stressed that reconstructing exactly a degenerate microstructure is very unlikely .therefore , whenever a reconstruction leads to a translated , rotated , and inverterted version of the target , this can be considered as very strong evidence that the microstructure is only trivially degenerate . in the remainder of the paper , we shall refer to a microstructure as being _ non - degenerate _, whenever it has only a trivial degeneracy . in continuous euclidean space under periodic boundary conditions , an example of non - degenerate microstructureis provided by the single sphere ( composed of a large number of pixels ) .this results from the observation that is equal to volume fraction of the solid phase and that the negative slope of for is proportional to its surface area .a sphere is non - degenerate because it is the microstructure that realizes the lowest possible surface area for a given volume fraction : the two - point correlation function of any microstructure other than a single sphere would have a larger slope at the origin , which would result in a positive energy according to eq .( [ eq : definition_e ] ) . 
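A quick numerical check of this surface-area argument is possible directly on the grid, using the smallest on-grid distance r = 1 as a stand-in for the slope at the origin. For a disk and for an uncorrelated random medium at (approximately) the same volume fraction, the drop from S2(0) to S2(1) is far smaller for the disk, reflecting its minimal interface; the system size and the Bernoulli comparison medium are arbitrary choices made here.

```python
import numpy as np

rng = np.random.default_rng(5)

def s2_at_unit_distance(img):
    """Probability that two pixels one lattice spacing apart are both black (periodic grid)."""
    return 0.5 * (np.mean(img & np.roll(img, 1, axis=0)) + np.mean(img & np.roll(img, 1, axis=1)))

L, R = 256, 40
y, x = np.ogrid[:L, :L]
disk = (x - L // 2)**2 + (y - L // 2)**2 <= R**2
phi = disk.mean()
random_medium = rng.random((L, L)) < phi          # Bernoulli field at the same volume fraction

for name, img in (("disk", disk), ("random", random_medium)):
    s2_1 = s2_at_unit_distance(img)
    print(f"{name:7s}  S2(0) = {img.mean():.4f}   S2(1) = {s2_1:.4f}   drop = {img.mean() - s2_1:.4f}")
```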
this observation can be expressed in a way that generalizes to discrete microstructures : for a given number of black pixels , a single sphere is non - degenerate because it is the microstructure that realizes the largest value of , where is a very small distance .similarly , any configuration with other than the disk of fig .[ fig : dos_13points ] has a smaller value of , where it is to be recalled that is the number of pairs of points with distance .the same applies to configuration of fig .[ fig : degenerate ] , which is not a disk : that particular microstructure is non - degenerate because any other configuration with has a smaller value of .the origin of the degeneracy of hard - disk systems is touched on in sec .vi . ) and reconstructed ( ) correlation functions are indistinguishable on the scale of the figure .the size of the grid is pixels under periodic boundary conditions , and .,width=264 ] the analysis of non - degeneracy in terms of extremal values of leads to non - intuitive results . when microstructures are defined on a grid , distances and orientationsare not independent : for instance , a pair of points at a distance from one another is necessarily oriented at 45 with respect to both axes .a very anisotropic microstructure such as the crystal on the top of fig .[ fig : rec_crystal ] minimizes for a set of distances corresponding to orientations orthogonal to the stripes .the figure clearly shows that vanishes for a set of well - defined distances .it should therefore not be surprising that such a highly anisotropic microstructure is non - degenerate .the non - degeneracy of the crystal is confirmed by the fact that the reconstructed microstructure in fig .[ fig : rec_crystal ] is a translated and rotated copy of the target . the vertical discontinuity in the middle of the reconstruction results simply from the target not having the same periodicity as the box . when a large crystal in a periodic box is split into a collection of randomly oriented smaller crystallites ( fig .[ fig : rec_crystal ] middle and bottom rows ) , its anisotropy is reduced and there are no longer values of at which is extremal .accordingly the reconstruction becomes less accurate , which means that the microstructure becomes more degenerate .a more quantitative analysis of this issue is provided in sec . v.the complete configuration space of two - phase microstructures with pixels is the set of vertices of an -dimensional hypercube .this results from the properties of the indicator vector , , which can take only values and . moving along a given -dimensional direction ( along an edge of the hypercube ) is equivalent to interchanging a white ( black ) with a black ( white ) pixel . in the example of fig .[ fig : hypercube ] , any movement along the fourth dimension ( joining the outer and inner cubes ) corresponds to changing the upper - left pixel .of a two - phase microstructure is an -dimensional hypercube on which hamming distance can be defined .any move along a -dimensional direction corresponds to changing the color of a particular pixel . in the case of a microstructurethe configuration space is a tesseract , with the fourth dimension represented as the edges joining the outer and inner cubes ( corresponding to the upper - left pixel).,title="fig:",width=170 ] + in the situation relevant to reconstruction , not all the vertices of the hypercube are accessible because the number of black pixels is kept constant , i.e. 
which means that all realizable microstructures lie on the intersection of the hypercube with a hyperplane . once a target correlation function is specified , each vertex is assigned an energy through eq . ( [ eq : definition_e ] ) . what we refer to as the energy landscape is the set of values taken by the energy functional on the vertices of the -dimensional hyperplane . a reconstruction consists in exploring the energy landscape until a vertex is found with . the dos determined in section [ sec : mc ] is the number of vertices having a given energy . the problem we address in this section is that of the spatial variability of in configuration space . this analysis is motivated by the observation , in many fields of physics , that systems with large ground - state degeneracies generally have a rough energy landscape . if we can characterize the roughness of the energy landscape in terms of , we can estimate the ground - state degeneracy . in order to characterize the spatial variability of in configuration space , it is necessary to define a distance . a natural choice is the _ hamming _ distance , which counts the number of edges between any two vertices . the hamming distance within the hyperplane defined by eq . ( [ eq : hyperplane ] ) takes only even values . the distance ] enables us to write . the general solution is therefore
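the two ingredients of this analysis , the hamming distance between configurations and the average energy of configurations at a fixed hamming distance from a ground state , can be sketched as follows . the quadratic form assumed for the energy functional ( the summed squared misfit between a configuration 's correlation function and the target ) and the generation of configurations by random pixel swaps are illustrative assumptions ; ` s2_fn ` stands for any routine returning a radially averaged correlation function , such as the one sketched earlier .

```python
import numpy as np

def hamming_distance(config_a, config_b):
    """number of pixels on which two binary microstructures differ."""
    return int(np.count_nonzero(config_a != config_b))

def energy(config, s2_target, s2_fn):
    """squared misfit between the configuration's correlation function and
    the target (the precise form of eq. (definition_e) is assumed here)."""
    _, s2 = s2_fn(config)
    n = min(s2.size, s2_target.size)
    return float(np.sum((s2[:n] - s2_target[:n]) ** 2))

def mean_energy_at_distance(ground_state, s2_target, s2_fn, n_swaps,
                            n_samples=50, rng=None):
    """average energy of configurations at hamming distance 2*n_swaps from a
    ground state, generated by swapping n_swaps black/white pixel pairs."""
    rng = np.random.default_rng() if rng is None else rng
    energies = []
    for _ in range(n_samples):
        c = ground_state.copy().ravel()
        black = rng.choice(np.flatnonzero(c), n_swaps, replace=False)
        white = rng.choice(np.flatnonzero(~c), n_swaps, replace=False)
        c[black], c[white] = False, True
        energies.append(energy(c.reshape(ground_state.shape), s2_target, s2_fn))
    return float(np.mean(energies))
```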
a two - point correlation function provides a crucial yet an incomplete characterization of a microstructure because distinctly different microstructures may have the same correlation function . in an earlier letter [ gommes , jiao and torquato , phys . rev . lett . 108 , 080601 ( 2012 ) ] , we addressed the microstructural degeneracy question : what is the number of microstructures compatible with a specified correlation function ? we computed this degeneracy , i.e. , configurational entropy , in the framework of reconstruction methods , which enabled us to map the problem to the determination of ground - state degeneracies . here , we provide a more comprehensive presentation of the methodology and analyses , as well as additional results . since the configuration space of a reconstruction problem is a hypercube on which a hamming distance is defined , we can calculate analytically the energy profile of any reconstruction problem , corresponding to the average energy of all microstructures at a given hamming distance from a ground state . the steepness of the energy profile is a measure of the roughness of the energy landscape associated with the reconstruction problem , which can be used as a proxy for the ground - state degeneracy . the relationship between this roughness metric and the ground - state degeneracy is calibrated using a monte carlo algorithm for determining the ground - state degeneracy of a variety of microstructures , including realizations of hard disks and poisson point processes at various densities as well as those with known degeneracies ( e.g. , single disks of various sizes and a particular crystalline microstructure ) . we show that our results can be expressed in terms of the _ information content _ of the two - point correlation functions . from this perspective , the _ a priori _ condition for a reconstruction to be accurate is that the information content , expressed in bits , should be comparable to the number of pixels in the unknown microstructure . we provide a formula to calculate the information content of any two - point correlation function , which makes our results broadly applicable to any field in which correlation functions are employed .
over the last decade , numerical simulations have developed into a cornerstone of astrophysical research . from interactions of vast clusters of galaxies to the formation of proto - planetary discs , simulations allow us to evolve systems through time and view them at every angle .they are an irreplaceable test - bed of our physical understanding of the universe .a number of hydrodynamics codes have been developed that are widely used in this field and still more are being developed .fundamentally , they all do the same job ; they solve the equations of motion to calculate the evolution of matter through time .whether the matter represents a nebula for the birth of a star or a network of galaxy clusters , the basic technique remains the same .however , the algorithms used to solve these equations vary from code to code and this results in differences in the resulting data .understanding the origin of these variations is vital to the understanding of the results themselves ; is an observed anomaly an interesting piece of new physics or a numerical effect ? as observational data takes us deeper into the universe , it becomes more important to pin down the origin of these numerical artifacts .additionally , it is difficult to compare results from simulations run with different codes . with observations , papers clearly state the properties of the instrument such as the diameter of the mirror and the wavelengths it is most sensitive to . while a brief description of the code is always included in theoretical papers , there exists no obvious conversion to other numerical techniques and therefore the results are more difficult for the reader to interpret .the problem of code comparison is not new and it is a topic that has recently created a great deal of interest .the reason for its current importance is a positive one ; improved numerical techniques and increased computer power have resulted in simulations reaching greater resolutions than could have been imagined even a few years ago .however , this high refinement comes at a price ; as we start to pick out the detail of these complex fluid flows , the physics we need to consider gets dramatically more complicated .this brings us to the main question code comparison projects are trying to answer ; can we use the same tools for this new regime of problems ?a number of papers have come out that tackle this .one of the most famous is the santa barbara comparison project which compared many of the progenitors of today s codes by running a model of a galaxy cluster forming .taking a different tack compared the performance of a dozen different implementations of a single approach ( in this case smoothed particle hydrodynamics , sph ) on standard astrophysical problems that included the sod shock , examining the range of outcomes that were available from a single technique .they concluded that one of the weaknesses of sph was the weak theoretical grounding which allows several equally viable formulations to be derived .more recent work includes , who focus specifically on the formation of fluid instabilities , comparing the formation of kelvin - helmholtz and rayleigh - taylor instabilities in six of the most utilised codes .additionally , has completed a direct comparison between two particular codes ( _ enzo _ and _ gadget2 _ , see below ) looking at the formation of galaxies in a cosmological context .all these projects give detailed insights into the differences between the codes , but are unable to provide a quantitative measure of how well a 
code performs in a particular aspect .this is especially true of the cosmological - based tests of where the problem is not sufficiently well - posed for convergence onto a single answer . sets out a simpler problem and compares it to analytical predictions , but the system is still sufficiently complex not to have an exact solution .additionally , no previous comparison has attempted to quantitatively compare different codes to one another ; asking whether it is possible to obtain identical results and with what conditions . without this crucial piece of information , it is impossible to fully assess a piece of work performed by an unfamiliar code or to judge which code might be the most suited to a given problem type . this has resulted in somewhat general comments being made about the differences between numerical techniques which has led to many myths about a code s ability becoming accepted dogma .the set of tests we present in this paper are designed to tackle these difficulties with the intention that they might become part of an established test programme all hydrodynamical codes should attempt .we present four problems that specifically address different aspects of the numerical code all of which have expected ` correct ' answers to compare to .the first two tests , the sod shock tube and sedov blast , are both strong shock tests with analytical solutions .the third and forth tests concern the stability of a galaxy cluster and are primarily tests of the code s gravitational solver .for all four tests we directly compare the codes against the analytic solution and present an estimate of the main sources of any systematic error .the remainder of this paper is organised as follows : in section 2 we give a short summary of the main features of each of the four codes we have employed . in section 3we deal with the sod shock and sedov blast tests , setting out the initial and final states and comparing each code against them .we repeat this exercise for a static and translating king sphere in section 4 .finally we discuss our results and summarise our conclusions in section 5 .the two major techniques for modelling gases in astrophysics are smoothed particle hydrodynamics ( sph ) and adaptive mesh refinement ( amr ) . in the first of these ,the gas is treated as a series of particles whose motion is dictated by lagrangian dynamics . in amr, the gas is modelled by a series of hierarchical meshes and the flow of material between cells is calculated to determine its evolution .there are a variety of codes which utilise both techniques and four of the major ones will be used to run the tests presented in the this paper , two of which use sph ( _ hydra _ and _ gadget2 _ ) and two of which use amr ( _ enzo _ and _ flash _ ) . _enzo _ is a massively parallel , eulerian adaptive mesh refinement code , capable of both hydro and n - body calculations .it has two hydro - algorithms which can be selected by the user ; the piecewise parabolic method ( ppm ) and the zeus astrophysical code . the ppm solver uses godunov s method but with a higher - order spatial interpolation , making it third - order accurate .it is particularly good at shock capturing and outflows .the zeus method in _ enzo _ is a three - dimensional implementation of the zeus astrophysical code developed by .it is a simple , fast algorithm that allows large problems to be run at high resolution . 
rather than godunov s method, zeus uses an artificial viscosity term to model shocks , a technique which inevitably causes some dissipation of the shock front .we compare both these hydro - schemes in these tests . _gadget2 _ is a massively parallel , lagrangian , cosmological code that is publicly available from the author s website .it is an n - body / sph code that calculates gravitational forces by means of the tree method and is also able to optionally employ a tree - pm scheme to calculate the long range component of the gravitational interactions . in order to follow the hydrodynamic behaviour of a collisional medium ,the code uses the entropy - conserving formulation of sph described in : the main difference of this approach with respect to the standard formulation of sph resides in the choice of describing the thermodynamic state of a fluid element in terms of its specific entropy rather than its specific thermal energy .this leads to a tight conservation of both energy and entropy in simulating dissipation - free systems . additionally , _gadget2 _ employs a slightly modified parametrisation of the artificial viscosity ( by introducing the so called `` signal velocity '' as in ) .the user is allowed to set the strength of this artificial viscosity for the specific problem being considered via an input parameter , .the time stepping scheme adopted by the code is a leap - frog integrator guaranteed to be symplectic if a constant timestep for all particles is employed . in this workwe have exploited the possibility of using fully adaptive individual timesteps for all the particles in the simulation , this being a standard practice ._ hydra _ is an adaptive particle - particle , particle - mesh code combined with smoothed particle hydrodynamics .it has the significant disadvantage that even though massively parallel versions exist , the publically available version is not a parallel implementation and so this code can not , as released , be used for very large simulations .akin to _ gadget2 _ , _ hydra _ uses an entropy conserving implementation of sph , but unlike _ gadget2 _ , _ hydra _ does not have fully adaptive individual timesteps .although the timestep adapts automatically from one step to the next all the particles move in lockstep . 
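as a small illustration of the time integrator mentioned above , the kick - drift - kick form of the leapfrog scheme is sketched below for a constant timestep , in which case it is symplectic . this is a generic sketch , not the adaptive individual - timestep implementation used by _ gadget2 _ ; ` accel_fn ` stands for whatever gravitational and hydrodynamical accelerations a given code evaluates .

```python
import numpy as np

def leapfrog_kdk(pos, vel, accel_fn, dt, n_steps):
    """generic kick-drift-kick leapfrog; symplectic for a constant timestep.
    accel_fn(pos) returns the acceleration of every particle."""
    acc = accel_fn(pos)
    for _ in range(n_steps):
        vel = vel + 0.5 * dt * acc      # half kick
        pos = pos + dt * vel            # drift
        acc = accel_fn(pos)
        vel = vel + 0.5 * dt * acc      # half kick
    return pos, vel
```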
_ flash _ is a publicly available massively parallel eulerian amr code developed by the alliances center for astrophysical thermonuclear flashes .originally intended for the study of x - ray bursts and supernovae , it has since been adapted for many astrophysical conditions and now includes modules for relativistic hydrodynamics , thermal conduction , radiative cooling , magnetohydrodynamics , thermonuclear burning , self - gravity and particle dynamics via a particle - mesh approach ._ flash _ uses the oct - tree refinement scheme of the paramesh package , with each mesh block containing the same number of internal zones .neighbouring blocks may only differ by one level of refinement with each level of refinement changing the resolution by a factor of two .the hydrodynamics are based on the prometheus code .the input states for the riemann solver are obtained using a directionally split ppm solver and a variable time step leapfrog integrator with second order strang time splitting is adopted .this work uses a modified hybrid fftw based multigrid solver to solve poisson s equation and determine the gravitational potential at each timestep .this results in a vast reduction in time spent calculating the self - gravity of the simulation relative to a conventional multigrid solver ._ flash s _ refinement and de - refinement criteria can incorporate the adapted error estimator .this calculates the modified second derivative of the desired variable , normalised by the average of its gradient over one cell .one of the greatest differences between simulations performed now versus those undertaken five years ago is the increasing importance of modelling strong shocks accurately .while it has long been known that the universe is a violent place , with events such as supernovae , galaxy mergers and agn generating blasts which rip through the intergalactic medium , simulations did not have the resolution to see such phenomena in detail , so these sharp discontinuities were largely ignored .now , as we struggle to understand the effects of feedback in galaxy formation , multiphase media are essential physics . in order to attack such problems codes must be able to capture shocks with some proficiency .these two problems , the sod shock test and the sedov blast test , explicitly test the resolution of shock jumps and allow comparison with exact analytical solutions . plane , while the right - hand image shows it oriented along the [ 1,1,0 ] plane .in actuality our second test is oriented in the [ 1,1,1 ] plane i.e. oblique to all the axes . ]the shock tube problem has been used extensively to test the ability of hydrodynamics codes to resolve a sharp shock interface .the test set - up is simple , consisting of two fluids of different densities and pressures separated by a membrane that is then removed .the resulting solution has the advantage of showing all three types of fluid discontinuities ; a shock wave moving from the high density fluid to the low density one , a rarefaction ( sound ) wave moving in the opposite direction and a contact discontinuity which marks the current location of the interface between the two fluids . 
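a minimal sketch of how such a shock - tube initial condition can be laid down on a three - dimensional grid is given below , for both the grid - aligned and the oblique membrane orientations considered in the following paragraphs . the left and right ( density , pressure ) states are the classic sod ( 1978 ) values and the box is the unit cube ; these numbers are placeholders for illustration , not necessarily the values adopted in the comparison itself .

```python
import numpy as np

def sod_initial_state(n, normal=(1.0, 0.0, 0.0), gamma=1.4,
                      left=(1.0, 1.0), right=(0.125, 0.1)):
    """lay down a 3-d shock-tube initial condition on an n**3 unit box.
    left and right are (density, pressure) on either side of a planar
    membrane through the box centre with the given normal."""
    x = (np.arange(n) + 0.5) / n - 0.5
    xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
    nvec = np.asarray(normal, dtype=float)
    nvec /= np.linalg.norm(nvec)
    side = (xx * nvec[0] + yy * nvec[1] + zz * nvec[2]) < 0.0
    rho = np.where(side, left[0], right[0])
    pres = np.where(side, left[1], right[1])
    vel = np.zeros((3, n, n, n))                 # fluid initially at rest
    eint = pres / ((gamma - 1.0) * rho)          # specific internal energy
    return rho, vel, eint

# grid-aligned and oblique versions of the same test
rho_x, vel_x, e_x = sod_initial_state(64, normal=(1.0, 0.0, 0.0))
rho_d, vel_d, e_d = sod_initial_state(64, normal=(1.0, 1.0, 1.0))
```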
for this testthe initial conditions are traditionally chosen such that the pressure does not jump across the contact discontinuity .we extend the traditional one - dimensional shock tube problem to consider two three - dimensional set - ups ; the first of these has the fluid membrane at 90 to the x - axis of the box ( [ 1,0,0 ] plane ) , causing the shock to propagate parallel to this axis . in the second test, the membrane is lined up at 45 to each of the and axes ( [ 1,1,1 ] plane ) .this change in orientation of the shock is designed to highlight any directional dependencies inherent in the code , as illustrated in figure [ fig : shock_setup ] .all analysis was performed perpendicular to the original shock plane . for our particularset - up we chose the initial density and pressure jump either side of the membrane to be from to with the fluid initially at rest .the polytropic index was .periodic boundary conditions were used and the results were analysed at .figures [ fig : sod_density ] and [ fig : sod_all ] show the results from all four codes running this test . in the results presented in these figures , both _enzo _ and _ flash _ used an initial ( minimum refinement ) grid with two levels of higher refinement each of which decreased the cell size by a factor of 2 .the smallest cell size in this case was therefore of the unit box ._ enzo _ refined anywhere the gradient of the derived quantities exceeded a critical value whereas _ flash _ placed refinements according to the error estimator described in section [ sec : flash ] which places subgrids based on the second derivative of the derived quantities . for the sph codes , both _hydra _ and _ gadget2 _ were run with 1 million particles formed from two glasses containing 1.6 million and 400,000 particles .the solid black line in all cases is the analytical solution which requires a shock at and a contact discontinuity in the density at at .what is clear from figures [ fig : sod_density ] and [ fig : sod_all ] is that all of the codes pass the zeroth level test and successfully reproduce the shock jump conditions , although both the sph codes suffer from visible ringing and broadening around any discontinuities ._ enzo ( zeus ) _ does not produce the shock jump condition as accurately as _ enzo ( ppm ) _ and _ flash _ , as seen in the plots of energy and entropy in figure [ fig : sod_all ] , where its post - shock values are around 2% lower . in the oblique case ,the quadratic viscosity term , , in _ enzo ( zeus ) _ was increased from its default value of 2.0 to 10.0 .the effect of this value is discussed more fully in relation to the sedov blast test in section [ sec : code_spec ] . for the planar case , was kept at 2.0 .pleasingly none of the other codes appear to have any visible directional dependence , performing equally well in both the grid aligned and oblique cases we tried .all the codes could equally well resolve the location and smooth rise of the rarefaction wave but both the sph codes struggle with the contact discontinuity , with a large overshoot not seen by either of the mesh based codes .this is partly due to the initial conditions as for the sph codes the sudden appearance of a density jump introduces a local source of entropy . in this paperwe are contrasting the results from the different approaches rather than studying the sod shock problem itself in detail . as it is a standard test case higher resolution results can be found in the individual code s method papers ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? 
?closer inspection of the data reveals differences in each code s capacity to handle strong shocks . in the bottom row of figure[ fig : sod_density ] , we show a close - up of the density over the shock - front . capturing shocksaccurately is an area that sph codes traditionally struggle with more than their eulerian counterparts due to their inherent nature of smoothing between particles .indeed , we see in this figure both _ gadget2 _ and _ hydra _ have a smeared out the interface compared to _enzo _ and _ flash s _ steep drop in density .small differences between the amr codes are also visible here ._ enzo ( ppm ) _ spreads the shock front over three cells , whereas _ enzo ( zeus s ) _ use of the artificial viscosity term extends this to five . _ flash s_ ppm scheme gives very similar results to _ enzo ( ppm ) _ , also spreading the shock front over three cells .figure [ fig : sod_all ] shows the pressure , internal energy , velocity and computational entropy over the region of interest for the planar [ 1,0,0 ] set - up ( left column ) and the oblique [ 1,1,1 ] set - up ( right column ) .both _ hydra _ and _ gadget2 _ show signs of post - shock ringing in the velocity plot .the two lagrangian codes adopt different implementations of artificial viscosity ; with a frequently used choice of the viscosity parameter ( artbulkvisc = 1 ) the ringing features in _ gadget2 _s profiles appears to be more pronounced than in hydra s ( not shown in the plot ) .in order for _gadget2 _ to get closer to hydra s performance , a choice of a significantly higher viscosity parameter has been necessary ( namely ) .the results produced under such a choice are shown in figure [ fig : sod_all ] .the sph codes also exhibit a large spike in both internal energy and entropy at the location of the contact discontinuity .this is driven by the initial conditions , where two independent particle distributions suddenly appear immediately adjacent to one another .we saw in the previous section that most of the codes model shock development and sound wave propagation with reasonable success and that they show no directional preference to the orientation of the shock interface .however , differences were apparent between each of the codes , most obviously between the sph and amr techniques ( unsurprising , since their numerical algorithms fundamentally differ ) . in this section, we quantitatively compare results from each code and attempt to get as close a match between their results as possible .for simplicity , we confine the amr codes to using static meshes for this comparison .figure [ fig : sod_comparison ] shows a graphical comparison of the density projection between a _run of 1 million particles and _ enzo ( ppm ) _ for different grid sizes .it is clear that a grid size of produces significantly poorer results than the _ hydra _ data whereas a grid size of produces significantly better results particularly in the low density region .although the situation is confused by the lack of points available with _ enzo _ , a grid size of can produce no more than 20 distinct values across the length of the volume being modelled , it is clear that this is too few to recover this model .however , even with cells in the box the major features are largely recovered . to make further progress we require a more quantitative way of comparing the results from the codes . 
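one simple way to make such a comparison quantitative , anticipating the residue measure described in the next paragraph , is to interpolate each code 's output onto the points at which the analytic solution has been evaluated and to average the residuals . a sketch is given below ; the use of a mean absolute deviation ( rather than , say , a squared one ) and the optional comparison window are illustrative assumptions .

```python
import numpy as np
from scipy.interpolate import CubicSpline

def mean_residue(x_sim, q_sim, x_ref, q_ref, lo=None, hi=None):
    """average absolute deviation of a simulated profile from the analytic
    one, after cubic-spline interpolation onto the reference points; lo and
    hi optionally restrict the comparison to a window such as the shock front."""
    if lo is not None and hi is not None:
        mask = (x_ref >= lo) & (x_ref <= hi)
        x_ref, q_ref = x_ref[mask], q_ref[mask]
    order = np.argsort(x_sim)
    spline = CubicSpline(x_sim[order], q_sim[order])
    return float(np.abs(spline(x_ref) - q_ref).sum() / x_ref.size)
```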
to achieve this, we employ a cubic spline to interpolate the data from all runs to the same 178 points at which we have calculated the analytical values .the residue between each new curve and the analytical solution was then summed and divided by the number of points .we considered the residue both across the whole region of interest from [ -0.15 , 0.15 ] and just across the shock - front from [ -0.13 , -0.06 ] ..residue of from the analytical solution of the sod shock density in the planar [ 1,0,0 ] set - up for different static grid sizes and sph resolutions for _ enzo , hydra _ and _ gadget2_. [ cols="^,^,^ " , ] [ table : times ] while this paper attempts to cover the major features of astrophysical simulations , it does comprise only four tests .further examples would extend this paper beyond its scope ( and readability ) , but the differences between codes can not be fully cataloged without further testing .this paper then , is designed as a starting point for a suite of tests to be developed from which codes can be quantitatively compared for the jobs they are intended for . to assist groups wishing to run these tests on their own code and to compare new updates, we are making these results and initial conditions available on the web at http://www.astro.ufl.edu/codecomparison .the authors would like to thank alexei kritsuk , volker springel & richard bower for helpful suggestions and advice and lydia heck for computational support .ejt acknowledges support from a theoretical astrophysics postdoctoral fellowship from dept . of astronomy / clas , university of florida andboth ejt and glb acknowledge support from nsf grants ast-05 - 07161 , ast-05 - 47823 , and ast-06 - 06959 .dm acknowledges support from the eu magpop marie curie research and training network . many simulations with _ enzo _were performed at the national center for supercomputing applications and at the university of florida high - performance computing center who also provided excellent computational support .the _ flash _software used in this work was in part developed by the doe - supported asc / alliance center for astrophysical thermonuclear flashes at the university of chicago .hydra _ and _ gadget2 _ simulations were carried out on the nottingham hpc facility .flash _ simulations were carried out on the virgo consortium computing facility in durham .couchman h. m. p. , thomas p. a. , pearce f. r. , 1995 , apj , 452 , 797 pearce f. r. , couchman h. m. p. , 1997 ,newa , 2 , 411 agertz o. , et al . , 2007 , mnras , 380 , 963 barnes j. , hut p. , 1986, natur , 324 , 446 bryan , g. l. , & norman , m. l. 1997 , arxiv astrophysics e - prints , arxiv : astro - ph/9710187 feng , l .- l . , shu , c .- w . , &zhang , m. 2004 , apj , 612 , 1 fryxell b. , et al . , 2000 , apjs , 131 , 273 fryxell b. , mller e. , arnett d. , 1989 , nuas.conf , 100 frenk c. s. , et al . , 1999 , apj , 525 , 554 king i. r. , 1966 , aj , 71 , 64 landau , l. d. , & lifshitz , e. m. 1959 , course of theoretical physics , oxford : pergamon press , 1959 , lhner , r. , 1987 , comp .61 , 323 mitchell , n. l. , mccarthy , i. j. , bower , r. g. , theuns , t. , & crain , r. a. , in prep , mnras monaghan j. j. , 1992 , ara&a , 30 , 543 monaghan j. j. , 2001 , jkas , 34 , 203 oshea b. w. , nagamine k. , springel v. , hernquist l. , norman m. l. , 2005 , apjs , 160 , 1 oshea , b. w. , bryan , g. , bordner , j. , norman , m. l. , abel , t. , harkness , r. , & kritsuk , a. 2004 , arxiv astrophysics e - prints , arxiv : astro - ph/0403044 padmanabhan , t. 
2002 , theoretical astrophysics , by t. padmanabhan , pp . 638 .isbn 0521562422 .cambridge , uk : cambridge university press , october 2002 . , robertson b. , kravtsov a. , 2007 , arxiv , 710 , arxiv:0710.2102 ryu , d. , ostriker , j. p. , kang , h. , & cen , r. 1993 , apj , 414 , 1 shapiro , p. r. , martel , h. , villumsen , j. v. , & owen , j. m. 1996 , apjs , 103 , 269 sedov , l. i. 1959 , similarity and dimensional methods in mechanics , new york : academic press , 1959 , sod , g. a. 1978 , journal of computational physics , 27 , 1 springel v. , 2005 , mnras , 364 , 1105 springel v. , hernquist l. , 2003 , mnras , 339 , 289 springel v. , hernquist l. , 2002 , mnras , 333 , 649 stone , j. m. , & norman , m. l. 1992 , apjs , 80 , 753 strang , g. , 1968 , siam j. numer ., 5 , 506 tasker , e. j. , & bryan , g. l. 2008 , apj , 673 , 810 tasker e. j. , bryan g. l. , 2006 , apj , 641 , 878 thacker r. j. , tittley e. r. , pearce f. r. , couchman h. m. p. , thomas p. a. , 2000 , mnras , 319 , 619 wada k. , norman c. a. , 2007 , apj , 660 , 276 wadsley , j. w. , veeravalli , g. , & couchman , h. m. p. 2008 , mnras , 387 , 427 woodward , p. r. , & colella , p. 1984, journal of computational physics , 54 , 174
we test four commonly used astrophysical simulation codes ; enzo , flash , gadget and hydra , using a suite of numerical problems with analytic initial and final states . situations similar to the conditions of these tests , a sod shock , a sedov blast and both a static and translating king sphere occur commonly in astrophysics , where the accurate treatment of shocks , sound waves , supernovae explosions and collapsed haloes is a key condition for obtaining reliable validated simulations . we demonstrate that comparable results can be obtained for lagrangian and eulerian codes by requiring that approximately one particle exists per grid cell in the region of interest . we conclude that adaptive eulerian codes , with their ability to place refinements in regions of rapidly changing density , are well suited to problems where physical processes are related to such changes . lagrangian methods , on the other hand , are well suited to problems where large density contrasts occur and the physics is related to the local density itself rather than the local density gradient . hydrodynamics methods : numerical cosmology : theory
phase transitions are ubiquitous in nature .they are a dramatic change in a system s properties triggered by a minuscule shift in its environment .phase transitions are often associated with spontaneous symmetry breaking , where the transition is between an unordered phase and an ordered , less symmetric phase . in simple models of phase transitions an order parameter is defined , which is zero in the unordered phase and non - zero in the ordered phase .this is the basis for a mean - field approach to phase transitions , where an expansion of the free energy in the order parameter is performed .many powerful methods have been developed over the years to study phase transitions , especially in the study of universality in so - called second order ( or critical ) transitions , such as the landau - ginzburg theory and wilson s renormalization group approach .for those transitions the order parameter is continuous , thermodynamic quantities obey scaling laws in the vicinity of the critical point , and there is a diverging correlation length . for first order transitions , on the other hand, there is a jump in the value of the order parameter , there is no diverging correlation length and thus no scaling of the thermodynamic functions near the transition point . during the transitionthere can be a mixed phase with a stable interface between the two phases .the study of first order transitions is very important for complex systems , especially social or ecological complex systems , because the sudden jump between two phases ( which is discontinuous ) can be quite dramatic .while the ginzburg - landau - wilson approach has been tremendously successful in explaining universality in second - order phase transitions , it requires the definition of an order parameter .different approaches , which do not require an order parameter , can be useful for cases where an order parameter is difficult to identify , or does not exist ( e.g. ) .in one such approach we study the probabilistic description of the system while changing the parameters to bring the system across a transition .the statistical properties of the system in the different phases are very different .therefore , at the phase transition the shape of the probability distribution function will change drastically .this is captured by the fisher information matrix ( fim ) through the cramr - rao bound . a compelling differential - geometric framework to study the changes that the probability distribution undergoes is information geometry ( ig ) . in ig the family of probability distributions that are parametrized by a set of continuous parameters ( designated here as the vector )is seen as a differential manifold .the parameters form a coordinate system on the manifold , and distances are measured by the fisher - rao metric : which is a positive semi - definite , symmetric matrix that changes covariantly under reparametrizations of the probability distribution . here is a derivative with respect to one of the parameters , indexed by .the ig of many models in statistical mechanics has been studied , e.g. 
, in .of particular interest in these studies is the role of the scalar ( riemannian ) curvature .it was shown to diverge at critical transition points and on the spinodal curve , thus effectively preventing geodesics from crossing into the unphysical area of phase space .see also for a general renormalization group analysis of ig near criticality .a connection between phase transitions in ig and the ginzburg - landau - wilson approach can be made when an order parameter is the derivative of a thermodynamic potential with respect to some thermodynamic variable .then there exists a collective variable such that and the fisher information matrix can be shown to obey : where is the inverse temperature , being boltzmann s constant . at second order phase transitions in the thermodynamic limitthis derivative diverges and therefore a corresponding entry of the fim also diverges . when the system is finite , the fisher information does not diverge but rather attains a maximum . the maximum of the fisher information has been used to accurately find the phase transition point in finite systems and as a definition of criticality in living systems .we have two main goals for the current work : first to test a conjecture set forth by prokopenko __ in that a divergence ( or maximization ) of the entries of the fim can detect phase transitions even in the absence of an order parameter .second , to measure the fisher information matrix without resorting to the underlying dynamics of the system and without assuming a specific parametric model for equation .this addresses the problem that often the microscopic dynamics of complex systems are unknown , and an analytic description of the probability density function is missing . to accomplish these two goals we chose to study the specific example of the two dimensional gray - scott ( gs ) reaction diffusion model .we chose the gs model for its rich variety of spatial and spatio - temporal patterns and we consider the transitions between the different patterns as critical transitions . among the different types of patterns one can find self - replicating spots , spatio - temporal chaos , and labyrinthine patterns .these were first systematically classified by pearson .our goal is to use the fisher information matrix to construct a phase map for the gray - scott model , where we expect areas with high values of the fisher information to demarcate the different patterns . as a probabilistic description for the system we chose the blob - size distribution , which we take to be a function of the control parameters of the model and and which we estimate non - parametrically by using image processing on the resulting spatial concentration from our simulations .the paper is organized as follows : in section [ sec : fisher_criticality ] we discuss the relationship between fisher information and criticality , which forms the motivation for our approach .we revisit the arguments in and extend them to our case . in section [ sec : gs ]we introduce the gray - scott model and discuss some of its properties . in section [ sec : results ] we present the results of the computations and in section [ sec : methods ] we explain the methods we used to compute the fisher information in this settings .last we discuss our results in section [ sec : conclusions ] .in this section we summarize the derivation performed in leading to eq . 
that relates order parameters and the entries of the fisher information matrix .this derivation leads to the conjecture that it is enough to consider the fisher information matrix entries rather than order parameters and is presented here because it is important for our exposition .the gibbs ensemble can be generically written in the following way : \ ] ] with set by normalization and with summation convention over repeated indices , which we use throughout the paper .for this distribution , the fisher information eq .is : where is the gibbs free energy . performing the derivatives we obtain eq . . for many systems ,the order parameter can be defined by introducing an external field to the free energy that couples to the order parameter .the canonical example being the magnetization that couples to the external magnetic field , so that . the external fieldbeing one of the external parameters .we then have : setting for a particular we obtain : the meaning of eq .is that if an order parameter is a derivative of the free energy , then there exists a collective variable whose average is proportional to the order parameter .it is important to note that many models exist whose order parameter is indeed the average of a collective variable . combining equations and we obtain eq . .this links the value of the entries of the fisher information with the derivatives of the order parameters of the system .since at phase transitions the order parameter or its derivatives becomes non - analytic we can expect the fisher information matrix to have diverging entries at the phase transition point .for example , at a ferromagnetic transition point , with , the diagonal elements of the fisher information matrix are given by : since and are proportional to the magnetic susceptibility and heat capacity respectively , we expect both entries to diverge at the point of the magnetic second order phase transition . more generally , it is easy to show that this is also called the generalized susceptibility in statistical mechanics .this derivation led prokopenko to propose that the maximization of the appropriate fisher information matrix can detect phase transitions , without explicitly defining an order parameter .the relation suggests the introduction of an order parameter derived from the fisher information by integrating it from one phase to the next , in the following way : we absorbed the inverse temperature in the definition of , and the integration path starts at which is in one phase and ends at which is in the other phase . 
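for the gibbs family considered above the entries of the fisher information matrix are generalized susceptibilities , i.e. , up to a factor , covariances of the collective variables . the sketch below estimates them directly from equilibrium samples ; it assumes the exponential - family form of the distribution and is meant only as an illustration of this relation .

```python
import numpy as np

def gibbs_fisher_matrix(samples, beta=1.0):
    """fisher information matrix of a gibbs family p ~ exp(-beta * theta.x).
    samples has shape (n_samples, n_params); each row holds the collective
    variables x_mu measured in one equilibrium configuration, so that
    g_{mu,nu} = beta**2 * cov(x_mu, x_nu), i.e. a generalized susceptibility."""
    return beta ** 2 * np.cov(np.asarray(samples), rowvar=False)
```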
in generalthis will depend on the integration path and the end - points .while the derivation above assumes the form for the probabilistic description of the system , the idea of fisher information maximization at phase transition points can be generalized for other probabilistic descriptions based on the cramr - rao bound that states that the variance an unbiased estimator is bounded from below by the inverse of the fisher information .we make the following heuristic argument : when a system is said to undergo a phase transition , it means that there is an observable change in some aspect of the system ( often having to do with the symmetries of the system ) .this means that the statistical properties of the system in the two phases differ significantly .for example , if we sample the energy per spin of an ising spin system repeatedly in the high - temperature phase , we will obtain a broad distribution of energies .conversely , at the low - temperature phase the energy distribution is very narrow , since in the low - temperature phase the spins are aligned and the system is in the ground state .thus , if we look at the probability density function describing these observables change as a function of the control parameter , it undergoes a drastic change in its functional form .this , in turn , implies that we can estimate the value of the control parameter at the phase transition point accurately , because of the large change in the behavior of the density function . according to the cramr - rao inequality , the inverse of the value of the fisher information serves as a lower bound on the variance of the estimated parameter .if this parameter can be estimated accurately then this implies a high value of the fisher information .we therefore surmise that under very general circumstances the fisher information is maximized at phase transition points .the gray - scott model is a non - linear reaction - diffusion model of two chemical species and with the reactions is constantly supplied into the system and the inert product removed . we can simulate the reaction using the law of mass action , where we assume that the rate of each reaction is proportional to the concentration of the reactants at each point .the resulting non - linear coupled differential equations are : where , are the ( dimensionless ) concentrations of the two chemical species , is the laplacian with respect to , and are diffusion coefficients of and respectively , represents the rate of the feed of and the removal of , and from the system and is the rate of conversion of to . in practicethis is a model of chemical species in a gel reactor where the rate can be relatively easily modified , and is dependent on the temperature of the system . 
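a minimal explicit - euler sketch of the reaction - diffusion system is given below , using the standard form of the gray - scott equations , du/dt = d_u \nabla^2 u - u v^2 + f ( 1 - u ) and dv/dt = d_v \nabla^2 v + u v^2 - ( f + k ) v , with a five - point periodic laplacian and an initial condition of the kind described later in the text ( the red state plus a central square perturbation and weak gaussian noise ) . the parameter values , the noise amplitude and the timestep are illustrative choices , not those used in the paper .

```python
import numpy as np

def laplacian(a):
    """five-point laplacian with periodic boundaries on a unit-spaced grid."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gray_scott_step(u, v, du, dv, f, k, dt):
    """one explicit euler step of du/dt = du*lap(u) - u*v**2 + f*(1 - u),
    dv/dt = dv*lap(v) + u*v**2 - (f + k)*v."""
    uvv = u * v * v
    u_new = u + dt * (du * laplacian(u) - uvv + f * (1.0 - u))
    v_new = v + dt * (dv * laplacian(v) + uvv - (f + k) * v)
    return u_new, v_new

def run_gray_scott(n=256, du=0.16, dv=0.08, f=0.035, k=0.065,
                   dt=1.0, n_steps=10000, noise=0.02, rng=None):
    """red state plus a central square perturbation and weak gaussian noise,
    evolved until a pattern forms; all parameter values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    u, v = np.ones((n, n)), np.zeros((n, n))
    s = n // 10
    u[n//2 - s:n//2 + s, n//2 - s:n//2 + s] = 0.5
    v[n//2 - s:n//2 + s, n//2 - s:n//2 + s] = 0.25
    u += noise * rng.standard_normal((n, n))
    v += noise * rng.standard_normal((n, n))
    for _ in range(n_steps):
        u, v = gray_scott_step(u, v, du, dv, f, k, dt)
    return u, v
```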
and are more difficult to change and we will consider them constant , with . we start with the standard stability analysis for the homogeneous system ( i.e. without diffusion ) . the gray - scott model has a trivial homogeneous steady state solution , referred to as the red state , at [ u , v ] = [ 1 , 0 ] . the diffusion coefficients were held constant at such that . we performed a simulation at each point of parameter space , starting with identical initial conditions ( same seed ) and repeated for the same number of time steps ( depending on the simulation grid size ) . the simulation was started with an initial condition of the red state with a finite perturbation in the form of a square in the center of the simulation grid in the state and an additional gaussian noise with an amplitude of covering the entire simulation grid . this initial state was then evolved by numerically integrating eq . using an euler scheme until the final state was reached . we repeated the experiment with different simulation grid sizes , ranging between up to . the simulation times ranged from time steps ( for the smallest grid sizes ) to for the grid . this was chosen such that the self - replicating stable spots fill the entire simulation window . the simulations were performed on the lisa cluster run by surfsara . the python code to perform the simulation is based on the code found at . for each simulation we extracted the pdf by following the steps described in section [ sub : probabilistic_description ] . we used the python package ` simplecv ` for the binarization and blob detection of the images and ` scipy.stats.gaussian_kde ` for the computation of the pdf from the blob sizes . the computation of the fisher information from the pdf followed the description in , and the code for this computation is available online at . as mentioned in sec . [ sec : results ] , we also computed the shannon entropy for each pdf we obtained . this was done by simple integration of the pdf using eq . and the python function ` scipy.integrate.quad ` . in addition to the gaussian kde , we used the novel density estimation method deft , and tried two different ways to integrate eq . : once as it is written in eq . , and once by first performing the differentiation of the logarithm ( replacing \partial_\mu \ln p with \partial_\mu p / p ) .
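the post - processing chain just described can be sketched as follows . instead of ` simplecv ` , the stand - in below uses ` scipy.ndimage ` for the blob labelling ; the binarization threshold , the integration limits , and the central - difference estimate of the parameter derivative ( built from kernel density estimates obtained at neighbouring values of a rate parameter ) are illustrative assumptions and do not reproduce the estimator that the paper actually follows .

```python
import numpy as np
from scipy import ndimage
from scipy.stats import gaussian_kde
from scipy.integrate import quad

def blob_size_pdf(v, threshold=0.25):
    """kernel-density estimate of the blob-size distribution of one pattern;
    the binarization threshold is an illustrative choice."""
    labels, n_blobs = ndimage.label(v > threshold)
    sizes = np.bincount(labels.ravel())[1:]        # pixel count of every blob
    return gaussian_kde(sizes.astype(float))

def shannon_entropy(pdf, lo, hi):
    """h = -int p ln p dx over the support [lo, hi]."""
    def integrand(x):
        p = max(pdf(x)[0], 1e-300)
        return -p * np.log(p)
    return quad(integrand, lo, hi, limit=200)[0]

def fisher_entry(pdf_minus, pdf_0, pdf_plus, dtheta, lo, hi):
    """diagonal entry g = int (d_theta p)**2 / p dx, with d_theta p estimated
    by central differences of densities obtained at theta - dtheta, theta and
    theta + dtheta (e.g. neighbouring values of the feed rate)."""
    def integrand(x):
        p = max(pdf_0(x)[0], 1e-300)
        dp = (pdf_plus(x)[0] - pdf_minus(x)[0]) / (2.0 * dtheta)
        return dp * dp / p
    return quad(integrand, lo, hi, limit=200)[0]
```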
the fisher - rao metric from information geometry is related to phase transition phenomena in classical statistical mechanics . several studies propose to extend the use of information geometry to study more general phase transitions in complex systems . however , it is unclear whether the fisher - rao metric does indeed detect these more general transitions , especially in the absence of a statistical model . in this paper we study the transitions between patterns in the gray - scott reaction - diffusion model using fisher information . we describe the system by a probability density function that represents the size distribution of blobs in the patterns and compute its fisher information with respect to changing the two rate parameters of the underlying model . we estimate the distribution non - parametrically so that we do not assume any statistical model . the resulting fisher map can be interpreted as a phase - map of the different patterns . lines with high fisher information can be considered as boundaries between regions of parameter space where patterns with similar characteristics appear . these lines of high fisher information can be interpreted as phase transitions between complex patterns .
optimal control theory has become one of the most dominant and indispensable techniques for analyzing dynamical systems in which optimal decisions are sought at each moment . surely , the principal part in the establishment of the theory as an important and rich area of applied mathematics arises in the strong utilization of the subject area in a great breadth of applications and research areas such as engineering , computer science , astronautics , biological sciences , chemistry , agriculture , business , management , energy , path planning problems , and a host of many other areas ; cf . . the most popular analytical methods for solving optimal control problems such as the calculus of variations , pontryagin s principle , and bellman s principle , can generally solve only fairly simple problems .however , such methods are largely deficient to handle the increasing complexity of optimal control problems since the advent of digital computers , which led to a revolution in the development of numerical dynamic optimization methods over the past few decades . among the popular numerical methods for solving optimal control problems , the so - called `` direct orthogonal collocation methods '' and `` direct pseudospectral methods '' have become two of the most universal and well established numerical dynamic optimization methods due to many merits they offer over other competitive methods in the literature ; cf . . both classes of numerical dynamic optimization methods convert the continuous optimal control problem into a finite dimensional constrained optimization problem based on the elegant spectral and pseudospectral methods , which are known to furnish exponential / spectral convergence rates faster than any polynomial convergence rate when the problem exhibits sufficiently smooth solutions ; cf . .direct hp - pseudospectral methods were specifically designed to handle optimal control problems with discontinuous or nonsmooth states and controls ; cf .such methods generally recover the prominent exponential convergence rates of pseudospectral methods by dividing the solution domain into an increasing number of mesh intervals and increasing the degree of the polynomial interpolant within each mesh interval .in particular , a local -refinement is a suitable technique on regions where the solution is smooth , while a local -refinement is preferable on elements where the solution is discontinuous / nonsmooth . to avoid high computational costs , adaptive strategies strive to control the locations of mesh intervals , minimize their number , and answer the question of whether increasing the number of collocation points within each mesh interval is necessary or not to achieve a certain accuracy threshold . in the most general formulation of an hp finite element method ,the solution over each element is approximated by an arbitrary degree polynomial .the spectral element method uses instead a high - degree piecewise polynomial defined by an appropriate set of interpolation nodes or expansion modes . to achieve the highest interpolation accuracy ,the interior interpolation nodes are distributed at positions corresponding to the zeros of certain families of orthogonal polynomials ; cf . 
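as an illustration of such node sets , the sketch below computes the zeros of a gegenbauer polynomial with ` scipy.special.roots_gegenbauer ` , maps them affinely onto a mesh interval , and forms the barycentric weights used for stable lagrange interpolation at those nodes . the gegenbauer parameter and the weight normalization are generic choices ; this is not the specific shifted optimal barycentric gegenbauer quadrature developed later in the text .

```python
import numpy as np
from scipy.special import roots_gegenbauer

def shifted_gegenbauer_gauss(n, lam, t_left, t_right):
    """zeros of the degree-n gegenbauer polynomial c_n^lam (lam > 0), mapped
    affinely from [-1, 1] to the mesh interval [t_left, t_right], together
    with the gauss weights for the correspondingly shifted weight function."""
    x, w = roots_gegenbauer(n, lam)
    t = 0.5 * (t_right - t_left) * (x + 1.0) + t_left
    return t, 0.5 * (t_right - t_left) * w

def barycentric_weights(nodes):
    """weights for the stable barycentric form of lagrange interpolation."""
    nodes = np.asarray(nodes, dtype=float)
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / np.prod(diff, axis=1)
```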
.while direct hp - pseudospectral methods were thoroughly investigated in the past few years , comparable literature for direct adaptive spectral element methods for solving special classes of optimal control problems is rather very few , and to the best of our knowledge , it seems that such methods do not exist for solving more general nonlinear optimal control problems .we acknowledge though the existence of some posteriori error analyses of hp finite element approximations of special forms of convex optimal control problems ; cf .posteriori error estimates for the spectral element approximation of a linear quadratic optimal control problem in one dimension was recently presented by .however , all three papers lacked any adaptive strategies to efficiently implement their numerical schemes .perhaps , the earliest and sole adaptive spectral element method for solving a special class of optimal control problems described by a quadratic cost functional and linear advection - diffusion state equation was put forward by . in their presented work , an approximate saddle point of the lagrangian functionalis sought by iterating on the karush - kuhn - tucker optimality conditions to seek their satisfaction numerically using a galerkin spectral element method for the space discretization .the adaptive algorithm relies on a posteriori error estimate of the cost functional , from which the parameters of the spectral element discretization are selected .the main purpose of this paper is to derive high - order numerical solutions of nonlinear optimal control problems exhibiting smooth / nonsmooth solutions using a novel direct adaptive gegenbauer integral spectral element ( gise ) method . in particular, the proposed method converts the nonlinear optimal control problem into an integral multiple - phase optimal control problem .the multiple - phases are then connected using state continuity linkage conditions with easily incorporated control continuity linkage conditions when the control functions are assumed continuous .the numerical discretization is carried out using truncated shifted gegenbauer series expansions and a novel numerical quadrature defined on each mesh interval henceforth called the elemental shifted optimal barycentric gegenbauer quadrature ( kesobgq) based on the stable barycentric representation of lagrange interpolating polynomials .such a quadrature can produce excellent approximations while significantly reducing the number of operational costs required for the evaluation of the involved integrals .the proposed method is further invigorated by a novel adaptive strategy that uses a multicriterion for locating the mesh intervals where the state and control functions are smooth / nonsmooth based on information derived from the residual of the discrete dynamical system equations , and the magnitude of the last coefficients in the state and control truncated series .in fact , the idea of using the spectral coefficients of the state trajectories as a measure to verify the convergence of the computed solution was previously presented by . 
nonetheless , in this article, we shall exploit the spectral coefficients instead to check the smoothness of the approximate solutions on the interval of interest .the proposed method generally produces a small / medium - scale nonlinear programming problem that could be easily solved using the current powerful numerical optimization methods .the current paper casts further the light on the judicious choice of the shifted gegenbauer - gauss collocation points set to be utilized on each mesh interval during the discretization process of optimal control problems based on numerical simulations .the remaining part of the paper is organized as follows : in section [ sec : ps ] , we describe the optimal control problem statement under study . in section [ sec :tgsem ] , we present our novel gise method . a novel adaptive strategy is presented in section [ subsec : as1 ] . section [ subsec : err ] is devoted for the error analysis and convergence properties of the kesobgq . in section [ sec : ne ] , two test examples of nonlinear optimal control problems are included to demonstrate the efficiency and the accuracy of the proposed gise method followed by some concluding remarks illustrating the advantages of the proposed gise method in section [ conc ] .consider the nonlinear time - varying dynamical system where and are the state and control vector functions , respectively ; is the vector of first - order time derivatives of the states ; is the initial time , is the terminal time .the problem is to find the optimal control and the corresponding state trajectory satisfying eq . while minimizing the cost functional subject to the mixed state and control path constraints and the boundary conditions where is the terminal cost function , is the lagrangian function , is a nonlinear vector field , is a mixed inequality constraint vector on the state and control functions ; are constant specified vectors ; is a boundary constraint vector .here it is assumed that , and each system function are nonlinear continuously differentiable functions with respect to .it is also assumed that the nonlinear optimal control problem has a unique solution with possibly discontinuous / nonsmooth optimal control .we shall refer to the above optimal control problem in bolza form by problem 1 .using the affine transformation we could easily rewrite problem 1 as follows : [ pr:2 ] ,\\ & { \bm{c}_{\min } } \le \tilde { \bm{c}}(\tilde { \bm{x}}(\tau ) , \tilde { \bm{u}}(\tau ) , \tau ) \le { \bm{c}_{\max } } , \quad \tau \in [ - 1,1],\\ & \psi \left ( { \tilde { \bm{x } } ( - 1),{t_0},\tilde { \bm{x}}(1),{t_f } } \right ) = \bm{0},\end{aligned}\ ] ] where .we refer to the optimal control problem described by eqs . by problem 2 .one of the primary advantages of spectral element methods is the ability to resolve complex geometries and problems exhibiting discontinuous / nonsmooth solutions with high - order accuracies through the decomposition of the solution interval into small mesh intervals or elements `` -refinement , '' and approximating the restricted solution function on each mesh interval with high - order truncated spectral expansion series `` -refinement . ''considering the solution interval ] : = \bigcup\limits_{k = 1}^k { { \mkern 1mu } { \bm{\omega } _ k } } , \quad { \bm{\omega } _ k } = [ { \tau _ { k - 1}},{\tau _ k}],\quad - 1 = { \tau _ 0 } <{ \tau _ 1 } < \ldots < { \tau _ k } = 1.\ ] ] we denote the state and control vector functions in the element by and , respectively . 
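as a concrete illustration of the change of variables and the mesh decomposition described above, the following short python sketch implements the affine map between the physical horizon [t_0, t_f] and the computational interval [-1, 1], together with a simple partition of [-1, 1] into k elements. this is a minimal sketch: the function names and the uniform initial partition are illustrative choices, not part of the method itself.

```python
import numpy as np

def to_tau(t, t0, tf):
    """Affine map from physical time t in [t0, tf] to tau in [-1, 1]."""
    return (2.0 * t - (tf + t0)) / (tf - t0)

def to_t(tau, t0, tf):
    """Inverse affine map from tau in [-1, 1] back to physical time."""
    return 0.5 * ((tf - t0) * tau + (tf + t0))

def uniform_mesh(K):
    """Partition [-1, 1] into K elements; returns the K+1 break points
    -1 = tau_0 < tau_1 < ... < tau_K = 1 (uniform here only as a starting mesh)."""
    return np.linspace(-1.0, 1.0, K + 1)

if __name__ == "__main__":
    t0, tf = 0.0, 1.0
    breaks = uniform_mesh(4)
    print(to_tau(np.array([t0, 0.25, tf]), t0, tf))   # [-1. -0.5  1.]
    print(to_t(breaks, t0, tf))                       # element edges in physical time
```

the adaptive strategy described later relocates and multiplies these break points; the uniform grid above serves only as an initial mesh.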
based on this initial setting , we can put problem 2 into its multiple - interval form as follows : [ pr:3 ] to take advantage of the well - conditioning of numerical integration operators , we further rewrite eq . in its integral formulation so that to impose the states continuity conditions , the following conditions must be fulfilled at the interface of any two consecutive mesh intervals : or equivalently , if the control vector function is assumed to be continuous , then we further add either of the following two sets of constraints : we refer to the optimal control problem , , , , provided with any coupled sets of conditions or and conditions or by problem 3 .let be the - degree shifted gegenbauer polynomial defined on the mesh interval henceforth referred to by the - degree elemental shifted gegenbauer polynomial , where is the classical - degree gegenbauer polynomial associated with the real parameter ; cf .moreover , let denote the set of the zeroesth elemental shifted gegenbauer - gauss ( kesgg ) nodes in .] of the - degree elemental shifted gegenbauer polynomial , , for some , and set .the elemental shifted gegenbauer polynomials form a complete -orthogonal system with respect to the weight function , and their orthogonality relation is defined by the following weighted inner product : where is the kronecker delta function , is the normalization factor , and is as defined by ( * ? ? ?* eq . ( 2.6 ) ) . for and , we recover the shifted chebyshev polynomials of the first kind and the shifted legendre polynomials , respectively , on each mesh interval .let , \forall l \in \mathbb{z}_0^ + , \ ] ] and denote the identity matrix of order by .moreover , define ] as the spectral coefficient vectors obtained through collocating the state and control vectors at the augmented kesgg nodes , respectively , where ^t},{\bm{b}}_s^{(k ) } = { \left [ { b_{s,0}^{(k)},b_{s,1}^{(k ) } , \ldots , b_{s,{l_{u , k}}}^{(k ) } } \right]^t}\ ; \forall k \in \mathbb{k } , r = 1 , \ldots , n_x ; s = 1 , \ldots , n_u ] , by .the sought discrete cost function can be written as where is the all ones vector , is the all alternating ones vector for all , and ^t.\end{aligned}\ ] ] to account for the state continuity conditions , say eqs ., the discrete integral dynamical system equations on the elemental domains ( or simply elements ) can be approximated by [ eq : discdynsys1 ] where ^t.\end{aligned}\ ] ] furthermore , the discrete path and boundary constraints are given by the discrete control continuity constraints are imposed as follows : hence , the optimal control problem has been reduced to a nonlinear programming problem in which we seek the minimization of the objective function defined by eq .subject to the generally nonlinear constraints , , , and the linear constraints .the present gise method adopts both collocation and interpolation techniques to obtain the sought approximations .in particular , the spectral coefficient vectors are determined through collocation at the augmented kesgg nodes on each element , while the kesobgims are constructed through interpolation at the kesgg nodes . in this section, we present a multicriterion for locating the elements where the state and control functions are smooth / nonsmooth based on : (i ) the maximum residual of the discrete dynamical system equations ; i.e. 
, checking whether the state and control variables at the midpoints of each segment joining two consecutive discretization points on the same element meet the restrictions of the dynamical system equations , ( ii ) the magnitude of the last coefficients in the state and control truncated series . to illustrate the proposed adaptive technique ,let us begin by defining the elemental midpoints vector ^t } : \check\tau _ { { n_k},i}^{(k),\alpha } = \frac{1}{2}\left ( { \hat \tau _ { { n_k},i}^{(k),\alpha } + \hat \tau _ { { n_k},i + 1}^{(k),\alpha } } \right),\ ; i = 0 , \ldots , { n_k } ; k \in \mathbb{k}. ] where `` ] from the largest element of , and calculate the arithmetic mean , , of the elements of as follows : .finally , we find the residual vector via calculating . now , let be a user - specified threshold for the size of the elements of the vector , and define a discrete local maximum ( peak ) of by the data sample that is larger than its two neighboring samples ; i.e. , the value .let be the row vector of the local maxima of .we have the following three cases : : : if , then set ] . : : if , then set ] .the following theorem marks the error bounds of the quadrature truncation error given by the above theorem on each element .[ thm:2 ] given the assumptions of theorem [ sec : erranalysgp1 ] such that , where the constant is dependent on but independent of .then there exist some constants and , dependent on and independent of such that the quadrature truncation error , , on each element is bounded by where ; the constants and are dependent on , but independent of .the proof can be established easily using ( * ? ? ?* lemmas 4.1 and 4.2 ) .it is noteworthy to mention that the accuracy achieved by a spectral differentiation / integration matrix used by traditional pseudospectral methods in the literature is usually constrained by the number of collocation points .therefore , increasing the number of collocation points on each domain , requires a similar grow in the size of the spectral differentiation / integration matrix , which could result in a significant grow in the total computational cost of the method . on the other hand , a notable merit of the present method as shown by theorem [ thm:2 ]occurs in taking advantage of the free rectangular form of the kesobgim .in particular , regardless of the small / large number of collocation points used to determine the approximate states and controls , the present method endowed with the kesobgim can achieve almost full machine precision approximations to the integrals involved in the optimal control problem using relatively moderate values of the parameters and on each element ; thus achieving excellent approximations while maintaining a low operational cost . 
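the branching rule of the adaptive strategy is only partially recoverable from the text above (several thresholds and set assignments are elided), but its two ingredients, peak detection in the elemental residual and inspection of the trailing spectral coefficients, can be sketched in python as below. the threshold values, the refinement increment, and the exact branching are illustrative placeholders and not the authors' specification.

```python
import numpy as np

def local_peaks(r):
    """Indices i where r[i] exceeds both neighbours (discrete local maxima)."""
    return [i for i in range(1, len(r) - 1) if r[i] > r[i - 1] and r[i] > r[i + 1]]

def refine_element(residual, last_coeffs, eps_r=1e-8, eps_c=1e-8, dN=4):
    """Hypothetical per-element decision:
    - residual    : dynamics residual sampled at the elemental midpoints
    - last_coeffs : magnitudes of the trailing state/control spectral coefficients
    Returns ('accept', None), ('p-refine', dN) or ('h-refine', peak indices)."""
    residual = np.asarray(residual, dtype=float)
    if residual.max() <= eps_r and max(last_coeffs) <= eps_c:
        return "accept", None                   # solution resolved on this element
    peaks = [i for i in local_peaks(residual) if residual[i] > eps_r]
    if peaks:                                   # localized error -> likely nonsmooth:
        return "h-refine", peaks                # split the element near the peaks
    return "p-refine", dN                       # smooth but under-resolved: raise degree

if __name__ == "__main__":
    smooth = [1e-3, 2e-3, 3e-3, 4e-3, 5e-3]             # broad, featureless residual
    kinked = [1e-10, 1e-10, 5e-2, 1e-10, 1e-10]          # sharp localized residual
    print(refine_element(smooth, last_coeffs=[1e-4]))    # ('p-refine', 4)
    print(refine_element(kinked, last_coeffs=[1e-3]))    # ('h-refine', [2])
```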
we shall demonstrate this virtue further in the next section .in this section , we report the results of the present gise method on two nonlinear optimal control problems well studied in the literature .the nonlinear programming problems were solved using snopt software with the major and minor feasibility tolerances , and major optimality tolerance all set at .the numerical experiments were conducted on a personal laptop equipped with an intel(r ) core(tm ) i7 - 2670qm cpu with 2.20ghz speed running on a windows 10 64-bit operating system and provided with matlab r2014b ( 8.4.0.150421 ) software .[ [ example-1 ] ] * example 1 * + + + + + + + + + + + consider the following nonlinear optimal control problem : [ ex1:1 ] ,\\ & u(t ) \in [ 0,1],\;x(0 ) = 1,\;x(1 ) = 0.\end{aligned}\ ] ] this problem was numerically solved in a series of papers ; cf .the work presented in the latest article of this series , , adopted a linearization of the nonlinear dynamical system via a linear combination property of intervals followed by a `` _ _ random _ _ '' interval partitioning ( three switching points were chosen randomly ) and an integral reformulation of the multidomain dynamical system .the transformed problem was then collocated at the legendre - gauss - lobatto points and the involved integrals were approximated using the legendre - gauss - lobatto quadrature rule .moreover , the control and state functions were approximated by piecewise constants and piecewise polynomials , respectively .we applied the present gise method for solving the problem numerically using the parameter settings , and .all state and control coefficients were initially set to one .figure [ fig : ex1statecontrol ] shows a sketch of the obtained approximate optimal state and control profiles on ] . therefore , the adaptivity of the method enables a fast implementation using a single collocation grid without any domain partitioning ; thus .figure [ fig : ex1fun1 ] shows the plot of the approximate optimal cost functional , for several values of and . as observed from the figure , the reported approximate optimal cost function values approach for increasing values of collocation points and spectral coefficients in a close agreement with the results obtained by .figure [ fig : ex1coeff1 ] manifests further the corresponding exponential ( spectral ) decay of the last optimal coefficients in the state and control truncated series , and .in fact , figure [ fig : ex1coeff1 ] shows an interesting behavior of the gise method .in particular , the figure shows that the last optimal coefficients in the state and control shifted gegenbauer truncated series generally decay faster for negative values of the gegenbauer parameter than for positive values , and this deterioration phenomenon seems to happen monotonically as the value of approaches .this numerical simulation is in close consensus with the work of on the numerical solution of the second - order one - dimensional hyperbolic telegraph equation using a shifted gegenbauer pseudospectral method . in particular , the latter showed theoretically that the coefficients of the bivariate shifted gegenbauer expansions decay faster for negative -values than for non - negative -values , but the asymptotic truncation error as the number of collocation points grows largely is minimized in the chebyshev norm exactly at ; i.e. , when applying the shifted chebyshev basis polynomials . 
figure [ fig : ex1fun1 ] indicates that collocations at negative values of close to is not to be endorsed for increasing values of collocation points and expansion terms . in particular , while the values of seem to be matching for almost all of the -values used in the numerical simulation , a peak in the surface of the approximate optimal cost function is clearly observed at , for , indicating a poor approximation in this case . on the other hand , pointed out that the gegenbauer quadrature ` may become sensitive to round - off errors for positive and large values of the parameter due to the narrowing effect of the gegenbauer weight function , ' which drives the quadrature to become more extrapolatory .in particular , identified the range , as a preferable choice to construct the gegenbauer quadrature , for some relatively small positive number and ] by _ `` the gegenbauer collocation interval of choice , '' _ and denote it by .figure [ ex1fun1_expandedperfect ] shows a sketch of the approximate optimal cost functional for , and , where we can clearly see the rise of hills in the surface profile for increasing values of demonstrating poor approximations for such -values .this formation of hills for increasing values of is salient as well in the surface profiles of the corresponding magnitudes of the last coefficients in the state and control truncated series ; cf .figure [ ex1coeff1expandedperfect ] .in general , we largely endorse the following rule of thumb . [ [ rule - of - thumb ] ] * rule of thumb * + + + + + + + + + + + + + + + _ it is generally advantageous to collocate problem 3 for values of for small / medium numbers of collocation points and gegenbauer expansion terms ; however , collocations at the shifted chebyshev - gauss points should be put into effect for large numbers of collocation points and gegenbauer expansion terms if the approximations are sought in the infinity norm ( chebyshev norm ) . _+ we shall further examine experimentally this rule of thumb in the next example , where the exact control function is given in closed form . 
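since the numerical experiments above revolve around the choice of the gegenbauer parameter, it may help to note that the shifted gegenbauer-gauss nodes for a given parameter value can be generated directly with scipy, whose roots_gegenbauer routine returns nodes and weights on [-1, 1] for the weight (1 - x^2)^(alpha - 1/2); alpha = 1/2 recovers the legendre-gauss points. the snippet below shifts these nodes onto an arbitrary mesh interval. it is a minimal sketch of grid generation only, not the paper's barycentric kesobgq construction.

```python
import numpy as np
from scipy.special import roots_gegenbauer

def elemental_gg_nodes(n, alpha, tau_left, tau_right):
    """Return n Gegenbauer-Gauss nodes and weights shifted from the reference
    interval [-1, 1] onto the element [tau_left, tau_right].
    roots_gegenbauer(n, alpha) gives nodes/weights for the weight
    (1 - x^2)**(alpha - 1/2); alpha = 0.5 corresponds to Legendre-Gauss."""
    x, w = roots_gegenbauer(n, alpha)
    half = 0.5 * (tau_right - tau_left)
    mid = 0.5 * (tau_right + tau_left)
    return mid + half * x, half * w   # nodes shifted, weights scaled by the half-length

if __name__ == "__main__":
    # e.g. 8 nodes on the element [-1, -0.5] with Gegenbauer parameter alpha = 0.3
    nodes, weights = elemental_gg_nodes(8, 0.3, -1.0, -0.5)
    print(nodes)
    # sum of shifted weights = (half-length) * integral of (1-x^2)**(alpha-1/2) on [-1, 1]
    print(weights.sum())
```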
] into the three domains , \bm{\omega}_2 = [ -0.3906,0.3906] ] ; thus .these three mesh intervals correspond to the three domains , \bm{\omega}_{2,\text{orig } } = [ 0.3047,0.6953] ] of the original optimal control problem .figure [ ex2_all ] shows the plots of the approximate state functions , and , exact and approximate control functions , and , respectively , on the interval ] in log - lin scale , where a rapid error decay is clearly seen .in contrast , figure [ ex2_u_err_single ] shows the slow convergence of the absolute error on the interval ] at the exact edge points and , which correspond to the original edge points and in ] in log - lin scale using .high - order control approximations are achieved in all cases , except for , and , where a significant recession in accuracy is reported near the time boundary points and in the former case , while a tangible error growth in the vicinity of the right endpoint is reported in the latter two cases .a typical hp - pseudospectral method would normally apply a square differentiation / integration matrix of size to compute the derivatives / integrals involved in the optimal control problem ; thus requires flops to evaluate the derivatives of a real - valued differentiable function at a set of collocation points for each , or the definite integrals of an integrable function using the same sets of collocation points as the upper limits of the integrations .in contrast , the kesobgim requires flops to evaluate the needed integrals . to prevent an enormous amount of calculations, we can set at a relatively medium value , say , usually sufficient to achieve nearly full machine precision approximations to the integrals of well - behaved functions for large values of .for instance , using and , we can evidently count a substantial difference of flops between the developed kesobgim and a standard operational matrix of differentiation / integration for each derivative / integral per mesh interval . to visualize the big picture , notice that the discretization of the present optimal control problem requires the evaluation of a single integral for the cost functional , and integrals involved in eqs ., for a total of integral evaluations .working out the mathematics , it is not hard to realize a remarkable gap of flops in favor of the present gise method endowed with the kesobgim !the current study casts the light on the judicious choice of the kesgg collocation points set , , to be utilized on each mesh interval during the discretization process of optimal control problems .in particular , the current work supports collocations performed at , for small / medium numbers of collocation points and gegenbauer expansion terms .nonetheless , it would be extremely beneficial to determine theoretically the optimal collocation sets for each domain * a question which yet remains open*. 
[figure captions: approximate state functions (left and middle) and exact versus approximate control functions (right) on the transformed interval; companion panels show the corresponding absolute errors of the control approximations in log-lin scale for several parameter choices. all plots were generated using linearly spaced nodes in each domain.]

the current gise method was tested on only two numerical test problems in an attempt to keep the manuscript compact; further test problems may be needed to verify the power of the proposed method more fully, and a further theoretical study may be conducted to analyze the convergence of the gise method. motivated by the spectral accuracy offered by spectral element methods, we have proposed a fast, economical, high-order algorithm for the solution of nonlinear optimal control problems exhibiting smooth/nonsmooth solutions. the combination of information derived from the residual of the discrete dynamical system equations and the magnitude of the last coefficients in the state and control truncated series forms a powerful multicriterion adaptive strategy for boosting the accuracy of the state and control approximations. another major source of the strength of the proposed method lies in the free rectangular form of the kesobgim, which allows excellent approximations to integrals with accuracy approaching machine precision; remarkably, this result is achieved regardless of the number of collocation points used in the discretization process. the numerical experiments support collocations of nonlinear optimal control problems performed at the gegenbauer parameter values identified above for small/medium numbers of collocation points and gegenbauer expansion terms. the proposed method can be easily extended to different problems and applications.
in this work, we propose an adaptive spectral element algorithm for solving nonlinear optimal control problems. the method employs orthogonal collocation at the shifted gegenbauer-gauss points combined with very accurate and stable numerical quadratures to fully discretize the multiple-phase integral form of the optimal control problem. the method brackets discontinuities and ``points of nonsmoothness'' through a novel local adaptive algorithm, which achieves a desired accuracy on the discrete dynamical system equations by adjusting both the mesh size and the degree of the approximating polynomials. a rigorous error analysis of the developed numerical quadratures is presented. finally, the efficiency of the proposed method is demonstrated on two test examples from the open literature.
a two - user genie - aided cognitive interference channel ( cic ) is a two - user interference channel ( ic ) in which one of the transmitters ( termed the secondary transmitter , here ) knows the other transmitter s message ( termed the primary transmitter , here ) noncausally ( i.e. , by a genie ) .devroye , mitran , and tarokh ( dmt ) in their premier paper , titled `` achievable rates in cognitive radio channels , '' derived an achievable rate region for the discrete memoryless cic ( [ [ dmt : cic ] , th . 1 ] ) .we observe that the coding scheme proposed in is correct but unfortunately the derived achievable rate region is incorrect because of occurring some mistakes in decoding and analysis of error probability .we first intuitively show that some rate - terms in the dmt rate region seem to be incorrect ( in fact , they are incomplete ) .then , we correct the dmt achievable rate region and thereby show that the corrected achievable rate region includes the dmt rate region given in .the two - user discrete memoryless cic ( dm - cic ) , denoted by consists of four finite alphabets and a collection of conditional probability mass functions on .the channel is memoryless in the sense that . in this channel transmitter ,wants to send a message , uniformly distributed on , to its respective receiver .the primary transmitter generates the codeword as , and the secondary transmitter , being non - causally aware of the primary message , generates the codeword as . the decoding function is given by . a pair of non - negative real values is called an achievable rate for the dm - cic if for any given and for any sufficiently large , there exists a sequence of encoding functions , and a sequence of decoding functions , such that l p_e^(n)=p_r\ { g_1(y_1^n ) m_1 g_2(y_2^n ) m_2 | ( m_1,m_2 ) } where is the average probability of error .the closure of the set of all achievable rate pairs is called the capacity region . in ,devroye _ et al . _, by using rate splitting , divided each message , , into two independent sub - messages : * common sub - message at rate ( to be sent from ) , * private sub - message at rate ( to be sent from ) , such that . in thispaper , auxiliary random variables ( rvs ) and represent the sub - messages and , respectively . moreover , rv is time sharing rv which is independent of all other rvs .we now present the dmt achievable rate region for the two - user genie - aided dm - cic ._ theorem 1 [ [ dmt : cic ] , th .1 ] : _ let be the set of all joint distributions that factor as l p(q , u_1c , u_1p , u_2c , u_2p , x_1,x_2)= + p(q)p(u_1c|q)p(u_1p|q)p(x_1|q , u_1c , u_1p)p(u_2c|q , u_1c , u_1p)p(u_2p|q , u_1c , u_1p)p(x_2|q , u_2c , u_2p ) .[ pdf_dmt ] for any , let be the set of all quadruples of non - negative real numberssuch that there exist non - negative real satisfying then * is an achievable rate region for the genie - aided dm - cic in terms of , * is the implicit description of the dmt achievable rate region where is the set of all pairs of non - negative real numbers such that and for some .we first saw that the rate - terms ( 2.7)(2.9 ) and ( 2.13)(2.16 ) of intuitively seem to be incomplete because of not utilizing some dependencies among rvs .for example , in ( 2.7 ) we have the main term . by considering the coding exploited in and and also since in the main term ,rv is known ( or given ) and rvs are unknown , we expect that the dependency between known rv and unknown rvs as well as the dependency between unknown rvs and help communication and boost the rates . 
as we observe in ( 2.7 ) , the term is added to the main term but unfortunately , the term is not .similarly , in ( 2.8 ) the dependency between and , in ( 2.9 ) the dependency between and , in ( 2.13 ) the dependency between and , in ( 2.14 ) the dependency between and , in ( 2.15 ) the dependency between and , and in ( 2.16 ) the dependencies among , and can help communication and boost the rates , while they are overlooked in . in this section, we present the corrected version of the dmt rate region that utilizes the aforementioned dependencies among auxiliary rvs in boosting the rates . _ theorem 2 [ corrected th . 1 ] : _ for any , let be the set of all quadruples of non - negative real numberssuch that there exist non - negative real satisfying then * is an achievable rate region for the genie - aided dm - cic in terms of , * is the implicit description of the corrected dmt achievable rate region where is the set of all pairs of non - negative real numbers such that and for some .the generation of the codewords and can be performed independently of and by using binning scheme . in other words ,the codebook is generated according to the distribution * generate independent and identically distributed ( i.i.d . ) _n_-sequences , each according to ; * generate i.i.d ._ n_-sequences , each according to ; * generate i.i.d ._ n_-sequences and , each according to ; ( i.e. , bins and sequences in each bin ) * generate i.i.d ._ n_-sequences and each according to ; ( i.e. , bins and sequences in each bin ) the cognitive transmitter , being non - causally aware of and , to send and , first looks for indices and in bins and , respectively , such that l e^enc2_1=\{(q^n , u^n_1p(1),u^n_1c(1),u^n_2c(1,l_2c))a^(n)_(q , u_1p , u_1c , u_2c ) for all l_2c\{1 , , 2^nr^_2c } } + e^enc2_2=\{(q^n , u^n_1p(1),u^n_1c(1),u^n_2p(1,l_2p))a^(n)_(q , u_1p , u_1c , u_2p ) for all l_2p\{1 , , 2^nr^_2p } } [ enc.error ] l e^dec1_1=\{(q^n , u^n_1p(m_1p),u^n_1c(1),u^n_2c(1,l^*_2c),y^n_1)a^(n)_1 for m_1p 1 } + e^dec1_2=\{(q^n , u^n_1p(1),u^n_1c(m_1c),u^n_2c(1,l^*_2c),y^n_1)a^(n)_1 for m_1c 1 } + e^dec1_3=\{(q^n , u^n_1p(1),u^n_1c(1),u^n_2c(m_2c , l_2c),y^n_1)a^(n)_1 for m_2c 1 and l_2c l^*_2c } + e^dec1_4=\{(q^n , u^n_1p(m_1p),u^n_1c(m_1c),u^n_2c(1,l^*_2c),y^n_1)a^(n)_1 for m_1p 1 and m_1c 1 } + e^dec1_5=\{(q^n , u^n_1p(m_1p),u^n_1c(1),u^n_2c(m_2c , l_2c),y^n_1)a^(n)_1 for m_1p 1 , m_2c 1 and l_2c l^*_2c } + e^dec1_6=\{(q^n , u^n_1p(1),u^n_1c(m_1c),u^n_2c(m_2c , l_2c),y^n_1)a^(n)_1 for m_1c 1 , m_2c 1 and l_2c l^*_2c } + e^dec1_7=\{(q^n , u^n_1p(m_1p),u^n_1c(m_1c),u^n_2c(m_2c , l_2c),y^n_1)a^(n)_1 for m_1p 1 , m_1c 1 , m_2c 1 + and l_2c l^*_2c } [ dec1.errors ] where , for simplicity , is denoted by .note that the probability of decoding error events will be evaluated by considering : ( i ) the encoding distribution ( i.e. , ) and the actual transmitted sequences , and ( ii ) the codebook generation distribution , the correctly decoded sequences and how to generate the sequences . 
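every rate constraint in theorems 1 and 2 is a (conditional) mutual information evaluated under the chosen input distribution, so testing whether a candidate rate tuple satisfies the bounds reduces to computing terms of the form i(x ; y | z) from a joint pmf. the sketch below does this for a joint distribution given as a three-dimensional probability table; the toy distribution in the example is arbitrary and is not the factorization required by theorem 1.

```python
import numpy as np

def cond_mutual_info(pxyz):
    """I(X; Y | Z) in bits, for a joint pmf given as a 3-D array p[x, y, z].
    Uses I(X;Y|Z) = sum p(x,y,z) * log2( p(x,y,z) p(z) / (p(x,z) p(y,z)) )."""
    pxyz = np.asarray(pxyz, dtype=float)
    pz = pxyz.sum(axis=(0, 1))
    pxz = pxyz.sum(axis=1)
    pyz = pxyz.sum(axis=0)
    mask = pxyz > 0
    num = pxyz * pz[np.newaxis, np.newaxis, :]
    den = pxz[:, np.newaxis, :] * pyz[np.newaxis, :, :]
    return float(np.sum(pxyz[mask] * np.log2(num[mask] / den[mask])))

if __name__ == "__main__":
    # Toy check: X uniform binary, Y a noiseless copy of X, Z independent of (X, Y)
    # and uniform binary.  Then I(X; Y | Z) should equal H(X) = 1 bit.
    p = np.zeros((2, 2, 2))
    for x in range(2):
        for z in range(2):
            p[x, x, z] = 0.25
    print(cond_mutual_info(p))   # ~1.0
```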
as we mentioned earlier , for decoder 1 only the rate - terms ( 2.7)(2.9 ) are wrong , therefore we only evaluate the probabilities of , and .the probability of the event can be bounded as , goes to zero as if ( 3.7 ) is satisfied .similarly , and can be bounded as , and as if ( 3.8 ) and ( 3.9 ) are satisfied , respectively .l e^dec2_1=\{(q^n , u^n_2p(m_2p , l_2p),u^n_2c(1,l^*_2c),u^n_1c(1),y^n_2)a^(n)_2 for m_2p 1 and l_2p l^*_2p } + e^dec2_2=\{(q^n , u^n_2p(1,l^*_2p),u^n_2c(m_2c , l_2c),u^n_1c(1),y^n_2)a^(n)_2 for m_2c 1 and l_2c l^*_2c } + e^dec2_3=\{(q^n , u^n_2p(1,l^*_2p),u^n_2c(1,l^*_2c),u^n_1c(m_1c),y^n_2)a^(n)_2 for m_1c 1 } + e^dec2_4=\{(q^n , u^n_2p(m_2p , l_2p),u^n_2c(m_2c , l_2c),u^n_1c(1),y^n_2)a^(n)_2 for m_2p 1 , l_2p l^*_2p , m_2c 1 + and l_2c l^*_2c } + e^dec2_5=\{(q^n , u^n_2p(m_2p , l_2p),u^n_2c(1,l^*_2c),u^n_1c(m_1c),y^n_2)a^(n)_2 for m_2p 1 , l_2p l^*_2p , m_1c 1 } + e^dec2_6=\{(q^n , u^n_2p(1,l^*_2p),u^n_2c(m_2c , l_2c),u^n_1c(m_1c),y^n_2)a^(n)_2 for m_2c 1 , l_2c l^*_2c , m_1c 1 } + e^dec2_7=\{(q^n , u^n_2p(m_2p , l_2p),u^n_2c(m_2c , l_2c),u^n_1c(m_1c),y^n_2)a^(n)_2 for m_2p 1 , l_2p l^*_2p , m_2c 1 , + l_2c l^*_2c and m_1c 1 } [ dec2.errors ] where , for simplicity , is denoted by .as we mentioned earlier , for decoder 2 only the rate - terms ( 2.13)(2.16 ) are wrong , therefore we only evaluate the probabilities of , , and . the probability of the event can be bounded as where , is obtained by considering this fact and are dependent in general and the encoding distribution says that they are independent only when are given , i.e. , form a markov chain .hence , goes to zero as if ( 3.13 ) is satisfied .similarly , , and can be bounded as , , and as if ( 3.14 ) , ( 3.15 ) and ( 3.9 ) are satisfied , respectively .this completes the proof of theorem 2 . 6 [ dmt : cic ] n. devroye , p. mitran , v. tarokh , `` achievable rates in cognitive radio channels , '' _ ieee trans .inform . theory _5 , pp . 18131827 , may 2006 .[ gelfand : pinsker ] s. gelfand and m. pinsker , `` coding for channels with random parameters , '' _ probl . contr . andtheory , _ vol .1 , pp . 1931 , 1980 .
in a premier paper on the information-theoretic analysis of the two-user cognitive interference channel (cic), devroye _et al._ presented an achievable rate region for the two-user discrete memoryless cic. the coding scheme proposed by devroye _et al._ is correct, but unfortunately some rate-terms in the derived achievable rate region are incorrect (in fact, incomplete) because of mistakes made in the decoding and in the analysis of the error probability. we correct and complete the wrong rate-terms and thereby show that the corrected achievable rate region includes the rate region presented in .

keywords: achievable rate region, cognitive interference channel, gelfand-pinsker coding.
the distortion of sound waves in materials with spatially varying index of refraction can be controlled using concepts from rays and high frequency propagation .for instance , all ray paths can be made to behave according to a prescribed pattern , e.g. convergence at a point on the other side .transformation acoustics ( ta ) goes further in making the material replicate an equivalent volume of `` virtual '' acoustic fluid which faithfully mimics the wave equation itself rather than some asymptotic approximation .such ta - based gradient index ( grin ) lenses fall under the umbrella of acoustic metamaterials , a field which has seen tremendous innovation in recent years . however , to build a ta - grin lens in the laboratory often demands compromise between the frequency range of operation , transmission loss and lensing effectiveness , particularly in water , the acoustic medium of interest here .a successful ta - grin lens simultaneously displaying high transmission and accurate wave steering can be achieved in water using a sonic crystal ( sc ) array of elastic scatterers .quasi - periodic scs are capable of filtering , guiding and/or steering an incident wave based on a gradient of effective properties . unlike phononic crystals, scs can not support shear waves in the bulk , hence , energy loss to mode conversion is minimized .the localized effective acoustic properties of a sc element are an average of the fluid and contained elastic scatterer .these depend on the shape , filling fraction , the effective bulk modulus and the effective density of the scatterer . in order to display the inhomogeneity required for a grin lens, the properties of the elements have to differ in a quasi - continuous manner .this has been successfully achieved in air and in water by fixing the lattice constant and varying the filling fraction of solid cylinder scatterers in the fluid unit cell . for air - based scs the cylinders can be modeled as rigid . for water - based scsthe elasticity of the scatterer is not only non - negligible , but essential in the modeling of such structures .typical engineering materials , such as metals , are much denser than water leading to impedance mismatch and undesired scattering .a solution is to use a hollow air - filled elastic shell which has an effective density and bulk modulus much closer to that of water as compared to the solid material .the effective acoustic properties ( speed , impedance ) depend on the material of the shell and , in particular , on its thickness .the sensitivity of the effective compressibility to shell thickness is a consequence of hoop stress in thin shells , which combined with the dependence of the effective density , results in the fact that thin shells have effective sound speed that is independent of thickness .the effective impedance , on the other hand , is a linear function of thickness in the same thin shell approximation .these two basic facts together indicate that by choosing the material and the thickness , it is possible to achieve a wide range of effective properties , as illustrated in the chart in figure [ f1 ] ( motivated by earlier work in ) .this is the central idea in the present work . andbulk modulus of hollow cylindrical shells for ten commonly available materials normalized relative to water , from eq . 
.each curve shows the properties as a function of the relative thickness to radius ratio , from small to large as indicated by the arrow .circles indicate the values for .diagonal dashed lines indicate where the effective acoustic speed and impedance coincide with those of water .( color online),width=326 ] the purpose of the present paper is to demonstrate the potential for ta - based grin lens design in water using the wide variety of shells available .the transformation acoustics example considered in detail here is the cylindrical - to - plane wave lens discussed by layman et al . it works by steering waves from a monopole source at the center away from the corners to the faces of the lens .the sc of ref . is based on constructive multiple scattering from finite embedded elastic materials in a fluid matrix , something previously investigated by torrent and sanchez - dehesa .the grin lens device considered here expands the possibilities in ref . by increasing the range of achievable properties over those presented by martin et al . .the cylindrical - to - plane wave lens is designed to increase radiation in specific directions .enhanced directionality has also been experimentally observed for an acoustic source placed inside a two - dimensional square lattice phononic crystal operating at the band - edge frequency .highly directional acoustic wave radiation is also possible in 2d pcs at pass band frequencies far away from the band edge states , as shown in simulations of a square lattice of steel cylinders in water .the use of the band structure of a periodic square array to produce directional water wave radiation was proposed by , and subsequently demonstrated in experimental measurements on a 6 array of surface - breaking cylinders with a monopolar source at the array center .directional radiation has been demonstrated in air using a non - periodic array of cylinders to produce scattering enhancement in the forward direction .martin et al . produced acoustic grin focusing by changing the lattice constant in a pc with elastic shell elements .parallel zigzag rigid screens have also been proposed as potential focusing and directional beaming devices . while the spatial filtering device described in this paper uses a fluid matrix , morval et al . show directional enhancement of a monochromatic acoustic source into a surrounding water medium using a square array of cylinders in a solid matrix ; the 2-dimensional quadropolar collimation effect is based on square - shaped equifrequency contours of the phononic crystal . 
although the solid matrix has obvious practical advantage , the narrow frequency device of yields decreased amplitude in the preferential directions as compared with the free field radiation .the ta - based device described here does not have these limitations , and shows for the first time as far as we are aware , broadband positive gain in a neutrally buoyant square grin lens , with obvious implications for low loss underwater application .the outline of the paper is as follows .transformation acoustics and the mapping for the cylinder - to - square lens are described in section [ sec2 ] .acoustical properties of cylindrical shells are discussed in section [ sec3 ] and the proposed design using available cylindrical tubes is presented .the experimental setup is described and acoustical measurements are discussed in section [ sec4 ] , with concluding remarks in section [ sec5 ] .the transformation of a circular region to a square one can be achieved using a conformal change of coordinates .conformal mapping is a special case of the general theory of transformation acoustics ( ta ) .usually , in ta one can expect the material properties associated with a spatial transformation to display anisotropy .this could be in the density or the bulk modulus , or in both simultaneously , but usually something has to become anisotropic .conformal maps are unique in ta in that they do not require anisotropy . in this case both the inertial and the pentamodal forms of ta converge , and there is no ambiguity or degrees of freedom , a feature that distinguishes ta from its electromagnetic counterpart . at the same time , there is some confusion in the application of ta for conformal mappings , e.g. , so we briefly review the correct procedure .we are concerned with a background fluid ( water ) of density and bulk modulus in which the acoustic pressure satisfies where is the speed of sound and time harmonic dependence is understood . under a conformal transformation laplacian in the original variables becomes .if we define the pressure as then satisfies the helmholtz equation in the mapped coordinates with transformed acoustic speed where .this means that the transformed parameters are indeed isotropic , but it does not provide unique expressions for the individual parameters and , only the combination .the necessary second relation comes from the requirement that the pressure in the transformed fluid arises from a particle displacement field which satisfies the momentum equation and the pressure constitutive relation .eliminating gives the transformed helmholtz equation for if and only if is constant , which can be assumed equal to the original density . in summary ,the transformed parameters are _ 1 = , k_1 = | z_1 ( z)|^2 k. the lens is based the transformation of a circle of diameter into a square of side , with the precise form of the circle - to - square mapping given in the appendix . in particular, we note from eqs . and that the mapped value of the bulk modulus associated with the original point in the circle is k_1 = . along the principal directions the bulk modulus decreases from the center of the square to a global minimum at the center of the sides . along the diagonals it increases from its value at the center as it becomes unbounded at the four corners of the square .the overall trend is illustrated in figure [ f2 ] .[ h ! 
] for the cylindrical - to - square mapping .( color online),title="fig : " ]consider a cylindrical shell of thickness and outer radius made of uniform solid with density , shear modulus , and poisson s ratio .the interior is air filled , which in the context of water as the ambient medium in the exterior means that we can safely ignore the inertia and stiffness of the interior . the shell s effective density is the average value taken over the circular region of radius .the effective bulk modulus is the value for which the radial compression of a uniform circular region of fluid under external pressure is the same as that of the shell under the same pressure , which follows from plane strain elasticity . in summary ,_ & = ( 2h / a-(h / a)^2)_s , + k_&= _ s / ( 2(1-_s ) _ s/ _ - 1 ) .the unit cell of the square array , shown in figure [ f3 ] , consists of a solid cylindrical shell surrounded by a complementary region of water .the equivalent density and bulk modulus , , , of the unit cell depend on the properties of the surrounding fluid as well as the effective shell properties , according to [ -1 ] here is the shell volume fraction in the unit cell , where is the cylinder spacing as well as the side length of the unit cell .since the required density from ta is , it follows that the shell effective density is also constant , .the effective bulk modulus of the shell necessary to achieve the equivalent value from ta is = ( k^-1 + ( k_^-1 - k^-1 ) f^-1 ) ^-1 . as a function of the effective bulk modulus of the tuned shell for several filling fractions .( color online),width=316 ] the equivalent bulk modulus of the unit cell is significantly affected by the surrounding fluid . with the exception of , all in - plane modes produce no volume change and hence do not change the effective bulk modulus of the unit cell .no significant volume altering modes were observed in the frequency range considered .shells of radius cm with a relatively tight packing of yields a filling fraction of . in this case , in order to have the effective quasi - static bulk modulus of the unit cell , the effective bulk modulus of the shell - springs - mass system must be , see figure [ f4 ] .the proposed array contains 7 by 7 unit cells of size with cm giving a lens side length of cm .the central square element is left empty , requiring 48 cylinders .this was considered the minimal number necessary to provide both a reliable and an accurate gradient index effect .the spacing was chosen to reduce the overall dimension of the lens as much as possible , without making the filling fraction unduly large .inter - cylinder spacing in the fabricated lens was controlled from the two ends using preformed holders , see figure [ f8 ] below .figure [ f5 ] shows the discretized values for the equivalent stiffness of each unit cell as determined from figure [ f2 ] by spatial averaging .the effective properties of the shells are obtained from equation with , using the required equivalent stiffness of each unit cell in figure [ f2 ] . as noted above, this means that effective properties of the shells must be more extreme than those implied by the mapping alone .the effective density of each shell is tuned to water . in the 7x7 array .the central element is absent ( i.e. 
water ) in the constructed design.,width=307 ] the three primary design criteria were : 1 ) that the shells are readily available , 2 ) the effective density of each shell approximately matches water and 3 ) the effect is apparent in the designated frequency range of interest : near 20 to 25 khz .the shells must be sub - wavelength in dimension . furthermore all shells are required to have nearly the same outer diameter ; therefore, the common outer diameter of 0.5 inches is selected as practical . fixing this outer dimensionleaves two parameters : the shell material and its relative thickness .the range of effective properties as a function of both the shell material and thickness are succinctly summarized in figure [ f1 ] .several features are apparent from the chart in figure [ f1 ] .first , it is clear that the ten materials considered provide a comprehensive range of effective properties . for each material ,the effective properties are approximately linear functions of the shell thickness for thin shells , with some curvature at larger values of .the present design requires shells with effective density equal to that of water , which restricts values of to those near the vertical dotted line .table [ table1 ] summarizes the properties of available shells which have nearly the same density as water , but varying effective bulk moduli . [h ! ] .readily available shells ( i.e. tubes and pipes ) with different effective bulk moduli that have effective density close to that of water .all properties are normalized to water . [cols="<,^,^,^,^,^,^,^,^",options="header " , ] for the final design we considered only commercially available tubes made from a variety of materials with standard values of radius and thickness . as figure [ f1 ] illustrates , this provides a surprisingly wide range of possible properties , with the added advantage of allowing us to fabricate the lens with minimal effort and cost .based on the available candidates from table [ table1 ] we selected nine different shells as shown in figure [ f6 ] for the fabricated lens . .the actual thicknesses of the individual shells are indicated .( color online),width=259 ] + the total pressure field of the cylindrical - to - square lens made of elastic shells was obtained by numerical computation using comsol .figure [ f7 ] shows a simulation for a monopole source of frequency 22 khz in the center of the lens .also shown for comparison are the pressure fields for the lens with the unit cells replaced by the effective acoustic medium and the free field radiation of the monopole source .the simulations indicate that the cylindrical - to - plane wave lens made from the distribution of nine distinct empty shells performs very well as compared the optimal case of each unit cell having the prescribed effective acoustic properties directly from the conformal mapping .it is also evident that the transmission is enhanced in the four principal directions .the use of empty shells in water acoustics opens up the likelihood of exciting flexural resonances .this is not normally a concern when dealing with isolated shells because the flexural waves are subsonic in speed and hence do not radiate .the present design places the shells in close proximity , leading to the possibility of coherent flexural wave interaction , which can lead to strong scattering .this effect can have positive or negative consequences , depending on one s immediate goal . 
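the quasi-static homogenization relations used in the design of the preceding section can be sketched numerically. the snippet below encodes the area-averaged shell density rho_eff = (2h/a - (h/a)^2) rho_s quoted above, the mixing of the shell with the surrounding water in the unit cell (arithmetic in density, harmonic in compressibility, consistent with the relation quoted for the required shell modulus), and the inversion of that mixture to find the shell bulk modulus needed to reach a target cell modulus. the geometric filling fraction f = pi a^2 / d^2 for a cylinder of outer radius a in a square cell of side d, and all numbers in the example, are assumptions for illustration only.

```python
import numpy as np

RHO_W, K_W = 998.0, 2.19e9      # nominal water density [kg/m^3] and bulk modulus [Pa]

def shell_density(rho_s, h_over_a):
    """Area-averaged density of an air-filled shell of relative thickness h/a."""
    return (2.0 * h_over_a - h_over_a**2) * rho_s

def cell_properties(rho_shell, K_shell, f):
    """Quasi-static unit-cell averages for shell filling fraction f in water:
    density mixes arithmetically, compressibility (1/K) mixes harmonically."""
    rho_eq = f * rho_shell + (1.0 - f) * RHO_W
    K_eq = 1.0 / (f / K_shell + (1.0 - f) / K_W)
    return rho_eq, K_eq

def required_shell_modulus(K_eq_target, f):
    """Invert the harmonic mixture: shell modulus needed so that the unit-cell
    modulus equals K_eq_target (same relation as quoted in the text)."""
    return 1.0 / (1.0 / K_W + (1.0 / K_eq_target - 1.0 / K_W) / f)

if __name__ == "__main__":
    a, d, h = 6.35e-3, 15e-3, 0.37e-3         # placeholder radius, spacing, thickness [m]
    f = np.pi * a**2 / d**2                   # geometric filling fraction (assumed)
    rho_sh = shell_density(8900.0, h / a)     # copper-like shell, roughly density-matched
    print(f, rho_sh)
    print(cell_properties(rho_sh, 30e9, f))
    print(required_shell_modulus(1.5 * K_W, f))
```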
in the present situation the shells are of varying thickness and comprised of different materials , with the result that the flexural resonances are spread over many frequencies , which decreases the possibility for coherent interaction .in particular , we note that no such coherent effects were observed in the experiments ( see next section ). related and surprising constructive interference effects resulting from coherent interaction of flexural waves in closely packed arrays of shells in water are described elsewhere .the device pictured in figure [ f8 ] was fabricated to validate the cylindrical - to - plane wave lens design .the device tested has the cylinder positions , radii , and material properties provided in figure [ f6 ] and table [ table1 ] .although the model presented section [ sec3 ] is strictly two - dimensional ( 2d ) , which would suggest experimental validation using a 2d water waveguide , the test facilities available to the authors required a three dimensional ( 3d ) test configuration .details of the configuration and rationale for their selection are provided here .the as - tested lens is constructed from cylindrical rods 1 m in height and sealed at either end with urethane end - caps to prevent water intrusion .the cylinders are clamped between 2 cm thick acrylonitrile - butadiene - styrene ( abs ) plates using tensioned monofilament .the plates were machined to precisely locate the top and bottom of the cylinders in the positions dictated by the design .note that an added benefit of the urethane end - caps is that they provide some level of vibration isolation between the end plates and the cylinders. one key challenge to accurately measure the performance predicted in section [ sec3 ] was to minimize the effects of the finite height of the lens and thus observe its 2d response .the associated practical difficulty encountered was in the selection and placement of the appropriate acoustic source .validation of the lens design implies the need for an axis - symmetric source pressure along the vertical axis in the 3d lens , but no such source was available to the authors nor could one be easily constructed .acoustic reciprocity , described below , was invoked to resolve this difficulty .reciprocity is a fundamental principal of quiescent acoustic media , first fully described for acoustics by rayleigh ; it states that the interchange of source and receiver will lead to the same measured acoustic field if the environment is un - perturbed . specifically , if one excites acoustic waves at some point , _ a _ , then `` the resulting velocity potential at a second point , _ b _ , is the same both in magnitude and in phase , as it would have been at _ a _ had _ b _ been the source of sound . ''applying this principal to the problem at hand , it is possible to replace the axis - symmetric source at the center of the cylindrical - to - plane wave lens with a point receiver and then measure the acoustic field at the center of the lens due to a plane wave incident from a specified radial angle . 
by varying the angle incidence of the plane wave , it is thus possible to construct the far - field radiation pattern expected from an axis - symmetric source placed at the center of the lens .the only remaining problem is the generation of plane waves at a specified angle of incidence .this is achieved using a spherical wave source located sufficiently far away from the lens such that the phase of the pressure field impinging on the lens aperture has variations less than 1 across the entire frequency band of interest . for the lens geometry and frequencies considered ,this can be achieved by at least 10 m separation between the spherical source and the lens . .( color online ) ] the experiment was conducted at the lake travis test station ( ltts ) of the applied research laboratories ( arl ) at the university of texas at austin .an a48 hydrophone and associated pre - amplification electronics , which was fabricated and calibrated by the underwater sound reference division ( usrd ) of the naval undersea warfare center ( nuwc ) , was located at the center of the lens .this hydrophone has less than 1 db of variation across the entire frequency range of interest for this experiment , which was 1540 khz .the acoustic source was an omni - directional itc-1032 fabricated by channel technologies group .the source and lens with internal hydrophone were submerged to a depth of 5.5 m with a separation distance of 10 m. the lens was attached to a column capable of angular rotation through 360 .the source is then driven with 2 ms tone bursts from 1540 khz at 2.5 khz intervals and the time - series voltage output from the hydrophone was collected from 0 - 360 at approximately 0.5 intervals using a sampling frequency of 512 khz .this process was then repeated for the hydrophone without the lens as a reference and referred to as the bare hydrophone case .representative results from the series of experiments conducted on the cylindrical - to - plane wave lens are summarized in figures [ f9 ] and [ f10 ] .the results were obtained by performing post - processing of the time - series data output from the hydrophone , described next .for each angle and frequency , the steady - state portion of the tone - burst is identified through inspection of the time - domain voltage signal and a time gate is set so that only the steady state portion is considered .the magnitude of the signal at each frequency and angle combination is then found by averaging the magnitude of the complex envelope of the received voltage signal during its steady - state response .this process is carried out for the both measurement configurations ( hydrophone in the lens and bare hydrophone ) .the frequency- and angle - dependent gain is then calculated as $ ] .representative polar plots for 15 , 22.5 , 25 , and 40 khz from experimental data and 2d finite element models are shown in figure [ f9 ] .agreement between model and measurement for the both gain and angular dependence ( beam pattern ) match very well , with the location of the main lobes observed at 4 , 91 , 176 , and 268 on average across all frequencies inspected ( with the exception of the 20 khz case as described below ) .unexpected variations in beam pattern between predicted and measured performance are likely owing to imperfections in the constructed device .one very important observation of this data is the broadband performance of this metamaterial lens .the broadband nature of the response is clearly demonstrated by the results provided in figure [ f10 ] , which shows 
the measured half - power beam width ( -3 db points ) and on - axis gain averaged at across all four main lobes .the data clearly show that the as - tested lens provides broadband on - axis gain and beam - widths ranging from approximately 15 -30 for frequencies from 22.5 - 40 khz , respectively . finally , it is important to note that the red shaded region in figures [ f10 ] indicates a regime of flexural tube resonances where the lens behavior was significantly degraded .this experiment provides clear validation of the broadband impedance matched lensing effect provided by hollow cylinder metamaterial elements . .the shaded region ( 17.5-22.5 khz ) denotes a flexural tube resonance regime predicted in .broadband gain and narrow beamwidth is apparent over the entire range of frequencies inspected .( color online ) ]the results of this paper have shown the practical potential of using cylindrical elastic shells as elements in acoustic metamaterial devices .the demonstration test device considered is a cylindrical - to - plane wave structure for which the required element properties are determined from transformation acoustics .the size and material composition of the elements in the square array are chosen based on availability of shells , minimizing fabrication difficulties .the device has the added advantage that is neutrally buoyant by virtue of the transformation acoustics design .simulations indicated the operating frequency response of the final design would display a surprisingly broadband effect , which is verified in the experimental findings .the underwater measurements show effective conversion of the monopolar source to quadropolar radiation over an octave band ( 20 to 40 khz ) with positive gain in the desired directions , all despite the minimal number of elements used .these features have been demonstrated for the first time in a water - based acoustic lens device .future research will consider other device designs using cylindrical shell passive amm elements .the schwartz - christoffel conformal transformation of the unit disk to a square has been used previously for lens design via transformation optics and transformation acoustics . 
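the post-processing chain described above, gating the steady-state portion of each tone burst, averaging the magnitude of its complex envelope, and forming the lens-to-bare-hydrophone ratio in db, can be sketched as follows. the window placement, the synthetic signals, and the 20 log10 definition of the gain are assumptions consistent with the description rather than values taken from the measurements; only the 512 khz sampling rate is quoted from the text.

```python
import numpy as np
from scipy.signal import hilbert

def steady_state_magnitude(v, fs, t_start, t_stop):
    """Average magnitude of the complex envelope (analytic signal) of the received
    voltage v over the user-selected steady-state window [t_start, t_stop]."""
    i0, i1 = int(t_start * fs), int(t_stop * fs)
    env = np.abs(hilbert(v[i0:i1]))
    return env.mean()

def gain_db(v_lens, v_bare, fs, window):
    """Gain of the lensed hydrophone relative to the bare hydrophone, in dB."""
    a_lens = steady_state_magnitude(v_lens, fs, *window)
    a_bare = steady_state_magnitude(v_bare, fs, *window)
    return 20.0 * np.log10(a_lens / a_bare)

if __name__ == "__main__":
    fs, f0 = 512e3, 22.5e3                     # sampling rate and tone frequency [Hz]
    t = np.arange(0, 2e-3, 1 / fs)             # a 2 ms tone burst
    bare = np.sin(2 * np.pi * f0 * t)          # synthetic received signals
    lens = 2.0 * np.sin(2 * np.pi * f0 * t)    # lens doubling the pressure amplitude
    print(gain_db(lens, bare, fs, (0.5e-3, 1.5e-3)))   # ~6.02 dB
```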
herewe provide a simpler form of the transformation than that given in .our objective is a transformation from the plane of the unit circle , defined by the complex variable , to the plane containing the mapped square , defined by the complex variable ( for `` square '' ) .we first map the interior of the unit circle to the upper half plane of the variable through a bilinear transformation as with .the mapping that takes the upper half of the -plane to the -plane containing the square is a special case of the more general mapping known for mapping to polygons .thus , consider where taking , , , we find where is the incomplete elliptic integral of the first kind .the parameters and are found by setting , , and using , , where is the complete elliptic integral of the first kind and is the gamma function .hence , in terms of the original -plane containing the unit circle [1 - 14 ] equation and its inverse map the boundary points in the n , s , e , w , ne , nw , se and sw directions in the circle and square plane to one another .the density and bulk modulus are functions of the derivative of the mapping function .the derivative of is found from and , which gives .hence , for , and , latexmath:[\[\label{1 - 15a } this work was supported by onr through muri grant no .n00014 - 13 - 1 - 0631 and uli grant no .n00014 - 13 - 1 - 0417 .many thanks to dr .maria medeiros of onr ( code 333 ) and dr .stephen oregan of nswccd ( code 7220 ) . m. ke , z. liu , p. pang , c. qiu , d. zhao , s. peng , j. shi , and w. wen .experimental demonstration of directional acoustic radiation based on two - dimensional phononic crystal band edge states ., 90(8):083509 , 2007 .t. p. martin , c. j. naify , e. a. skerritt , c. n. layman , m. nicholas , d. c. calvo , g. j. orris , d. torrent , and j. sanchez - dehesa .transparent gradient - index lens for underwater sound based on phase advance ., 4(3 ) , sep 2015 .b. morvan , a. tinel , j. o. vasseur , r. sainidou , p. rembert , a .- c .hladky - hennion , n. swinteck , and p. a. deymier . ultra - directional source of longitudinal acoustic waves based on a two - dimensional solid / solid phononic crystal ., 116(21):214901 , dec 2014 .c. a. rohde , t. p. martin , m. d. guild , c. n. layman , c. j. naify , m. nicholas , a. l. thangawng , d. c. calvo , and g. j. orris .experimental demonstration of underwater acoustic scattering cancellation ., 5:13175 , aug 2015 .v. romero - garcia , c. lagarrigue , j .-groby , o. richoux , and v. tournat .tunable acoustic waveguides in periodic arrays made of rigid square - rod scatterers : theory and experimental realization ., 46:305108 , 2013 .j. snchez - prez , d. caballero , r. mrtinez - sala , c. rubio , j. snchez - dehesa , f. meseguer , j. llinares , and f. glvez .sound attenuation by a two - dimensional array of rigid cylinders ., 80(24):53255328 , jun 1998 .a. s. titovich and a. n. norris .acoustic scattering from an infinitely long cylindrical shell with an internal mass attached by multiple axisymmetrically distributed stiffeners ., 228:134153 , march 2015 .j. o. vasseur , b. morvan , a. tinel , n. swinteck , a .- c .hladky - hennion , and p. a. deymier .experimental evidence of zero - angle refraction and acoustic wave - phase control in a two - dimensional solid / solid phononic crystal . , 86(13 ) , oct 2012 .
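as a numerical footnote to the appendix, the disk-to-square schwarz-christoffel map can also be handled without elliptic integrals by working with its derivative, which for a square with prevertices at +-1 and +-i is proportional to (1 - z^4)^(-1/2). the sketch below evaluates |dz_1/dz|^2 relative to its value at the centre of the disk, the factor that, up to the normalization constant of the map, sets the transformed bulk modulus in section 2; the normalization is deliberately left out, so only ratios are meaningful here.

```python
import numpy as np

def modulus_scaling(z):
    """|dz1/dz|**2 relative to its value at the centre of the disk, for the
    Schwarz-Christoffel map of the unit disk onto a square, whose derivative is
    proportional to (1 - z**4)**(-1/2).  Under the conformal-TA relations of
    section 2 this ratio equals K_1(z) / K_1(0)."""
    return 1.0 / np.abs(1.0 - z**4)

if __name__ == "__main__":
    r = np.linspace(0.0, 0.99, 5)
    side_mid = r * np.exp(1j * np.pi / 4)   # radius mapping to the middle of a side
    corner = r * np.exp(1j * 0.0)           # radius mapping to a corner (prevertex z = 1)
    print(modulus_scaling(side_mid))        # decreases towards ~1/2 at the boundary
    print(modulus_scaling(corner))          # grows without bound as z -> 1
```

the printed values reproduce the trend noted in section 2: the modulus falls to about half its central value towards the middle of a side and grows without bound towards a corner.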
the use of cylindrical elastic shells as elements in acoustic metamaterial devices is demonstrated through simulations and underwater measurements of a cylindrical - to - plane wave lens . transformation acoustics ( ta ) of a circular region to a square dictates that the effective density in the lens remain constant and equal to that of water . piecewise approximation to the desired effective compressibility is achieved using a square array with elements based on the elastic shell metamaterial concept developed in . the sizes of the elements are chosen based on the availability of shells , minimizing fabrication difficulties . the tested device is neutrally buoyant , comprising 48 elements of nine different types of commercial shells made from aluminum , brass , copper , and polymers . simulations indicate a broadband range in which the device acts as a cylindrical - to - plane wave lens . the experimental findings confirm the broadband quadrupolar response from approximately 20 to 40 khz , with positive gain of the radiation pattern in the four plane wave directions .
for the development of the long - term economic strategy and for realization of appropriate investment policy one needs the theoretical model of economic growth .all developed countries pay a considerable attention to researching of such theoretical models , as well as to developing of corresponding instrumental means that are important for calculation of the concrete prognoses and programs [ 1 ] .the practical experience , accumulated by many countries , demonstrates that the most effective tool for the development of strategic directions of an economic policy on a long - run period are the special economic - mathematical models of small dimension ( macromodels ) .such models are elaborated on the base of the theory of economic growth [ 2 ] . in traditional macromodels the principal attentionis payed to the estimation of future dynamics of the investments that determine trajectories of the economic growth .however , economic growth depends not only on scales of investment resources , enclosed in economics : these trajectories are also determined by a number of the so - called quality factors [ 3 ] .moreover , the final economic growth results depend basically on these factors . the orientation of economy to the extensive growth by escalating of wide area investments only can not ensure an achievement of the useful final results without paying attention to the quality factors .the history of development of many countries , including the former ussr , shows the possibility of economical development by the principle of production for the sake of production .economical system absorbs huge investment resources and augments volumetric parameters .nevertheless this type of investment policy is not able to raise essentially standards of living .the limited prospects of the economic development on the base of only extensive factors are demonstrated in the solow model [ 4 ] .the estimation of influence of technical progress is made on the basis of the solow model with technical progress [ 2 ] .this model accounts for the contribution of technical progress in the simplest way .it is based on the rather relative concept of autonomous progress advance .the intensity of the autonomous progress effect on production growth is completely determined by the time factor .a quantitative estimations of such effect are received on the basis of the production functions method .the parameters for this method are calculated by means of econometric processing of dynamic rows that define change of production volumes .a row of successive numbering of time periods ( years , quarters or other periods ) is used as a dynamic row , that corresponds to the change of technical progress . 
in essence another approach to define the quality factors contribution to the growth of production volumes has been realized in growth model with effectiveness .this approach does not use insecure parameters of rather relative econometric models .it is based on a direct estimation of the change of an economic system material capabilities that are indispensable for the realization of social purposes .the growth model with effectiveness is most adequate for simulation of economic growth in the present economical situation for the republic of belarus .the main purpose of this work is to formulate the optimal planning problem for the growth model with effectiveness and solution of this problem : such solution is especially important for researches of economic growth problems , because it answers the main question that appears when choosing the investment policy .the essence of this question consists in choosing between maintenance of current demand ( consumption ) and maintenance of the future demand ( capital investment ) .the solution of the optimal planning problem allows to point such investment policy , at which the economic system works at best . the formulation of the depends on the purposes that the economic system has . in this work onetakes the most realistic version of such purpose - maximization of the welfare integral which is consumption per man during the modeled period of time .the article is organized as follows : section ii deals with the concept of production effectiveness . growth model with effectiveness is described in section iii . in sectioniv one considers optimal planning problem formulation .maximum principle is applied in section v. solution of the optmal planning problem is analyzed in section vi .policy options are considered in section vii .main achievements of this work are briefly concluded in section viii .the growth model with effectiveness uses modern approach to predict the economic growth trajectory and define the quality factors contribution in increasing of production volumes .this approach does not rest on accident - sensitive parameters of rather conditional econometric models , but it is grounded on the direct estimation of those changes of material capabilities of an economical system that are necessary for realization of the chosen purposes . on the basis of such estimation it is expedient to draw a conclusion about the advance , reached in economics : one admits the existence of `` advance '' only in the case , when the capabilities for implementation of the social and economic purposes are extended . at the same time , from the standpoint of the target approach , opening capabilities scales shows the degree of the obtained advances . 
within the framework of such an approach there is a more successful name for the identification of the advance , reached in economics .this name is `` increase of production effectiveness '' ( or , accordingly , its reduction , if the capabilities for the implementation of the social and economic purposes are reduced ) .this term is used in this work to reflect all quality factors influence on economic growth .thus , the primary task of this chapter is to design a criteria index that should characterize change of production effectiveness on the macrolevel .such an index should be used as one of the most important variables of the dynamic macromodel , which reflects the relationship of this macromodel with other macroeconomic indexes that describe the intensity of production and use of resources , as far as accumulation and non - productive consumption . within the framework of the dynamical macromodel of economical growth, the submodel of the production effectiveness has been developed from the standpoint of the target approach [ 5 ] .the economical system under consideration is supposed to be closed in the sense that the total amount of the needs is fulfiled at the expense of the production only .the system does not move its production into other countries in the debt or gratuitously ( that does not eliminate barter with an environment on the equivalent basis for the coordination of commodity pattern of production with consumption pattern ) . to define an effectiveness criterion of production it is necessary to reveal the certain economical form of the operational outcome of the economical system and resources , used for its achievement . for this purposeone needs to define not only spatial - organizational , but also temporal boundaries of this system in order to reveal and to agree final output indexes with the initial input factors involved in the process of production .if we abstract from the natural and external economical factors and consider the effectiveness of an economical system , without taking into account its temporal boundaries , then the labour force is the only used resource , and the amount of created material benefits and services intended only for the non - productive consumption is the outcome of the system operation . 
in this caseit would be possible to define the effectiveness index on the basis of the simple comparison of the total final consumption and used labour force .it is always necessary to associate an estimation of effectiveness with the particular time frame , therefore time also should play a role of the factor that limits the frameworks of the economical system .the inputs and outputs of the system reflect not the connection with the environment .the evolution of the system depends on the past and defines the future evolution of the system .the relations of the resources reproduction costs to their volumes are the relevant characteristics of the reproduction process .for example the consumption per man reflects the level of workers material benefits .any change of this index substantially determines the dynamics of all indexes of the population welfare .the relation of gross investment to a volume of accumulated productive capital predetermines the rate of capital growth in a decisive measure .the estimations and are connected with the indexes of labour productivity and capital per man as follows ratio is a consequence of the balance identity , where - output , - consumption , which originates from the assumption about closure of the considered economical system ( see fig.1 ) . to derive the equation ( 2 ) from the balance identity, one has to divide it by the labour force volume and take into consideration that relative resource estimations and can be accepted as objective functions that describe the implementation degree of two main purposes of the reproductive process .these purposes form one general purpose of an economical system . herewe denote it as `` integral purpose '' .it implies the maximization of satisfaction of current and future needs of society .the index of effectiveness should serve as the objective function that permits to estimate quantitatively the integral purpose implementation . for the concrete definition of the integral purpose and the integral index of effectivenessit is necessary to distribute the priorities between two introduced primary purposes .this distribution leads to their coordination and resolving of the contradiction between the primary purposes .such priorities are reflected quantitatively at distribution of the gross internal product to parts intended for reproduction of two kinds of manufacturing resources .the purposes of the society define the proportions of such distribution and concretize the effectiveness index formula that should serve as the integral object function of the reproduction process .below all variables are counted for the given year . herewe assume that during t - th year the usage of the manpower quantity and capital of the volume makes the gross internal product of the volume .the corresponding values of the relative indexes of labour productivity and capital per man are peer and , respectively .the indexes and are the most important quantitative characteristics of the process of reproduction as generalized technology on the macrolevel .though their values do nt determine base - line values of the object functions and uniquely , but they limit the area of their possible values . 
in accordance with the general limiting conditionthe above indicated main balance identity reads the particular values of estimations and can be derived from the area , restricted by the equality ( 4 ) .this area depends on the distribution of the made gross internal product , which arrests the norm of accumulation .the last is defined as the fraction of the gross accumulation in the total amount of the gross internal product at the given norm of accumulation and the certain labour productivity level as well as capital per man , the values of object functions are determined uniquely according to the formulas their values are the starting point for the analysis of the effectiveness dynamics .here we consider the values of variables for the next year .the capital per man increases to the value of .the labour productivity for the given capital per man reaches the value .the dynamics of the reproduction process effectiveness characterizes the change of the implementation capabilities of two main purposes that have to be reflected in the change of area of acceptable values of the object functions and . in order to estimate the degree of the indicated change quantitatively one needs to compare labour productivity in current year with the value , which describes that minimal production volume per manthis allows to keep the values of the object functions and at the level of the basic year . the last corresponds to the equal effectiveness of production in current and basic years .it means that the values of both object functions can be saved at a basic level and both can not be increased at once .note that actual values of estimations and in one year can differ from and , but one of them will be more basic , and another will be less . assume the labour productivity level in one year surpasses the bound therefore in this year there appears the possibility for simultaneous increase of values of two object functions in contrast to the year .as mentioned before these object functions describe the level of implementation of the main purposes of the reproduction process .it gives the ground to suppose that in one year the value of the integral object function is augmented and , therefore , the level of efficiency is raised .moreover , the value of a difference allows to judge that there appeared capability to increase the values of object functions . 
therefore this difference can serve as the characteristic of the effectiveness increase degree .the actual increment of labour productivity can be decomposed in two parts the value represents the increment of the labour productivity , at which the level of effectiveness remains invariable .it can also be interpreted as the increment of theproductivity reached at the expense of the extensive increase of the capital per man ( at the basic level of the effectiveness ) .another part of the increment of labour productivity quantitatively characterizes the additional capabilities of implementation of the main purposes of production appeared in one year .we start from the reason that the growth of the effectiveness is the only source of increase of such capabilities .therefore it is possible to consider the value as the increment of the productivity reached at the expense of the increase of efficiency .the ratio of the increment to the base - line value of the labour productivity represents the rate of the productivity increment at the expense of the effectiveness increase .this ratio can be identified with the rate of the increment of the effectiveness of production . in equation ( 10 )we replace with and with and simplify it using the balance identity ( 2 ) .the rate of the increment of effectiveness depends on the pure increments of the labour productivity and capital per man the equation ( 11 ) can be rewritten using the absolute increases of the corresponding indexes to the rates of their change . for this purpose onesubstitutes the intensity of reproduction of the capital from the formula ( 6 ) into equation ( 11 ) : here we introduce the notations for rates of increment of the corresponding indexes and .then the estimation of dynamics of a production effectiveness takes the form thus , the conducted analysis results in a conclusion that the reference point for the society that seeks to achieve the social and economic purposes , ca nt serve the increase of labour productivity , which traditionally was considered to be the main criteria index in our economics .the obtained two equivalent dynamical formulas of the index of a production effectiveness ( 11 ) and ( 12 ) demonstrate that for the estimation of outcomes of the economical development it is necessary to correct the growth of labour productivity allowing for the change of the capital per man .the bigger part of the made gross internal product is routed to the reproduction of the productive capital , the more is the deviation of the dynamics of the criteria index of effectiveness from dynamics of labour productivity .the last can serve as the main reference point of control only in that case , when the capital per man remains invariable .in this work we consider the usage of the effectiveness index as the preferred reference point of control .the following six main macroeconomic indexes have been selected as variables of the growth model with effectiveness - capital per man , - output per man , - consumption per man , - saving per capital , - sum of amortization rate and population rate , - the rate of economics effectiveness . for simplification of model and its analysisthe following preconditions are adopted : the number of workers in economics changes with constant rate of increment , ( so that the dynamics of employment can be recorded with the help of the function ) .the amortization rate is also invariable . 
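before moving to the differential form of the model , it may help to make the effectiveness bookkeeping derived above concrete . the python sketch below is a reconstruction from the verbal definitions rather than a transcription of ( 11 ) - ( 12 ) : it computes the productivity level that would keep both object functions ( consumption per man and gross investment per unit of capital ) at their base - year values , and attributes the excess of actual productivity over that level to an effectiveness gain . all variable names and numbers are illustrative assumptions .

```python
def effectiveness_increment(y0, k0, sigma0, y1, k1):
    """Rate of effectiveness increment under the target approach (reconstruction).

    y0, y1  : labour productivity (output per man) in the base and current year
    k0, k1  : capital per man in the base and current year
    sigma0  : norm of accumulation (share of gross investment in output), base year

    the productivity level y_keep that keeps consumption per man, (1 - sigma) * y,
    and investment per unit of capital, sigma * y / k, at their base-year values
    satisfies
        (1 - sigma1) * y_keep = (1 - sigma0) * y0
        sigma1 * y_keep / k1  = sigma0 * y0 / k0
    adding the two conditions gives y_keep = y0 * (1 + sigma0 * (k1 - k0) / k0),
    and the effectiveness increment rate is (y1 - y_keep) / y0.
    """
    y_keep = y0 * (1.0 + sigma0 * (k1 - k0) / k0)
    return (y1 - y_keep) / y0

# illustrative numbers only (not taken from the paper)
y0, k0, sigma0 = 10.0, 40.0, 0.25
y1, k1 = 10.8, 42.0
lam = effectiveness_increment(y0, k0, sigma0, y1, k1)
print("productivity growth     :", (y1 - y0) / y0)   # 0.08
print("capital-per-man growth  :", (k1 - k0) / k0)    # 0.05
print("effectiveness increment :", lam)               # 0.08 - 0.25 * 0.05 = 0.0675
```

the example shows the point made in the text : with a positive accumulation norm , part of the observed productivity growth is extensive ( due to the higher capital per man ) , so the effectiveness increment is smaller than the raw productivity growth .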
besides the differential form of notationthis form is more convenient for solution and analysis and widely used in the literature on economic - mathematical modeling [ 5 ] . for conversion to the differential form one supposes , that the values of the considered economical indexes are continuously differentiable functions of time .the increments of indexes per unit of time , selected as a step for the analysis , are substituted by derivative from functions that describe their dynamics .the main equation of the model is based on the definition of the effectiveness index .the formula ( 11 ) , derived earlier for the estimation of dynamics of effectiveness and introduced in the incremental form , can be converted to the following equivalent equation in the differential form the differential equation ( 13 ) is included in the developed macroeconomic model and plays the role of production function , which links indexes of capital per man and labour productivity .besides , the model includes the main balance identity ( 2 ) the ratio ( 6 ) can be rewritten in differential form as follows the differential equation for comes from the equation of capital dinamics to pass from the increment of capital to the index of the increment of capital per man , it is necessary to differentiate the formula for capital per man here we introduce the population rate .the last assumption allows to rewrite equation ( 17 ) in the following form : now in ( 18 ) instead of we put equivalent expression from ( 3 ) , and for the sum of two constants we enter new identification .then it is possible to enter into the model the equation , that reflects the correlation between capital per man and saving per capital . equations ( 13 ) - ( 15 ) and ( 19 ) form the set of four equations , which describes the correlations between six abovelisted main macroeconomic indexes .it is nessesary to emphasize that this model is represented by six variables and four equations .therefore , the obtained set of equations ( 13)-(15),(19 ) is incomplete . in order to close the set of equationsone introduces control variables that describe investment policy .it is possible to do by assuming and is a control variable. then one can formulate optimal planning problem .first , it is necessary to construct a target functional .the task of the central planning establishment is to select a feasible trajectory that is the optimum for achievement of some economic target .the economic target of central planning organ should be based on the standards of living , estimated by the consumption level . in particular, it is presumed that the central planning organ has an utility function , which determines utility at any moment of time as a function of consumption per man ] . during the indicated time period from the welfare , corresponding to the pathway of consumption per man , is determined by integrating of all instantaneous utilities over the whole interval . the time planning horizon can be finite or infinite . 
in casethis time is finite , it is necessary to set the minimally acceptable value of capital per man in final year to ensure the possibility of consumption outside the given horizon of time .here we try to avoid difficulties , connected with the definition of the minimal value of capital per man in a final moment of time .consider , that is indefinite , so the control trajectory is selected for all times in the future .however , in this case the welfare integral can miss .the convergence of an integral is guaranteed , if the following conditions are satisfied : the initial value of capital per man is less than the maximal accessible level and the norm of discounting is positive . in this case and so the integral of welfare is bounded above .we choose and as state variables . we express consumption per man through control and state variables from ( 14 ) .then we receive optimal planning problem for model with effectiveness this section we derive the equation for the saving per capital trajectory . when solving optimal planning problems with the help of the maximum principle , for each coordinate of state variables vector a costate variable is used .the hamiltonian function is , where stands for the integrand of a target functional , for the vector of motion equations right parts , for control .we find functions , , that satisfy the following conditions .% ( 25 ) \end{aligned}\ ] ] where , are costate variables . finally , by using of eqs .( 22 ) - ( 24 ) we can write the problem to be solved in the following form the system of five equations with five unknown functions is then obtained .four equations of this system are differential , one is algebraic . solution of this system of equations is equivalent to the solution of the problem ( 21 ) .the set of equations ( 26 ) - ( 30 ) has the solution for any kind of the utility function .however in this work we perform the analysis of the elementary case . in this case( 29 ) and ( 30 ) for costate variables and do not depend on state ones and . in order to find an optimal savings trajectory we have to solve only three equations ( 26 ) , ( 29 ) , and ( 30 ) .after that one can find state variables and . as a first step we obtain from eq .( 26 ) an explicit formula for written as a function of costate variables and : where is an exponential function .note , that the solution of eq.(30 ) for can be written in the analytical form where .having differentiated the left and right parts of eq .( 31 ) , one arrives at by rewriting eq .( 31 ) in the following form and substituting it in the right part of eq .( 33 ) , we find then we replace and in the right part of eq .( 29 ) and ( 30 ) respectively with the intermediate result } } { { y_p } } + \frac{1}{2 } \frac { { ( - \chi - y_p \mu ) } } { { y_p } } ( \lambda - 2\omega ) .( 36)\ ] ] finally , we substitute the ratio of costate variables found from eq .( 34 ) in the equation ( 36 ) and obtain the following differential equation for saving per capital found equation together with eq .( 32 ) is equivalent to the optimal planning problem ( 21 ) .solution of this equation gives an optimal savings trajectory .the differential linear equation ( 38 ) is riccati equation which can be solved by standard methods .. 
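before turning to the numerical results , a minimal sketch of the numerical treatment is given below : it integrates a riccati - type equation of the same general form as ( 38 ) , ds / dt = a s^2 + b s + c , with a runge - kutta scheme ( scipy's rk45 ) . the constant coefficients , initial value and horizon are placeholder assumptions chosen only for illustration ; the exact coefficients of ( 38 ) , which involve the model parameters and the costate variables , are not reproduced here .

```python
from scipy.integrate import solve_ivp

# riccati right-hand side ds/dt = a*s**2 + b*s + c; the coefficients below are
# illustrative placeholders, not the values appearing in eq. (38) of the paper
A, B, C = -0.05, -0.10, 0.04

def riccati_rhs(t, s):
    return A * s ** 2 + B * s + C

s0 = [0.25]                                  # assumed initial saving per capital
sol = solve_ivp(riccati_rhs, (0.0, 20.0), s0, method="RK45",
                dense_output=True, max_step=0.1)

for year in (0, 5, 10, 15, 20):
    print(f"t = {year:2d}  s(t) = {sol.sol(year)[0]:.4f}")
```

solving this equation for different coefficient sets is a convenient way to check the qualitative behaviour of the optimal saving trajectory reported in the next section .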
for the parameters values , ( corresponding to decreasing significance of consumption in e times for 5 years ) , , and the initial value , the solution of the riccati equation ( 38 ) has been found numerically by the runge - kutta method .the obtained solution is shown in figure 2 .the equation ( 38 ) enables analytical consideration in the asymptotic limit when , i.e. when and . in this casewe can neglect those terms in eq .( 38 ) proportional to with a result where .note , that this equation is known as equation of interacting masses .having integrated it we obtain an asymptotic optimal trajectory where the constant is defined by the values and the value of taken as `` initial '' value for the asymptotic regime of evolution as it follows from obtained solution an asymptotic trajectory will be slightly increased if , being restricted by the value of . an asymptotic behavior of state variable is obtained from eq . ( 27 ) by taking into account eq .( 40 ) where is a rate - fixing constant .there are two main applications of this analysis : recommendations to the government for investment policy development and prediction of a long - run period macroeconomic system development .to define recommended investment volume in the current year , we need to obtain the stock of capital statistical data for this year and to find the optimal saving per capital value using the optimal planning problem solution. then we can evaluate the investment volume as a product of and .the next year we change the optimal planning problem parameters to improve the accuracy of calculation .the procedure of recommended investment volume evaluation remains invariable .thus this method allows the optimization of the investment policy during any period of time .variables from economic growth models play important role in models of other important fields of economic life too .a financial programming method combines these models .the optimal planning problem for the model with effectiveness can serve as a part of financial programming systems [ 7 ] .the solution of the optimal planning problem may be useful for evaluating financial programming system parameters .to predict long - run period macroeconomic system development we put optimal saving per capital trajectory into the growth model with effectiveness and then calculate all macroeconomic variables from this model .table 1 shows predictions based on the republic of belarus economic parameters .optimal saving per capital trajectory for this model is the result of the optimal planning problem solution from this work . table 1 shows that this variant of macroeconomic strategy allows the extension of productivity potential by increasing the capital per man by 72.4 % and the output per man by 55.5 % .the total growth of consumption per man is 51.6 % .the main result of this article is the optimal investment trajectory for belarus for the period of 2000 - 2020 .the number of additional results may be of use for the investment policy development in other countries and other periods .the nonlinear system of equations allows to analyze dependencies between trajectories of main macroeconomic variables .the asymptotic solution shows long - run perspectives of macroeconomic system .equation from this work may be used in financial programming systems .a possible improvment of the present model is connected with the splitting of capital in two parts - private industrial capital and public overhead capital . 
besides , a more realistic utility function can be chosen for the model . the results of these improvements will be published elsewhere . the useful advice of professor komkov is gratefully acknowledged .
the optimal planning trajectory is analysed on the basis of the growth model with effectiveness . the optimal saving per capital value has to be rather high initially and then decrease smoothly in the following years .
mobile devices can establish opportunistic networks a flavour of delay - tolerant networks ( dtns ) among each other in a self - organising manner and facilitate communication without the need for network infrastructure .the capabilities of today s smart mobile devices yield substantial computing and storage resources and means for creating , viewing , archiving , manipulating , and sharing content .in addition , content downloaded from internet is recommended to be cached at the handset .this yields a huge reserve of data stored in mobile devices , which users may be willing to share .while this is typically done using internet - based services , opportunistic networks enable content sharing among users in close proximity of each other . in this paper , we focus on a human - centric dtn , where nodes search for information stored in some of the nodes .the nodes lack a global view of the network , i.e. , there is no service that could index the stored content and assist a searching node in finding content ( or indicate that the sought information does not exist ) .this means that operations are decentralized and , as mobile nodes have resource constraints ( e.g. , energy ) , we need to control the spread of the messages when searching .one such control mechanism is imposing the maximum number of hops a message can travel .we call a search scheme that limits the message s path to maximum hops as -hop search ._ hop - limited search _ is of our interest as it is a lightweight scheme that does not require intensive information collection about the nodes or the content items .however , determining the optimal is not straightforward .although a large tends to increase replication , it also increases the chance of finding the desirable target ( content or searching node ) , thereby decreasing replication .while many works in the literature apply hop limitations , mostly two - hop such as , to the designed routing protocols , the motivation behind setting a particular hop value is not clear . in , we analyze the effect of hop limit on the routing performance by modelling the optimal hop limited routing as _ all hops optimal path _ problem .however , opportunistic search necessitates considering the availability of the sought content as well as routing of the response to the searching node . in our previous work , we analytically modelled the search utility and derived the optimal hop count for a linear network , e.g. , search flows through a single path and response messages follow the same path . in this paper , we provide an elaborate analysis of hop - limited search in a mobile opportunistic network considering the search success , completion time , and the cost .search is a two - phase process .we refer to the first phase in which query is routed towards the content providers as _ forward path _ and the second phase in which response is routed towards the searching node as _return path_. 
first , we present an analysis on the forward path via an analytical model .we show the interplay between tolerated waiting time ( how long the searching node can wait for the forward path ) , content availability , and the hop count providing the maximum search success ratio for a specific setting .next , we verify our analysis of the forward path and elaborate also on the return path via simulations .our results suggest the following : * generally speaking , search performance increases with increasing especially for scarcely available content .however , the highest improvement is often achieved at the second hop .after that , the improvements become smaller and practically diminish when . *we observe that return path requires on average longer hops / time compared to the forward path , which may imply that optimal settings for the forward path does not yield the optimal performance on the return path .* we show that the search cost first increases and after several hops ( ) it tends to stabilise .this is due to the small diameter of the network ; even if is large letting the nodes replicate a message to other nodes , each node gets informed about the search status quickly so that replication of obsolete messages is stopped .the rest of the paper is organised as follows .section [ sec : sysmodel ] introduces the considered system model followed by section [ sec : hoplimited ] introducing the hop - limited search .section [ sec : numericalanalysis ] numerically analyzes the forward path , while section [ sec : sims ] focuses on the whole search process .section [ sec : related ] overviews the related work and highlights the points that distinguish our work from the others .finally , section [ sec : conclusions ] concludes the paper .we consider a mobile opportunistic network of nodes as in fig .[ fig : model ] .nodes move according to a mobility model which results in i.i.d .meeting rates between any two nodes .we use the following terminology : * searching node ( ) * is a node that initiates the search for a content item . we assume that content items are equally likely to be sought , i.e. , uniform _content popularity_. in reality , content items have diverse popularities ; e.g. , youtube video popularity follows a zipf distribution .we choose uniform distribution to avoid a bias toward the `` easy '' searches .* tagged node * is a node that holds a copy of the sought content .only a fraction of the nodes are tagged , where is referred to as the _ content availability_. although it is expected that search dynamics such as caching changes , we assume that it does not change over time .the sets of all and tagged nodes are denoted by and , and their sizes and , respectively . the number of tagged nodes is .every node is equally likely to be a tagged node .* forward path and return path : * in the first step of the search , a query is disseminated in the network to find a tagged node . in casethe first step is completed , a response generated by a content provider is routed back towards in the second step .we refer to the former as the _ forward _ or _ query path _ and to the latter as the _ return _ or _ response path_. * tolerated waiting time ( ) * is the maximum duration to find the content ( excluding the return path ) . 
note that total tolerated waiting time is if we assume the same delay restriction for the return path .to describe the basic operation of hop - limited search , assume that creates a query that includes information about its search at time .when encounters another node that does not have this query , replicates the query to it to reach a tagged node faster .a node acquiring a copy of the message becomes a _discovered node_. the discovered node also starts to search and replicate the query to _undiscovered nodes_. each message contains a header , referred to as _ hop count _ , representing the number of nodes that forwarded this message so far .messages at are 0-hop messages ; those received directly from are -hop messages .we refer to a search scheme as -hop search if a search message can travel at most hops .hence , when a node has -hop message , it does not forward it further .we also call a node with a -hop message as the -hop node for that particular message . in -hop search , when an -hop and -hop node meet ( without loss of generality we assume ) , the state of -hop node is updated to -hop if .the forward path completes when a copy of the query reaches a tagged node .let us first consider a search scheme that does not put any hop limitation but instead limits the total number of replications on the forward path and the return path to and , respectively . for a content item with availability , we can calculate the forward success ratio of this search scheme as : similarly , we calculate the search success ratio , i.e. , both steps are completed , as : we can expand the above formulation which leads to : let , i.e. , the probability that a response reaches . after replacing into ( [ eq : ps ] ) ,we apply some manipulations by the help of binomial theorem : that is then , we find the search success as : please refer to appendix [ appendix ] for the details of the above derivation .[ fig : search_success_m ] plots the success ratio with increasing fraction of nodes that receive the message .we plot the success of both the forward path and the total search under various content availability values .the results are for equal number of replications for the forward and return path , i.e. , .the figure shows that to ensure a desirable level of success , search has to cover certain fraction of nodes , which depends on the content availability . in hop - limited search, there is no explicit restriction on number of replications , but rather it is implicitly set by and .this result brings us to the question of how search coverage in the number of nodes changes with and .let be the set of discovered nodes excluding at time and its size .et al . _ define for a static graph as node neighborhood within hops .similarly , we define as the number of nodes that can be reached from a source node less than or equal to hops under time limitation . for .,scaledwidth=39.0%] let denote the _ forward path success ratio _ defined as the probability that a query reaches one of the tagged nodes in a given time period under -hop limitation . for a search that seeks a content item with availability , we approximate as follows : \nonumber\\ & \approx 1-(1-\alpha)^{e[{n}_{h}(t ) ] } \label{eq : pcompletion}.\end{aligned}\ ] ] note that ( [ eq : pcompletion ] ) provides an upper bound for and will only be used to understand the effect of and . in numerical evaluations , we relax all the simplifications ( e.g. , i.i.d meeting rates ) and experiment using real mobility traces. 
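a minimal python sketch makes the bound in ( [ eq : pcompletion ] ) easy to explore : it tabulates the forward - path success probability 1 - ( 1 - alpha )^n against the number of nodes n that hold the query , for a few availability levels . the availability values used below are illustrative assumptions , not the exact low / medium / high settings of the evaluation .

```python
def forward_success(alpha, n_reached):
    # probability that at least one of n_reached query holders is a tagged node
    return 1.0 - (1.0 - alpha) ** n_reached

# illustrative availability levels (assumed, not the paper's exact settings)
for label, alpha in (("low", 0.02), ("medium", 0.10), ("high", 0.30)):
    row = [round(forward_success(alpha, n), 3) for n in (1, 2, 5, 10, 25, 50, 98)]
    print(f"{label:6s} alpha={alpha:.2f}: {row}")
```

the table produced by this sketch shows the effect discussed above : for scarce content the success probability keeps improving as the query covers more nodes , whereas for highly available content a handful of discovered nodes already suffices .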
given ( [ eq : pcompletion ] ) , our problem reduces to discovering how ] .in addition to the overlaps of opportunistic contacts , the hop restriction may result in lower ] for general time - evolving networks is not straightforward .therefore , we derive ] such transitions from this state where indicator function } ] we call these three events _ type - tagged _ , _ type-1 _ , and _ type-0 _ events ; the corresponding transition rates are denoted by , , and , respectively .if the type-1 event is a meeting with an -hop node , we denote the respective rate as .similarly , if a type-0 event is due to an -meeting , the rate is . for state the transitions leading to a state change are : and the corresponding transition rates are : where is the pairwise meeting rate . since solving the given markov model may not be practical due to the state space explosion for large and , we approximate and show its high accuracy in section [ sec : numericalanalysis ] by comparing it with the results of markov model .let denote the remaining time to search completion under -hop search and when nodes hold the search query . under -hop search, only nodes are actively searching and can forward the query to their encounters .assume that are identical for all .then , the number of searching nodes is : we calculate as : we expand similarly and substitute in ( [ eq : tmh ] ) . after a certain number of nodesare searching , the remaining time to search completion converges to zero , i.e. , .denote .then , we find : solving for gives an approximation for : this section , we evaluate the performance of hop - limited search ( referred to as hop ) while varying ( i ) content availability , ( ii ) tolerated waiting time , ( iii ) network density , and ( iv ) mobility scenarios .we use for _ high _ , _ medium _ , and _ low _ content availability , respectively . in our analysis , we use real traces of both humans and vehicles . for the former, we use the infocom06 dataset which represents the traces of human contacts during infocom 2006 conference . for the latter ,we use the cabspotting dataset that stores the gps records of the cabs in san francisco . to gain insights about more general network settings ,we also analyze a synthetic mobility model that reflects realistic movement patterns in an urban scenario .below , we overview the basic properties of each trace : * infocom06 : * this data set records opportunistic bluetooth contacts of 78 conference participants who were carrying imotes and 20 static imotes for the duration of the conference , i.e. , approximately four days . in our analysis, we treated all devices as identical nodes which host content items and initiates search queries .this trace represents a small network in which people move in a closed region .* cabspotting : * this data set records the latitude and longitude information of a cab as well as its occupancy state and time stamp .the trace consists of updates of the 536 cabs moving in a large city area for a duration of 30 days . for our analysis, we focused only on the first three days and a small region of approximately 10km km area .496 cabs appear in this region during the specific time period .the cab information is not updated at regular intervals .hence , we interpolated the gps data so as to have an update at every 10 s for each cab. 
next , we set transmission range to 40 m to generate contacts among cabs .this trace represents an urban mobility scenario .* helsinki city scenario ( hcs ) : * this setting represents an opportunistic human network in which the walking speed is uniformly distributed in [ 0.5,1.5]m / s .the nodes move in a closed area of 4.5km.4 km .hcs uses the downtown helsinki map populated with _points - of - interests _ ( e.g. , shops ) , between which pedestrians move along shortest path .we derive the neighborhood size as an average of 500 samples where each sample represents an independent observation of the network starting at some random time , from an arbitrary node and spanning an observation window equal to the tolerated waiting time .next , we calculate using ( [ eq : pcompletion ] ) .we use r for these analysis . in the following ,we mostly focus on the more challenging cases such as short or low , and report the representative results due to the space limitations .[ fig : neighborhood ] illustrates the change in represented as fraction of the network size for infocom06 for various tolerated waiting time and content availability . for each , we plot and corresponding . from fig .[ fig : infocom06nodeunionbyhop_98 ] , we can see the significant growth of at the second hop for all settings .while further hops introduce some improvements , we observe the existence of a _ saturation point _ .after this point , the change in is marginal either because all nodes are already covered in the neighborhood or higher can not help anymore without increasing .we present the resulting for low content availability in fig.[fig : infocom06_ptagged_98_t05 ] . as expected, the second hop provides the highest performance gain compared to the previous hop ( ) as a reflection of highest .as infocom06 has good connectivity , almost all queries reach one of the tagged nodes after .we present the effect of content availability for short in fig .[ fig : ptagged_98_t600 ] . regarding the effect of content availability ,_ we observe that search for a rare content item , i.e. , low , benefits more from increasing compared to highly available content items_. when many nodes are tagged , meets one of these after some time .however , if the probability of meeting a tagged node is fairly low , using additional hops exploiting the mobility of the encountered nodes and spreading the query further is a better way of searching .a smart search algorithm can keep track of the content availability via message exchanges during encounters and can adjust the hop count depending on the observed availability of the content . for low and medium content availability , _the highest benefit is obtained at the second hop_. for a content item which is stored by a significant fraction of nodes , even a single hop search may retrieve the sought content .[ fig : density_vs_neighborhood ] illustrates the growth of under various network size and .we use our synthetic model hcs by setting nodes to observe the effect of network density on . for this particularsetting , the clustering of lines based on shows that time restriction is more dominant factor in determining compared to . as expected, is higher for higher .however , all settings exhibit the same growth trend with increasing . fig . [fig : allbeta ] illustrates the change in with increasing for which represent any - benefit and fair - benefit hops , respectively . 
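the hop - and time - limited neighbourhood growth analysed above can also be estimated by direct simulation under the homogeneous - meeting assumption of section [ sec : sysmodel ] . the sketch below draws i.i.d . exponential pairwise meetings , spreads the query subject to an h - hop limit , and reports the estimated forward success ratio and discovered - set size . it omits the hop - count update between two already discovered nodes , and the meeting rate , network size , tolerated waiting time and availability are made - up values , so it illustrates the process rather than reproducing the traces used above .

```python
import random

def simulate_forward(n_nodes=98, beta=1.0 / 3000.0, h_limit=3, t_max=600.0,
                     alpha=0.05, runs=500, seed=1):
    # estimate forward-path success and mean discovered-set size for h-hop search
    # with i.i.d. exponential pairwise meeting times of rate beta (all values assumed)
    rng = random.Random(seed)
    hits, discovered_total = 0, 0
    n_tagged = max(1, round(alpha * (n_nodes - 1)))
    pair_count = n_nodes * (n_nodes - 1) / 2.0
    for _ in range(runs):
        tagged = set(rng.sample(range(1, n_nodes), n_tagged))  # node 0 is the searcher
        hop = {0: 0}                                            # node -> hop count of its copy
        t, success = 0.0, False
        while not success:
            t += rng.expovariate(beta * pair_count)             # time to the next meeting
            if t >= t_max:
                break
            a, b = rng.sample(range(n_nodes), 2)
            for src, dst in ((a, b), (b, a)):
                if src in hop and hop[src] < h_limit and dst not in hop:
                    hop[dst] = hop[src] + 1
                    if dst in tagged:
                        success = True
        hits += success
        discovered_total += len(hop) - 1
    return hits / runs, discovered_total / runs

if __name__ == "__main__":
    for h in (1, 2, 3, 5, 8):
        p_fwd, n_h = simulate_forward(h_limit=h)
        print(f"h = {h}: P_forward ~ {p_fwd:.3f}, E[N_h(t)] ~ {n_h:.1f}")
```

running the sketch with increasing h reproduces the qualitative pattern reported above : the second hop brings the largest jump in both the discovered - set size and the forward success ratio , and the gains saturate after a few hops .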
as stated before, infocom06 has good connectivity among nodes as conference takes place in a closed area and nodes share the same schedule ( e.g. , coffee breaks and sessions ) . as a result of this good connectivity ,first one or two hops provide almost all the benefits of multi - hop search ( fig .[ fig : infocomnobenefitpoints_98_benefit5e-04fair ] ) .however , fig.[fig : infocomnobenefitpoints_98_benefit5e-04any ] shows that the optimal hop in terms of the highest is achieved at higher . for hcs scenario that represents a network of urban scale ,the effect of increasing is a bit different .[ fig : helsinkifair ] shows that short and longer may have the same operating point in terms of . for low , the benefit of increasing diminishes due to the limited number of encounters and the resulting small set of discovered nodes , whereas for long almost all nodes are discovered without requiring any further hops . the lower with increasing also be explained by `` _ _ shrinking diameter _ _ '' phenomenon , which states that the average distance between two nodes decreases over time as networks grow .indeed , the diameter of the message search tree gets shorter over time and leads to a smaller hop distance between the searching node and a tagged node .note the decreasing _ any - benefit _ hop in fig .[ fig : helsinkiany ] .for example , searching for a content item with medium availability achieves the highest performance at for s , while it decreases gradually to with increasing .table [ tab : searchtime_markovchain ] shows the search time which is obtained by solving ctmc introduced in section [ sec : ttsearchcompletion ] and given by ( [ eq : tmhapproxm1 ] ) .we normalize every value by the maximum search time of each setting . in the table, we also list the approximation errors .first thing to note is the drastic decrease in search time at the second hop . in agreement with our conclusions from , we see that second hop speeds the search significantly resulting in approximately shorter search time for the low availability , for the medium , and decrease for the high availability scenario . as ,the most significant improvement in search time occurs for low content availability .our approximation also exhibits exactly the same behaviour in terms of the change in the search time .as the error row shows , it deviates from the expected search time to some degree : under - estimation to overestimation . [ ! htb ] .search time and its approximation . [ cols="<,<,^,^,^,^,^",options="header " , ] [ tab : correlationanalysis ] table [ tab : correlationanalysis ] summarises this analysis for infocom06 , which also agrees with the analysis for cabspotting scenario .first , we do not observe any strong correlation between return and forward path lengths for any of the settings ( i.e. , and are around 0.10.4 ) .this result may be conflicting with the intuition that the information found in nearby / far - away will also be routed back quickly / slowly . however , search process is more complicated due to the intertwined effects of mobility and restrictions on the total search time .for example , consider a forward path with a very large number of hops . due to the remaining short time before the tolerated waiting time expires , search can only go a few hops towards the searching node .this leads to a long forward path with a very short return path which challenges the above - mentioned intuition . 
with higher content availability, the required number of forward hops decreases whereas the return path length seems to be barely affected .hence , the corresponding and increase . for all scenarios ,return path is on the average longer than the forward path .using this observation , a tagged node receiving a query can set the time - to - live field of its response message longer than the received query s forward path time . obviously , increasing hop count increases the neighborhood which in turn increases the chance of finding the sought content .however , the larger neighborhood should also be interpreted as larger number of replication , i.e. , higher search cost . in fact , the neighborhood size represents the upper bound of number of replications for a search message if no other search stopping algorithm is in effect . in this section, we evaluate the cost of search considering two simple mechanisms that aim to keep replication much below the upper bound set by .we consider three cases : ( i ) oracle : there is a central entity from which a node retrieves the global state of a message in its buffer and can ignore it if outdated , e.g. a completed search or a query already reaching a tagged node , ( ii ) exch : upon encounter , nodes exchange their local knowledge about search activities in the network , ( iii ) local : nodes exchange their knowledge _ only _ on the shared messages .note that an oracle can be an entity in the cellular network and nodes can access it via a control channel .in fact , this communication with the cellular network is more costly ( e.g. , energy ) compared to the opportunistic communication .nevertheless , this scenario serves as an optimal benchmark to assess the performance of the other scenarios .exch pro - actively spreads information about existing queries to all nodes opportunistically , which may be considered as leaking information to nodes not involved in search .local circumvents this by only using its local knowledge and sharing information with peers only about queries that the other node has already seen .[ fig : costoraclevsnooracle ] shows _ query spread ratio _ which is defined as i.e. , the fraction of nodes that have seen this query , and search time .first , we should note that the resulting search success ( not plotted ) is almost the same under all schemes .second , note the non - increasing query spread ratio for .this result confirms that the network is a small - world network where all nodes get informed quickly about the search status and drop the outdated messages timely .hence , even for larger , nodes detect the outdated messages via local and shared knowledge . in fig .[ fig : queryspreadratio15_600_015 ] , we observe that exch maintains the same performance as oracle , whereas local results in more replication as fewer nodes are informed about the completed queries / responses . nevertheless , because of the small network diameter , a higher does not result in an explosion of query spread in the network .regarding search time , we observe substantial decrease in search time also for and in contrast to the vanishing benefits after second hop derived from our analytical model ( table [ tab : searchtime_markovchain ] ) .search time tends to stabilise after .hence , although several hops are sufficient in terms of search success , further hops ( e.g. 
, ) can be considered for faster search .our analysis shows that the two factors affecting the optimal hop count are the content availability ( ) and the product of meeting rate and tolerated waiting time ( ) .the latter is the number of meetings before tolerated waiting time expires where depends on network density and mobility model .if both and are low ( scarce content and very few contacts due to network sparsity or short search time ) , search performance is expected to be low .however , it increases with increase in either of these factors . if one of these factors is large , it is sufficient to have a low hop count limit ( e.g. , two or three ) to obtain good performance .that is , when is large relative to the expected number of tagged nodes met during the search time , limiting the search to a few hops still achieves good results . as decreases , the required hop limit to maintain good search performance is larger .this allows devising adaptation when issuing search queries .nodes can monitor the request and response rate for certain ( types of ) content items and thus infer popularity and availability in their area .they can also assess the regional node density and meeting rate and thus determine the required hop count given or vice versa .moreover , nodes can monitor search performance and determine how well their ( region in a ) network operates . to improve performance, nodes can decide to increase the availability of selected contents via active replication of the scarce content , obviously trading off storage capacity and link capacity for availability .such decision could be based entirely upon local observations , but could also consider limited information exchange with other nodes .we believe that with the guidance of our analysis , a node can decide on the best hop count depending on the network density and the content availability that can both be derived from past observations by the node .although we show that search cost stops increasing after a few hops , keeping low may be desirable if we interpret it as a measure of the social relation between two nodes ( e.g. , one hop as friends ; two hop as friend - of - friend ) . in other words , lower can be interpreted as more _ trustworthy _ operation .moreover , protocols involving lower number of relays are more scalable and energy - efficient .while the dtn literature has many proposals for message dissemination , which exploit the information about the network such as ( estimated ) pairwise node contact rates , node centrality , communities , and social ties , efficient mechanisms for content search remain largely unexplored . in a sense , this is reasonable as search can be considered as a two - step message delivery : query routing on the forward path and the response routing on the return path .the forward path is less certain as the content providers are unknown . in this regard ,the return path is less challenging as the target node ( and a recent path to reach it ) is already known .however , routing the response looks for a particular node , whereas the forward search is for a _ subset of nodes _ whose cardinality is proportional to the content availability .thus , search requires special treatment rather than being an extension of the message dissemination .two questions for an efficient search are ( i ) which nodes may have the content and ( ii ) when to stop a search . 
the former question requires assessment of each node in terms of its potential of being a provider for the sought content .for example , _ seeker assisted search _ ( sas ) estimates the nodes in the same community to have higher likelihood of holding the content as people in the same community might have already retrieved the content .given that people sharing a common interest come together at a certain space " ( e.g. , office , gyms ) , defines _ geo - community _ concept and matches each query with a particular geo - community .hence , the first question boils down to selecting relays with high probability of visiting the target geo - community . as aims to keep the search cost minimal , searching node employs two - hop routing and determines the relays , which thereby reduces the issue of when to stop the search .deciding when to stop search is nontrivial as the search follows several paths and whether the searching node has already discovered a response is not known by the relaying nodes .et al . _ model the expected search utility with increasing hop count and then finds the optimal hop , while estimates the number of nodes having received a query and possible responses by using the node degrees . in our work, we showed that simple schemes via information sharing can stop search timely and maintain similar performance to that of an oracle due to the small world nature of the studied networks .different from all these above works , main focus of our paper is more on a fundamental question : _ how does the hop limitation affect the search performance ? _while this question has been explored in general networking context , content - centricity and the time constraints require a better understanding of flooding in the context of search .therefore , we first provided insights on search on a simplified setting and next analyzed the effect of various parameters , e.g. , time and real mobility traces , via extensive simulations .in the literature , several works focus on two - hop forwarding in which the source node replicates the message to any relay and the relays can deliver the message only to the final destination .et al . _ in their seminal work show that two hops are sufficient to achieve the maximum throughput capacity in an ideal network with nodes moving randomly .the capacity increase is facilitated by the reduced interference on the links from source to relay and relay to the destination .et al . _ assess this two - hop forwarding scheme in a dtn scenario with power - law inter - contact times and employ an _ oblivious forwarding algorithm _( e.g. , memoryless routers that do not use any context information such as contact history ) .similarly , focuses on a dtn with power - law distributed inter - contact times and derives the conditions ( i.e. range of pareto shape parameter ) under which message delivery has a finite delay bound for both two - hop and multi - hop oblivious algorithms .the authors show that `` _ _ as long as the convergence of message delivery delay is the only concern , paths longer than two - hops do not help convergence _ _ '' as two hops are sufficient to explore the relaying diversity of the network .another work supporting two - hop schemes is which shows that two - hop search is favourable for opportunistic networks a resource - scarce setting , as _ one - hop neighbors are able to cover the most of the network in a reasonable time _ " in a network with _sufficiently many _ mobile nodes . 
in our work, we include those cases when longer paths still yield ( some ) performance improvement .finally , our work supports the conclusion of , which theoretically proves the _ small - world _ in human mobile networks .our work is closely related to the -hop flooding which models the spread of flooded messages in a random graph .unlike , we focus on content search in a realistic setting , and using real mobility traces ( both a human contact network and a vehicular network ) we provide insights on the effect of increasing hop count on the search success , delay , and cost under various content availability and tolerated waiting time settings . despite the differences , it is worthwhile noting that our results agree with the basic conclusions of .given the volume of the content created , downloaded , and stored in the mobile devices , efficient opportunistic search is paramount to make the remote content accessible to a mobile user .current schemes mostly rely on routing based on hop limitations . to provide insights about the basics of such a generic search scheme, we focused on a hop - limited search in mobile opportunistic networks . first , by modelling only the forward path , we showed that ( i ) the second hop and following few hops bring the highest gains in terms of forward path success ratio and ( ii ) compared to single - hop delivery , increase in hop count leads to shorter search time and after a few hops search time tends to stabilize .next , we revisited these findings via simulations of the entire search process .while simulations validated our claim for the forward path , we observed that return path on average requires longer time and more hops .moreover , our results do not indicate strong correlation between the return and forward paths .finally , we showed that search completes in less than five hops in most cases .this is attributed to the small diameter of the human contact network which has also a positive impact on search cost ; nodes are informed about search state quickly and stop propagation of obsolete messages .our simulations validated that increasing hop count to several hops accelerates the search and later search completion time stabilizes .as future work , we will design a search scheme that adapts the hop count of query and response paths based on the observed content availability and popularity .moreover , we believe that content - centric approaches should be paid more attention to implement efficient search schemes in mobile opportunistic networks .this work was supported by the academy of finland in the pdp project ( grant no .260014 ) .10 l. a. adamic , r. m. lukose , and b. a. huberman .local search in unstructured networks . , 2006 .a. al hanbali , p. nain , and e. altman . performance of ad hoc networks with two - hop relay routing and limited packet lifetime ., 65(6):463483 , 2008 .s. bayhan , e. hyyti , j. kangasharju , and j. ott .seeker - assisted information search in mobile clouds . in _acm sigcomm workshop on mobile cloud computing _ , 2013 .s. bayhan , e. hyyti , j. kangasharju , and j. ott .analysis of hop limit in opportunistic networks by static and time - aggregated graphs . in _ ieee icc _ , 2015 .c. boldrini , m. conti , and a. passarella .less is more : long paths do not help the convergence of social - oblivious forwarding in opportunistic networks . in _ acm int .workshop on mobile opportunistic networks _ , 2012 .a. chaintreau , p. hui , j. crowcroft , c. diot , r. gass , and j. 
scott .impact of human mobility on opportunistic forwarding algorithms ., 6(6):606620 , 2007 .a. chaintreau , a. mtibaa , l. massoulie , and c. diot .the diameter of opportunistic mobile networks . in _acm conext _ , 2007 .k. fall . a delay - tolerant network architecture for challenged internets . in _ acm sigcomm _ , 2003 .m. faloutsos , p. faloutsos , and c. faloutsos . on power - law relationships of the internet topology . in _acm sigcomm computer communication review _ , volume 29 , pages 251262 , 1999 .j. fan , j. chen , y. du , p. wang , and y. sun .delque : a socially aware delegation query scheme in delay - tolerant networks . ,60(5):21812193 , jun 2011 .w. gao , q. li , and g. cao .forwarding redundancy in opportunistic mobile networks : investigation , elimination , and exploitation . , 2014 .p. gill , m. arlitt , z. li , and a. mahanti .youtube traffic characterization : a view from the edge . in _acm sigcomm conf . on internet measurement _ , 2007 .m. grossglauser and d. tse .mobility increases the capacity of ad hoc wireless networks ., 10(4):477486 , aug 2002 .r. gurin and a. orda .computing shortest paths for any number of hops ., 10(5):613620 , 2002 .m. a. hoque , x. hong , and b. dixon .efficient multi - hop connectivity analysis in urban vehicular networks ., 1(2):7890 , april 2014 .e. hyyti , s. bayhan , j. ott , and j. kangasharju .searching a needle in ( linear ) opportunistic networks . in _acm mswim _ , 2014 .e. hyytia and j. ott .criticality of large delay tolerant networks via directed continuum percolation in space - time . in _ieee infocom _ , 2013 .a. kernen , j. ott , and t. krkkinen . .in _ proc . of int .conf . on simulation tools and techniques _ , 2009 .j. leskovec , j. kleinberg , and c. faloutsos .graphs over time : densification laws , shrinking diameters and possible explanations . in _acm kdd _ ,j. ott and m. pitknen . .in _ the first ieee wowmom workshop on autonomic and opportunistic communications ( aoc ) _ , june 2007 .m. piorkowski , n. sarafijanovic - djukic , and m. grossglauser .data set epfl / mobility ( v. 2009 - 02 - 24 ) , feb . 2009 .downloaded from http://crawdad.org / epfl / mobility/. m. pitknen , t. karkkainen , j. greifenberg , and j. ott .searching for content in mobile dtns . in _ieee percom _ , 2009 .f. qian , k. s. quah , j. huang , j. erman , a. gerber , z. mao , s. sen , and o. spatscheck .web caching on smartphones : ideal vs. reality . in _acm mobisys _ , 2012 . .r foundation for statistical computing , vienna , austria , 2013 .j. scott , r. gass , j. crowcroft , p. hui , c. diot , and a. chaintreau .data set cambridge / haggle ( v. 2006 - 01 - 31 ) .http://crawdad.org/cambridge/haggle/ , jan .m. vojnovic and a. proutiere .hop limited flooding over dynamic networks . in _ieee infocom _ , 2011 .given that the message is received by nodes and each node has probability of holding the requested content , the probability that an initiated query reaches one of the content provider(s ) equals to the probability that at least one of the nodes has the content .we denote then the forward success ratio as : if we consider the whole search path with the assumption that nodes at maximum holds the query and only copies are allowed for the response message , we calculate the search success ratio , i.e. 
, both steps are completed , as : the first factor in ( [ eq : ps_appendix ] ) , corresponding to the event that content providers are discovered , obeys binomial distribution with parameters .the second factor is conditioned on , the number of content providers reached by the query replicas . out of these responses, we calculate the probability that at least one of them reaches the destination , i.e. , the searching node . with the assumption that copies of the response is allowed on the return path , to calculate the probability of finding , we find the probability that none of the responses reaches .since there are nodes a response message created by a particular content provider can reach and selection without replication , the probability that a response reaches equals to : substituting these formulations into ( [ eq : ps_appendix ] ) , we find : separating ( [ eq : app_0 ] ) into two parts , we find : next , we notice that each summation term above can be compactly written by the help of the binomial theorem , which is : taking into account that the summation in ( [ eq : app_1 ] ) starts from 1 , we simplify the first term in ( [ eq : app_1 ] ) as follows : applying the same expansion for the second term in ( [ eq : app_1 ] ) , we find : substituting ( [ eq : app_first_term ] ) and ( [ eq : app_second_term ] ) into ( [ eq : app_1 ] ) , we find : which gives us the search success probability :
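The two-step model above can also be checked numerically. The following Python sketch is a simplified Monte Carlo under the stated assumptions (a fixed number of nodes reached by the query, uniform and independent content placement, and a bounded number of response copies handed to distinct nodes chosen uniformly at random); it is an illustration rather than a reproduction of the appendix's exact bookkeeping, and all parameter names and values are ours.

```python
import random

def forward_success(n_reached, p_avail):
    # Closed form used above: the query succeeds if at least one of the
    # n_reached nodes holds the requested content.
    return 1.0 - (1.0 - p_avail) ** n_reached


def simulate_search(n_reached, p_avail, copies_back, network_size, trials=200_000):
    # Monte Carlo sketch of the two-step search: providers are discovered
    # binomially among the reached nodes, and each provider's response
    # reaches the searcher with probability copies_back / (network_size - 1)
    # (copies handed to distinct nodes chosen uniformly at random).
    p_hit = copies_back / (network_size - 1)
    successes = 0
    for _ in range(trials):
        providers = sum(random.random() < p_avail for _ in range(n_reached))
        if providers and any(random.random() < p_hit for _ in range(providers)):
            successes += 1
    return successes / trials


if __name__ == "__main__":
    print(forward_success(n_reached=20, p_avail=0.05))        # forward path only
    print(simulate_search(n_reached=20, p_avail=0.05,
                          copies_back=10, network_size=100))  # full two-step search
```

For the forward path alone, the simulated value agrees with the closed form stated at the start of the appendix; the full two-step estimate depends on the assumed return-copy budget.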
while there is a drastic shift from host - centric networking to content - centric networking , how to locate and retrieve the relevant content efficiently , especially in a mobile network , is still an open question . mobile devices host an increasing volume of data which could be shared with nearby nodes in a multi - hop fashion . however , searching for content in this resource - restricted setting is not trivial due to the lack of a content index , as well as the desire to keep the search cost low . in this paper , we analyze a lightweight search scheme , _ hop - limited search _ , that forwards the search messages only up to a maximum number of hops and requires no prior knowledge about the network . we highlight the effect of the hop limit on both search performance ( i.e. , success ratio and delay ) and the associated cost , along with the interplay between content availability , tolerated waiting time , network density , and mobility . our analysis , using real mobility traces as well as synthetic models , shows that the most substantial benefit is achieved at the first few hops and that after several hops the extra gain diminishes as a function of content availability and tolerated delay . we also observe that the _ return path _ taken by a response is on average longer than the _ forward path _ of the query and that the search cost increases only marginally after several hops due to the small network diameter .
it is possible to consider that cognitive psychology appeared as a reaction to behaviorist approaches , where the mental content plays the role of a black box . in contrast, this content constitutes a central issue in cognitive sciences .consequently , the use of computers to implement or imitate human intellectual tasks naturally emerged as a methodological tool , and even as a powerful metaphor , in the investigation of mental processes such as intelligence , learning , memorization , among others .it is along these lines that we prepared a sequence of experiments with humans , to be later on compared with similar experiments , or simulations , done with computers provided with appropriate algorithms , in particular simple perceptrons .it should of course be clear that such comparisons have philosophical implications ( see , for instance , ) , which we address in the present work only in the conclusions ( section v ) . in the present study ,two different memorization tasks were implemented .the first of them was relatively simple , namely the memorization of simple binary codes in and matrices .most of the present effort is dedicated to this analysis .the second task was , intellectually speaking , sensibly more complex .it consisted of learning ambiguous images , where the figure - background reversal is crucial .although semantic and strategic aspects are present in both learning tasks , the second one is by far more delicate and reveals higher level cognitive phenomena .consistently , our study leads to quantitative information concerning the first task , whereas only some qualitative features were determined for the latter . at the level of the comparison with computational simulations ,our emphasis is put on whether learning occurs in an _ extensive _ or a _ nonextensive _ manner .these terms will be mathematically defined and further analyzed in section iii .they are currently used in statistical mechanics and thermodynamics , branches of physics dedicated to the study of the connections between the microscopic and the macroscopic worlds . at the present stage ,it is enough to think of extensivity as a form of _ nonglobality _ or _ locality _ , as opposed to nonextensivity or _globality _ or _nonlocality_. in a loose manner , they respectively correspond to the _ molecular _ and _ molar _ approaches in psychology , i.e. , the system is perceived as the sum of its parts , or as different from the sum of its parts . in sectionii we present the experimental study with humans . in section iiiwe describe the entropic concepts that we use with regard to the computational simulations that have been implemented . in sectioniv we compare both approaches , namely with humans and computers . finally , we conclude in section v.the experiment consists in individually learning fixed binary codes ( represented by the signs * * and * * ) in matrices .it was presented to each person , initially one matrix ( see fig . 1 ) and , then , two binary matrices ( see figs . 2 and 3 ) . 
in order to avoid any kind of uncontrolled cultural effect , the location of the symbols and in all the matrices was randomly generated by computer and fixed once for ever .it was carried out with approximatively 150 university students whose specialty was not directly related to geometry , mathematics or images , in order to avoid professional bias .for instance , students of physics , mathematics , engineering , architecture , visual publicity were excluded .the experiment population was mainly constituted by students of psychology , administration , social service and social comunication of the federal university of rio de janeiro .approximately 30 students were used for preliminary tests in order to fix an optimal experimental protocol ( matrix sizes , binary code of each matrix , exhibition and hiddening times , among others ) .then , precisely 120 ( 92 female and 28 male ) students were exposed to the same protocol , and only their results were taken into account for quantitative purposes in the statistical processing ( averaging , in particular ) .their ages ranged between 17 and 27 years old , the majority of them being around 22 years old .each of them was sequentially isolated in a peaceful room and tested , for about 40 minutes , by one of us .information about the scope of the research was given to each one of the 120 individuals .it was told that the experiment was not measuring their intelligence ( so that they would feel relaxed ) , and that it was important for understanding how learning occurs ( so that they would seriously try to perform satisfactorily ) .after these instructions were given to each one of the 120 students , three matrices were shown . in all cases ,a matrix was first shown , one of the two complementary matrices ( i.e , obtained by interchanging the symbols * + * and * o * ) .then , one of the four matrices was shown ( matrix in fig .2 to 30 students , matrix in fig . 3 to other 30 , and so on ) . then the other noncomplementary matrix was shown .the sequence of each individual test was as follows : + _ ( 1 ) _ after some general explanations , an empty matrix and the and symbols were shown , and the subjects were asked to randomly fill the matrix with the two symbols .this step aimed to provide to the individuals some familiarity with the experiment ; + _ ( 2 ) _ then fig . 1 was exhibited during 8 seconds and then hidden . the student had to try to reproduce it on the spot on an empty matrix .when this was done , the operation was repeated after a rest interval of 10 seconds .the matrix was never shown more than 10 times ( the individuals started feeling tired after 10 times ) .the matrix was considered to be learnt if no error was made after two successive exhibitions ; + _ ( 3 ) _ then the learning test was repeated by successively using two of the four matrix , one at a time ; + _ ( 4 ) _ finally , the student was asked to briefly describe how he(she ) proceeded to learn . the _ error _ of the -th individual ( ; corresponds to the initial random filling ) is defined as the number of wrong elements of the filled matrix ( ) ; .typical results are presented in figs . 4 , 5 and 6 for the and in figs . 
7 and 8 for one of the four matrices ( the figures associated with the other three matrices are quite similar in fact ) .although in a quite different context , results that have some connection with the present ones have been exhibited in .the averages ( with if the entire set of students is used for averaging ) are shown in fig .these curves already achieved the aspect presented in this figure when the averages were performed with approximately 80 individuals . using the entire set of 120 results just improved the precision but incorporated no new qualitative elements in the curves .if we define the _ learning time _ as the number of times shown before learning , excluding those who did not suceed learning that particular matrix untill the end of the experiment , we can check that it is of the order of 6 .incidentally we verified an interesting ( and indeed unexpected ) cultural phenomenon .for the matrix we were expecting to be close to since , before starting to show the codified matrix , we asked to _ randomly _ fill the empty matrix with symbols * o * and * + * , indicated in _ this _ order above the empty matrix , if we look at them from left to right . in variance with this reasonable expectation , we found , for the first 60 students , .after some hesitation about what could be the cause of this asymmetry ( e.g. , could it be the different semantics humans associate with a circle or a cross ? ) , we speculated that it could be the fact that portuguese language ( within which the brazilian population is educated ) is read _ from left to right_. then , to the second and last set of 60 students , an empty fig .1 was presented with the symbols in the ordering * + o * instead of * o + * .very symptomatically , we then obtained , the overall average being consequently , reasonably close to 12.5 as initially expected ! 
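A minimal sketch, in Python, of how the per-presentation error and the averaged learning curve can be computed from the reproduced matrices; the 5 x 5 size used in the example follows from the 25 binary elements and the expected random-filling error of 12.5 discussed above, while the function names and the synthetic reproductions are illustrative.

```python
import numpy as np

def reproduction_error(target, attempt):
    # e = number of wrong elements in the reproduced binary matrix,
    # i.e. a Hamming distance between the shown code and the reproduction.
    return int(np.sum(np.asarray(target) != np.asarray(attempt)))


def learning_curve(target, attempts_per_subject):
    # Average error over subjects at each presentation; each row of
    # attempts_per_subject holds one subject's successive reproductions.
    errors = np.array([[reproduction_error(target, a) for a in subject]
                       for subject in attempts_per_subject])
    return errors.mean(axis=0)


def presentation_learned(target, attempts):
    # Presentation at which a subject "learns" the code: the first of two
    # consecutive error-free reproductions, mirroring the protocol above.
    e = [reproduction_error(target, a) for a in attempts]
    for t in range(len(e) - 1):
        if e[t] == 0 and e[t + 1] == 0:
            return t + 1          # 1-based presentation count
    return None                   # did not learn within the session


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    code = rng.integers(0, 2, size=(5, 5))                          # 25 binary elements
    guesses = [rng.integers(0, 2, size=(5, 5)) for _ in range(3)]   # random reproductions
    print([reproduction_error(code, g) for g in guesses])           # errors near 12.5 on average
```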
to confirm this cultural cause of the observed asymmetry , it would be interesting to repeat the experiment with say arabic students ( educated within _ right - to - left _ reading ) .another interesting feature that we observed is that , for many individuals , , thus systematically contradicting the overall monotonic tendency of to decrease with time .the reason for this kind of a priori unexpected behavior appeared to be ( as commented by the individuals themselves during the free final conversation ) that , after seeing for the first time the code to be learnt , the student dedicated a good part of his ( her ) attention to establish astrategy " for learning rather to properly learn the matrix .let us mention , by the way , that in an experiment like the present one it is quite hard to differentiate between learning the strategy and memorizing the matrix within that strategy .this kind of modelization is supported by the fact that , several months after conclusion of the experiment , quite a few students still remembered the strategy , while they had completely forgotten the particular matrix code itself .let us now briefly comment the second experiment we developed .we chose several ambiguous images , all of them being susceptible of two mutually excluding interpretations on the basis of figure - background reversal .for example , if one sees in fig .15 a young woman , one does not simultaneously see the old woman , and reciprocally .we implemented this more complex learning task by showing the image and then asking to the individual what he sees .then we tried to count the time needed by the person in order to recognize the other image interpretation .this perception mechanism is sometimes referred to as _ reversal _ of the figure - background .it turned out that the times involved in this type of experiment , and very specifically the slowness of learning , if any , how to recognize the reversal , were so ill - defined that we decided not to proceed with this protocol .the question of how to conveniently quantify such learning remains , therefore , an open question ( see also ) .a great variety of computer learning algorithms are available in the literature .it is clear that all of them process , in one way or another , information .entropy is well known to be a convenient tool for quantifying ( lack of ) information .it can therefore be used in the context of any learning algorithm , at least in principle .we shall use it here in connection with the specific perceptron we shall describe later on . for convenience ,let us briefly review at this point some basic notions about the entropic forms we are referring to in the present paper .the boltzmann - gibbs - shannon ( bgs ) entropy is the basis of standard statistical mechanics and thermodynamics .it is defined ( in its discrete version ) by where is the number of microscopic possibilities accessible to the system , and are the associated probabilities ( ) ; for simplicity , we have taken boltzmann constant equal to unity . this entropy becomes maximal at equiprobability , i.e. , for all , and achieves the value which is the celebrated boltzmann formula . if we consider a composite system made of two ( probabilistically ) independent systems and , i.e. , if we assume that , and replace this into eq .( 1 ) , we straightforwardly obtain which can be phrased as _ the entropy of the whole is the sum of the entropies of the parts_. 
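Both the equiprobability maximum and the additivity property are easy to confirm numerically; the short Python sketch below evaluates eq. (1) for arbitrary example distributions (the distributions themselves are illustrative and not taken from the experiment).

```python
import numpy as np

def shannon_entropy(p):
    # Eq. (1) with Boltzmann's constant set to unity: S = -sum_i p_i ln p_i.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))


if __name__ == "__main__":
    p_a = np.array([0.6, 0.3, 0.1])      # arbitrary example distributions
    p_b = np.array([0.5, 0.5])
    p_ab = np.outer(p_a, p_b).ravel()    # joint law when A and B are independent
    # Additivity: S(A+B) = S(A) + S(B) for independent systems.
    print(shannon_entropy(p_ab), shannon_entropy(p_a) + shannon_entropy(p_b))
    # At equiprobability over W states the entropy attains ln W.
    print(shannon_entropy(np.full(4, 0.25)), np.log(4))
```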
the entropic form ( 1 ) is the basis of standard , boltzmann - gibbs ( bg ) , statistical mechanics and thermodynamics , and property ( 3 ) is known as _extensivity _ or _ additivity_. this entropy , as well as others , are in some sense ubiquitous .indeed , they emerge in a great variety of discussions .for example , they have often been used concerning complex phenomena such as the organization of living matter ( see , for instance , ) , as well as other types of organization , including that of knowledge ( e.g. , memorization and learning ) , economics , linguistics , to mention but a few ( see , for instance , ) . before addressing the generalization of we are interested in here ,let us mention that optimization of eq .( 1 ) in the presence of a constraint of the type ( where might be say microscopic energy levels ) , leads to where is a parameter to be determined through the value of the constraint . in statistical mechanics ,( 4 ) is in fact the celebrated boltzmann - gibbs weight . in 1988 ,one of us ( ct ) proposed the generalization of bg statistical mechanics on the basis of a more general entropic form , namely being any real number .we can verify that , in other words , the bg formalism becomes now the particular case of this more general formalism .if we assume , once again , two independent systems and , we can prove that which can be phrased as _ the generalized entropy of the whole is different from the sum of the generalized entropies of the parts_. this property is referred to as _nonextensivity _ or _nonadditivity_. to be more precise , if we take into account that is always zero or positive , eq .( 6 ) implies that if and if .only if we have that .it is from property ( 6 ) that the terms _ nonextensive _ statistical mechanics and thermodynamics have been coined ( for reviews see ) .analogously to what we did before , if we optimize in the presence of the constraint , we obtain ^{1/(1-q ) } \;,\ ] ] where , as before , is a parameter to be determined through the value of the constraint .notice that for the normal regime for , i.e. 
, , we have , for large values of , a long _ power - law _tail for , whereas we have a short _ exponential _tail for ; finally , for , we have a cutoff .( 7 ) can be re - written in the boltzmann - gibbs form , namely where \;.\ ] ] in other words , in what concerns the optimizing distribution , plays the role of an effective energy which replaces ( in the limit , of course we recover ) .this effective energy can be used to a variety of microscopic and mesoscopic equations .one such example is the langevin equation , on which the perceptron that we use here has been constructed .matrix.,width=226 ] matrix .this matrix is referred to as the matrix .its dual matrix is obtained by permutating the and the symbols , and is referred to as the matrix.,width=207 ] matrix .this matrix is referred to as the matrix .its dual matrix is obtained by permutating the and the symbols , and is referred to as the matrix ., width=207 ] matrix .the three left correspond to three different individuals ( the abscissa is the number of presentations of the matrix ; the ordinate is the number of matrix elements incorrectly reproduced ) .the three right ( see [ 16 ] for further details ) correspond to three different initial conditions for the perceptron ( the abscissa is the number of iterations ; the ordinate is the percentual error).,width=340 ] matrix as a function of the number of presentations : circles , squares and triangles respectively correspond to averaging over , and individuals.,width=302 ] matrix .the abscissa is the ordinal of the presentation at which the individual learnt the matrix : 82 ( out of 120 ) individuals learnt the matrix before or at the 9th presentation ( we recall that the 10th presentation was used to confirm the learning at the 9th one ) ; 38 individuals did not succeed ( and are not computed in the histogram).,width=264 ] matrix as a function of the number of presentations : the circles correspond to averaging the results of individuals to whom the matrix was shown in _ first _ place ; the squares correspond to averaging the results of individuals to whom the matrix was shown in _second _ place ( after having seen , in _ first _ place , the matrix ) ., width=264 ] matrix , either the or the ones .the abscissa is the ordinal of the presentation at which the individual learnt the matrix : 50 ( out of 60 ) individuals learnt the matrix before or at the 9th presentation ( we recall that the 10th presentation was used to confirm the learning at the 9th one ) ; 10 individuals did not succeed ( and are not computed in the histogram).,width=264 ] matrix ) as a function of the number of iterations ( is a parameter of the perceptron ) .the perceptron has been chosen with binary inputs , in order to simulate the human task ( dots ) as closely as possible on the average ( taken on 92 individuals in this example ) .see [ 16 ] for further details.,width=321 ] matrix ) as a function of the number of iterations ( is a parameter of the perceptron ) .we notice the extreme sensitivity to the value of in the neighborhood of .see [ 16 ] for further details.,width=321 ] matrix , where is the value of at which the error becomes half of its value at .the dots correspond to averaging the experimental data with humans .the continuous curve has been obtained averaging a large number of initial conditions for the perceptron with the gain parameter , the temperature - like parameter and the entropic index .it is with these values that optimal fitting was obtained for the experimental data .the parabolic 
extrapolation of the experimental data corresponding to and provides , for , the value of for the percentual error , which corresponds to for the absolute error .therefore the value of corresponds to the value of at which the absolute error equals . since no integer value of corresponds exactly to this value , a linear interpolation has been performed , yielding for the experimental data with 120 individuals .see [ 16 ] for further details.,width=321 ] as we see , the entropic property ( 6 ) has a kind of _ gestalt_-like flavor .its use constitutes a natural choice if we desire to deal with informational phenomena involving global or nonlocal aspects .since this might well be the case of human learning , we have adopted this formalism in order to have the possibility of comparing human and machine learnings .to do so , a nonextensive perceptron has been implemented which performs a task similar to the learning of the and matrices that were exposed to the students , according to the experimental protocol we described earlier . in order to perform calculations, the perceptron needs an internal dynamical equation . to fulfill this requirement , the -generalized langevin equation previously introduced by stariolo implemented in the perceptron .some typical runs of the perceptron are shown in fig .4 , and typical averages are shown in figs .16 , 17 and 18 ( from ) .the purpose of the present section is to compare the results obtained with humans and those obtained with the nonextensive perceptron .the comparison will be illustrated on the learning / memorizing of the matrix .we shall verify that , for this specific task , the human and perceptron results can be amazingly similar .this can be checked on fig .4 . the three individual results ( on the left ) are indeed similar to the perceptron realizations with three different initial conditions ( on the right ) .the three human examples have been chosen as to exhibit typical cases .the three perceptron examples have been chosen in order to have an overall aspect similar to the human ones .averages over many realizations of the data just presented are shown ( with dots ) in figs .16 and 18 .we have rescaled the time variable in such a way that comparison becomes possible on the same graph .more precisely , we have expressed time in units of the corresponding half - time , defined as the value of time at which the average error curve decays to its half value .a rescaling such as this one is clearly necessary in order to quantitatively compare the results . indeed ,time " is here represented as the number of presentations , whereas perceptron time " essentially corresponds to the number of computer iterations .these two numbers being of a completely different nature ( see also ) , it is clear that rescaling becomes necessary .we may consider fig .18 as the central result of the present work .we verify that , for the specific task of learning / memorizing 25 binary states ( on a matrix for the humans ) , humans and machines are remarkably similar .of course , the parameters of the perceptron have been chosen in such a way as to optimize the overall fitting to the human data .the number of individuals that have been averaged is 120 , and we have verified that no sensible variation is obtained under increase of that number . 
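Because the fit is this sensitive to the entropic index near q = 1, a numerical illustration of the nonadditive property (6) may be useful. The sketch below evaluates the standard q-entropy expression S_q = (1 - sum_i p_i^q) / (q - 1), whose q -> 1 limit recovers eq. (1), and checks the pseudo-additivity relation S_q(A+B) = S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B) for independent systems; it illustrates the entropic form only, not the q-generalized Langevin perceptron of [16], and the example distributions are arbitrary.

```python
import numpy as np

def q_entropy(p, q):
    # S_q = (1 - sum_i p_i^q) / (q - 1); the q -> 1 limit recovers the
    # Boltzmann-Gibbs-Shannon entropy S_1 = -sum_i p_i ln p_i of eq. (1).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))


if __name__ == "__main__":
    p_a = np.array([0.6, 0.3, 0.1])     # arbitrary example distributions
    p_b = np.array([0.5, 0.5])
    p_ab = np.outer(p_a, p_b).ravel()   # joint law of two independent systems
    for q in (1.0, 0.98, 1.02):
        s_a, s_b, s_ab = q_entropy(p_a, q), q_entropy(p_b, q), q_entropy(p_ab, q)
        # Property (6): S_q(A+B) = S_q(A) + S_q(B) + (1 - q) S_q(A) S_q(B),
        # which collapses to plain additivity only at q = 1.
        print(q, round(s_ab, 6), round(s_a + s_b + (1.0 - q) * s_a * s_b, 6))
```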
in the language of statistical mechanics , we may say that we have practically attained the thermodynamic limit .there is a little bump in the human results at : we have not identified its origin ( perhaps fatigue of the tested individuals , perhaps something else ) .the perceptron does not exhibit such bump . as we see, the perceptron that fits the data has some degree of nonextensivity , as was conjectured in the beginning of the present work .although is quite close to unity , we must take into consideration the fact that the error curves have been shown to be ( see fig .17 ) extremely sensitive to the degree of nonextensivity in the neighborhood of ( extensive case ) .in summary , we have implemented an experimental study aiming to measure the learning / memorizing performance of humans on simple codes on matrices .it is on purpose that we simultaneously use the words learning " and memorization " .indeed , the experiments clearly showed that the improvement of correct answers was due to a mixture of memorization of the specific codified example and devising learning _ strategies _ ( symmetry rules and other mnemonic tricks ) in order to efficiently implement the memorization effort .in fact , after several months , we informally verified that the individuals had forgotten the codes , but still remembered the strategy they used for memorizing them . through comparison with machine results, we verified that this particular human task was executed with clear indication of a slight , though very efficient nonextensivity ( or globality ) quantified by the entropic index .this index appeared to be slightly _ above _ unity , which characterizes _ slower _ learning / memorizing ( the error curves takes longer to become basically zero ) , but perhaps higher ability for devising strategies .it is then allowed to conjecture that human nature evolved , during successive generations , not so much to strongly improve the speed associated with such kind of memorization , but rather to improve the capacity of spontaneously and quickly generating intellectual strategies for performing tasks such as memorization .we also applied a similar experimental protocol for learning / memorizing how to analyze complex figure - background images in order to quickly realize alternative ( typically two ) interpretations of the figures .we verified that , unless much more sophisticated experimental protocols and computational algorithms are deviced , cognitive tasks with important semantic content are by all means nontrivial to measure and compare .for such complex tasks , even more than for the simple binary learning / memorization addressed here , the role of strategies might well be fundamental , although this remains to be proved .on more general grounds , the scenario which emerges is that nonextensivity seems to serve to humans for achieving _ abduction _ , one of charles sanders pierce three basic forms of inference ( see and references therein ) . in other words ,given its intrinsic _ nonlocal _ nature ( strong collective correlations are necessary for making the entropic index to differ from unity ) , it is plausible that nonextensivity constitutes the structure necessary to make _ metaphors_. given the very high intellectual level attributed , since aristotle , to metaphors , it is allowed to think that it has some specific relation with the nature of the one that we might consider as the _ animal who makes metaphors_. 
we may use _ homo metaphoricus " _ to express this concept .further developments , on both philosophical and cognitive - psychological grounds , within the frame that we have outlined here would naturally be very welcome .they could reinforce or exclude the interpretation that our present human - machine comparison suggests .we thank a.b . lima and k.b .miziara for assistance during the early stages of the present work , as well as s.a .cannas and d.a .stariolo for making available to us their perceptron curves . computational assistance and useful remarks by l. silva , c. anteneodo , f. baldovin and m.p .albuquerque are acknowledged as well .we finally thank capes , cnpq , pronex and faperj ( brazilian agencies ) for partial financial support .h. atlan , _ application of information theory to the study of the stimulating effects of ionizing radiation , thermal energy , and other environmental factors _ , j. theoret* 21 * , 45 ( 1968 ) ; _ on a formal definition of organization _ , j. theoret. biol . * 45 * , 295 ( 1974 ) ; _ self - organizing networks : weak , strong and intentional , the role of their underdeter mination _ , in _ functional models of cognition _ , ed .a. carsetti ( kluwer academic , amsterdam , 1999 ) , p. 127 . c. tsallis , j. stat ._ possible generalization of boltzmann - gibbs statistics _ , j. stat .52 * , 479 ( 1988 ) ; e.m.f .curado and c. tsallis , _ generalized statistical mechanics : connection with thermodynamics _ , j. phys .a * 24 * , l69 ( 1991 ) [ corrigenda : * 24 * , 3187 ( 1991 ) and * 25 * , 1019 ( 1992 ) ] ; c. tsallis , r.s .mendes and a.r .plastino , _ the role of constraints within generalized nonextensive statistics _ , physica a * 261 * , 534 ( 1998 ) .a regularly updated bibliography can be accessed at http://tsallis.cat.cbpf.br/biblio.htm s.r.a .salinas and c. tsallis , eds.,_nonextensive statistical mechanics and thermodynamics _* 29 * ( 1999 ) ; s. abe and y. okamoto , eds . , _ nonextensive statistical mechanics and its applications _ , series _ lecture notes in physics _* 560 * ( springer - verlag , heidelberg , 2001 ) [ isbn 3 - 540 - 41208 - 5 ] ; p. grigolini , c. tsallis and b.j . west , eds . ,_ classical and quantum complexity and nonextensive thermodynamics _ , chaos , solitons and fractals * 13 * , number 3 , 371 ( pergamon - elsevier , amsterdam , 2002 ) ; c. tsallis , _ nonextensive statistical mechanics : a brief review of its present status _ , annals of the brazilian academy of sciences * 74 * , 393 ( 2002 ) ; g. kaniadakis , m. lissia and a. rapisarda , eds . , _ non extensive statistical mechanics and physical applications _ , physica a * 305 * , 129 ( 2002 ) ; m. gell - mann and c. tsallis , eds . , _ nonextensive entropy - interdisciplinary applications _ , ( oxford university press , new york , 2004 ) ; h.l .swinney and c. tsallis , eds ., _ anomalous distributions , nonlinear dynamics and nonextensivity _ , physica d * 193 * ( 2004 ) ; c. tsallis , _ algumas reflexoes sobre a natureza das teorias fisicas em geral e da mecanica estatistica em particular _ , in _ tendencias da fisica estatistica no brasil _ , ed . t. tome , volume honoring s.r.a .salinas ( editora livraria da fisica , sao paulo , 2003 ) , page 10 .aristotle , _ ars poetica _ [ the greatest thing by far is to be a master of metaphor .it is the one thing that can not be learned from others ; it is also a sign of genius , since a good metaphor implies an eye for resemblance " ] .
simple memorizing tasks have been chosen such as a binary code on a matrix . after the establishment of an appropriate protocol , the codified matrices were individually presented to 150 university students ( conveniently pre - selected ) who had to memorize them . multiple presentations were offered seeking perfect performance verified through the correct reproduction of the code . we measured the individual percentual error as a function of the number of successive presentations , and then averaged over the examined population . the _ learning curve _ thus obtained decreases ( almost monotonically ) until becoming virtually zero when the number of presentations attains six . a computer simulation for a similar task is available which uses a two - level perceptron on which an algorithm was implemented allowing for some degree of _ globality _ or _ nonlocality _ ( technically referred to as entropic _ nonextensivity _ within a current generalization of the usual , boltzmann - gibbs , statistical mechanics ) . the degree of nonextensivity is characterized by an index , such that recovers the usual , extensive , statistical mechanics , whereas implies some degree of nonextensivity . in other words , is a ( very sensitive ) measure of globality ( gestalt perception or learning ) . the computer curves fit well the human result for . it has been verified that even extremely small departures of from unity lead to strong differences in the learning curve . our main observation is that , for the very specific learning task on which we focus here , humans perform similarly to slightly nonextensive perceptrons . in addition to this experiment , some preliminary studies were done concerning the human learning of ambiguous images ( based on figure - background perception ) . in spite of the complexity of drawing conclusions from such a comparison , some generic trends can be established . moreover , the enormous and well known difficulty for computationally defining semantic , hierarchic and strategic structures reveals clear - cut differences between human and machine learning . .
perhaps more than any other technological contribution in baseball , the deployment of the pitchf / x system has proven to be an invaluable resource to teams and to fans of the game in their statistical analyses of baseball , both in its original form and as augmented by estimates of pitch spin parameters ( known as `` psuedo - spin '' ) .end users of the system , particularly brooksbaseball.net and mlb advanced media ( mlb - am ) , have classified these pitches into common pitch types , both by hand and using neural network classification methods .we hypothesize that the current method for classifying pitches in the database can be refined due to evidence from graphical exploration and previous research [ 1 ] .+ we improve the current pitch classification system by comparing the current neural network classification results to these from various statistical clustering methods , such as k - means , hierarchical clustering , and model - based clustering with a multivariate gaussian mixture model ( mbc ) .we ultimately propose an alternative basis for pitch clustering and classification . in section 2we describe the current methods used and the pitchf / x database . in section 3, we introduce the model - based clustering method , which has several advantages for examining pitch types including pitches with high variance , pitch evolution across time , and pitches with similar characteristics .we also address implementation of the mbc method as well as the selection of the clusters and the stability of this clustering method . in section 4 ,we propose a novel algorithm for classifying pitches based on their characteristics , before concluding with future work in section 5 .publicly available pitchf / x data is available from several sources . ]our data subset consists of pitches thrown by roughly 900 pitchers in the 2010 and 2011 seasons ; we exclude data from before the 2010 season due to reported inconsistencies within the pitchf / x system .one important point to make is that ground truth is not known , which is why we have measured stability of our method in section 3.2 , in terms of cluster memberships .+ the raw database contains trajectory information on each pitch , including acceleration and velocity , though not the spin of each pitch , which is useful in pitch classification because it can help distinguish between different pitch types .some spin variables can be estimated using physics and a number of simplifying assumptions [ 3 ] ; the name `` psuedo - spin '' is given to these quantities due to this .+ there are several important variable definitions that we use throughout our paper and in our figures : * the pitch s * start speed * measured in miles per hour at the release point .this is measured using radar guns and is commonly acknowledged as the speed of the pitch . * the * back spin * of the baseball measured in radians per second .a positive number represents back spin .most fastballs have back spin , while off - speed pitches have a tendency to have more `` top - spin '' , or negative back spin . 
*the * side spin * of the baseball measured in radians per second .a positive number represents left - to - right spin , or a left - handed pitcher s curveball or slider .a negative number represents right - to - left spin , or the direction of a right - handed pitcher s curveball or slider .figure 1a plots these three variables for pitches thrown by barry zito ; these pitches have been classified with the system developed via a neural network classification system for each pitcher developed by mlb advanced media .specifics pertaining to the method , model , and training data used are not publicly available [ 1 ] .+ we have reason to believe barry zito threw 5 types of pitches in the 2010 and 2011 seasons due to the obvious 5 different clusters observed graphically , and according to http://www.brooksbaseball.net[brooks baseball ] , whose pitchf / x classifications are considered by some the most accurate available to the public , those pitches are a four - seam fastball , sinker , slider , curveball , and changeup ; but they do not make their method or data available , only their results [ 3 ] . moreover , we speculate that the pitches in the two - seam fastball cluster should instead be classified as sinkers since brooks baseball classifies these pitches as such . the neural net classifies brooks baseball s sinker cluster as two - seam fastballs .we investigate whether or not barry zito s two - seam fastball should be labeled as a sinker in section 4 by implementation of a new classification algorithm .we ultimately agree with brooks baseball and label the cluster as a sinker .overall , our model tends to split up four - seam fastballs and two - seam / sinker clusters differently than the neural networks method in a way that more closely resembles brooks baseball classification and empirically makes more sense .we explore this further in section 4 when we discuss the cliff lee classification example . + in addition, the neural net classification appears to have obvious misclassifications .four - seam fastballs are classified as curveballs , and curveballs / changeups are classified as sliders , which are clearly labeled in the incorrect clusters in figure 1a .mixture models for clustering rely on a straightforward generative premise : there is a series of simple probabilistic models for how an event can be generated , and a weight on which model will be used to generate the observation .this description lines up directly with pitcher intent : while we as observers may not know what type of pitch is intended , the pitcher himself makes a choice of a specific pitch type ( fastball , slider , curveball , etc ) with a basic profile : a grip and arm motion that gives the ball a desired speed , spin and trajectory . 
+ a multivariate gaussian model for any particular pitch profile makes intuitive sense .each coordinate has a mean value for example , a typical four - seam fastball might have an initial velocity of 95 miles an hour , a back spin of 100 radians per second and a side spin of 10 radians per second .the resulting pitch is then affected by many different sources of noise , both in the pitcher s delivery and in other external factors like the wind , and this noise can affect multiple pitch characteristics at once .the resulting pattern in three dimensions is an ellipsoid ; in figures 1b , 2 , 3 , and 5b , we visualize the mbc results for a variety of pitchers .one particular advantage to this approach is that we can detect when two clusters overlap , since geometry , and not just proximity , is an important factor .+ given a set of gaussian clusters and weights , this routine determines the probability that each observed pitch belongs to a given cluster as the relative probability density for the pitch if it were a member of each cluster , factoring in each cluster s relative weight most pitchers , for example , throw more fastballs than other off - speed pitches , and this is taken into account directly . in our classificationswe declare a pitch to belong to the class with the highest probability under the model .+ we use the ` mclust ` library in the ` r ` statistical programming software to perform our clustering operations , with some modifications that we describe to account for additional information . given a pre - selected number of clusters, we use an expectation - maximization algorithm to calculate the maximum likelihood estimates ( mles ) for each cluster location , shape , and weight .once we run this across a range of cluster counts , we use the bayesian information criterion ( bic ) model selection criteria to determine the optimal number of clusters in the model , which attempts to maximize the likelihood of the data while penalizing excessive numbers of parameters .+ empirically , mbc creates clusters of pitches that are more tightly confined than current pitch classifications , suggesting there are very few curveballs , changeups , or sliders misclassified .a major concern is the difference between the four - seam fastball and the sinker / two - seam fastball clusters , as seen in the figures section .we expect two - seam fastballs / sinkers to have a slightly slower start speed than four - seam fastballs because it is known four - seam fastballs to be a pitchers fastest pitch , but this is not the case with the pitchf / x neural networks classification method . however , mbc weights the velocity as an important factor between the two clusters and splits them accordingly .+ figure 1b shows the mbc as it performs for the pitches of barry zito ; the red cluster is the faster pitch and light blue cluster is the slightly slower pitch with vertical spin closer to zito s other off - speed pitches .the mbc s potential four - seam fastball cluster ( red ) has the smallest number of pitches , but other sources suggest that zito threw his four - seam fastball most often , leading us to suspect that while the labels ( from section 4 ) may need adjusting , the proper clusters are still being detected .this also may indicate there is not much difference between zito s four - seam and two - seam / sinker pitches from the batter s perspective . 
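The pipeline described above is straightforward to reproduce. The authors work with the mclust package in R; the sketch below is an analogous, not identical, pipeline in Python using scikit-learn's GaussianMixture, fitting full-covariance mixtures by EM over a range of cluster counts and keeping the fit with the lowest BIC (scikit-learn defines bic() so that smaller is better). The synthetic pitch profiles and the 1-9 range of cluster counts are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_pitch_clusters(X, k_range=range(1, 10), seed=0):
    # X has one row per pitch: (start_speed, back_spin, side_spin).
    # Fit a full-covariance Gaussian mixture by EM for each candidate k and
    # keep the model with the lowest BIC.
    best = None
    for k in k_range:
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=seed).fit(X)
        if best is None or gmm.bic(X) < best.bic(X):
            best = gmm
    labels = best.predict(X)             # hard assignment: most probable cluster
    posteriors = best.predict_proba(X)   # per-pitch membership probabilities
    return best, labels, posteriors


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic pitch types with illustrative (not measured) profiles.
    fast = rng.multivariate_normal([92, 180, 60], np.diag([1.5, 20, 20]), 300)
    curve = rng.multivariate_normal([78, -120, -40], np.diag([1.5, 20, 20]), 150)
    model, labels, _ = fit_pitch_clusters(np.vstack([fast, curve]))
    print(model.n_components, np.bincount(labels))
```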
to further support subtle advantages of mbc over the neural networks classification , we evaluate the overall stability of our clustering method in section 3.2 .+ in its original form , the model tends to choose more clusters than fewer for the pitchers we have tested . on inspecting the clusters produced , it is clear that the method is favoring relatively `` thin '' clusters , which have high internal correlations between variables , which is highly unrealistic for the physical examples we consider . limiting the model to a smaller number of pitches is not generally feasible , and while manual inspection is possible , it would be far more preferable to automate the method to remove this issue .+ we develop our own criterion for choosing the number of clusters , called the adjusted bayesian information criterion , or bic . since we observe that most pitch clusters are close to spherical , and our prior knowledge suggests that flat ellipsoidal clusters are unlikely , we are motivated to constrain the creation clusters which have high intra - cluster correlations between the three variables of interest. currently , each cluster has three parameter sets : , the cluster mean ; , the standard deviation of each dimension ; and , the intra - cluster correlation matrix .it is the terms of that need to be kept small in absolute value ( 1 or -1 indicates perfect correlation , 0 indicates no correlation ) .+ to account for this , we develop an additional penalty term for the current bic formula that adds a value proportional to each intra - cluster correlation term .using bic , if the clusters the model finds with =6 have high intra - cluster correlations , compared to =5 , than the correlation penalty term will be large , and bic will be smaller for =5 .we choose our based off of the minimum bic or bic .+ figure 1b displays the clustering for barry zito chosen with adjusted bic ( bic ) , which chooses five clusters , which agrees with the pitch number selection of both the neural network classification and brooks baseball . for this application, bic is a substantial improvement over bic , and is the method we use going forward in choosing the number of pitch clusters . + in order to fully grasp all of the benefits the mbc method can offer , we investigate how it performs when clustering pitchers other than barry zito .in fact , we have investigated mbc s performance on the entire 2010 and 2011 season , and simply show a few illustrations from this large analysis performed .+ figure 2 shows how our method is ideal for detecting how a pitch can evolve over time .the purple and black clusters represent jon lester s curveballs .the difference between the two clusters is the speed and horizontal break of the pitch . 
in 2010 , jon lester s curveball averaged 78 mph while in 2011 it averaged 76 mph with slightly more horizontal break a subtle but important difference .figure 3 visualizes how our method can also cluster pitches with high variances such as tim wakefield s knuckleball ( orange ) ; the pseudo - spins , corresponding to additional break , are considerably wider than for other pitches due to the unpredictable nature of stitch position .+ we evaluate quality of the mbc method by assessing the _ stability _ of the method in terms of how sensitive our model is to smaller sample sizes .we go about accomplishing this by taking the data for each pitcher and running the clustering on an 80% subset of the data , the remaining 20% subset , and then on the full data .next , we calculate the number of pitches in both the 80% and 20% subsets that do not change clusters compared to the full dataset . in order to be confident in our results, we repeat this process 20 times for each pitcher and find the mean and standard error .we find that after 20 samples our standard error is sufficiently small and we are confident with our stability sample mean estimates .figure 4 is the two distributions for the 80% and 20% subsets and the proportion of pitches that are in the same cluster as the full data for all pitchers .overall , for both the 80% and 20% subsets the majority of pitchers have 80% or more of their pitches clustered in the same cluster .we found that this stability holds on sample sizes as low as 100 pitches , or the average number of pitches a starting pitcher throws in one start .it is important to note that we kept the number of clusters chosen ( k ) the same on both subsets as when we ran it on the full data . +in order to have a fully automated pitch clustering and classification system , we propose a simple classification algorithm , which improves pitch clustering by assigning sensible pitch labels to the respective clusters from our heuristic .although ground truth is unknown , we make comparisons with the current classification labels in the pitchf / x database , brooks baseball classifications , and graphical visualization to determine how well our method works .we found that our classification algorithm has many strengths including consistent performance and labeling multiple clusters as the same pitch type when appropriate , such as the jon lester curveball evolution over time scenario referenced in section 3.2 and visualized in figure 2 .our clustering algorithm appears to have similar classification results as brooks baseball ( which is the closest to the `` truth '' that is , a pitcher s actual choice of pitch to throw against which we can compare ) .+ our algorithm classifies and names a key subset of pitches : four - seam fastballs , two - seam fastballs , sinkers , change - ups , cut - fastballs , sliders , curveballs , and knuckleballs . for each classification , the algorithm begins by assigning the cluster mean with the highest starting velocity as a four - seam fastball . 
for each additional cluster ,the algorithm goes through a series of constraints that are derived from each cluster s mean and variance to determine the label for each cluster .for example , if a cluster mean has the same side spin direction as the four - seam fastball then it checks if the speed differs from the four - seam fastball by more than 6 mph and if the side spin varies less than 60 rotations per second from the four - seam fastballs .if it does , it assigns the cluster as a change - up .if not , it determines if the side or back spin difference from the four - seam fastball is greater .if the side spin difference is the greater of the two , it assigns the cluster as a two - seam fastball ; if the back spin difference is larger , it assigns the cluster as a sinker .the rest of the classification algorithm follows similar decision processes .+ we evaluate how well our classification algorithm performs by taking a sample of various starting and relief pitchers with differing pitch repertoires .we found after analyzing and comparing the results to brooks baseball and the neural network classification system , in 23 of the 25 scenarios our classification algorithm empirically appear correct and has similar classification results as brooks baseball .the two pitchers that the algorithm does not classify correctly are barry zito and derek lowe .it interchanges their sinker and four - seam fastball clusters because in the data sample , the measured speed of the sinkers are bigger than those for the four - seam fastball , a characteristic that is not commonly true for other pitchers .+ in figure 5 , we visualize cliff lee s mlb - am neural network classification compared to our method .of particular note , our method separates and labels the two - seam and four - seam fastball by assigning the four - seam fastball to the fastest pitch with the least amount of back and top spin relative to lee s other off - speed pitches .our method splits the two fastballs similar to brooks baseball , but brooks baseball instead labels the two - seam fastball as a sinker .we also are able to observe that our method s slider ( brown ) cluster is clearly defined and classified with no other obvious misclassified pitches .the neural network classification has obvious misclassification in the slider cluster with pitches labeled as changups , curveballs , and cut - fastballs .our method clearly improves both the clustering and classification of cliff lee compared to the neural network classification .we have proposed a new clustering method and classification algorithm and tested both approaches using the pitchf / x database for 2010 - 2011 .our analysis illustrates better clustering than the current neural network method based upon our plots and measuring the stability of our method .furthermore , based upon our model based clustering method , we recommend a simple algorithm to classify each pitch based on the individual pitcher , which performs extremely well in most situations .its strongest features are correcting any obvious misclassification in the neural network model , accounting for pitch evolution over time , and performing well for pitches with comparatively high internal variance .our method also performs well in a highly debated topic in mlb , namely distinguishing two - seam and four - seam fastballs .+ these improvements suggest others can be made in assigning pitchers themselves to clusters , a problem highly motivated by the need to assess a pitcher s likely performance against unfamiliar 
players or teams . in these situations ,most hitters likely have never faced the current pitcher , and thus , there is no or very little data to leading to inference about the hitter s history against the current pitcher .placing pitchers into similar groups and looking for relationships with groups of batters would be invaluable to mlb .this information can lead to advanced and more detailed scouting reports of what type of pitches are a hitter s strength or weakness .moreover , since we have prior information about many pitchers , we can utilize advanced bayesian clustering methods and optimize the choice of for better and more stable results .* references * + 1 . foster , adam ._ scouting with pitchf / x _ , june 12 , 2012 .http://www.baseballprospectus.com/ + article.php?articleid=17327 .2 . fast , mike ._ how to build a pitch database _ , august 23 , 2012 .http://fastballs.wordpress.com/2007/08/23/ + how - to - build - a - pitch - database/ 3 .nathan , alan m. _ the physics of baseball _ , accessed september 20 , 2012 .http://webusers.npl.illinois.edu/ - nathan / pob/..named pitch types with corresponding colors . [ cols="^,^,^,^,^,^",options="header " , ] the pitches thrown by barry zito .figure 1a ( left ) displays the mlb advanced media classification developed via a neural network system .there are obvious misclassifications : a small number of four - seam fastballs ( which should be red ) are classified as sliders ( brown ) , as are some curveballs ( black ) and changeups ( green ) .figure 1b ( right ) displays the mbc model and tightly clusters the pitches , as well as splits up the four - seam and sinker ( light blue ) clusters in an empirically sensible way.,title="fig : " ] the pitches thrown by barry zito .figure 1a ( left ) displays the mlb advanced media classification developed via a neural network system .there are obvious misclassifications : a small number of four - seam fastballs ( which should be red ) are classified as sliders ( brown ) , as are some curveballs ( black ) and changeups ( green ) .figure 1b ( right ) displays the mbc model and tightly clusters the pitches , as well as splits up the four - seam and sinker ( light blue ) clusters in an empirically sensible way.,title="fig : " ] the pitches thrown by jon lester classified using model - based clusters , which shows two distinct clusters for curveballs ( purple and black ) corresponding to a change over time .the difference between the two clusters is the speed and horizontal break of the pitch .this is one example of how the mbc model can detect subtle but important pitch evolution differences . ]the pitches thrown by tim wakefield classified using model - based clusters .the mbc model can also detect and classify pitches well with relatively higher variances like the pseudo - spins for tim wakefield s ( spinless ) knuckleball ( orange ) , compared to the fastball ( red ) and curveball ( black ) . ]80% and 20% stability : percent of pitches in the same cluster in subset and full dataset , across 20 replications of the procedure .for both the 80% and 20% subsets the majority of pitchers have 80% or more of their pitches clustered in the same cluster .we also found that this stability holds on sample sizes as low as 100 pitches , or the average number of pitches a starting pitchers throws in one start . 
]cliff lee : figure 5a ( left ) displays mlb s classification system developed via a neural network classification system .as before , there are obvious misclassifications , but our methods clusters are clearly defined and classified with no other obviously misclassified pitches .the mbc model in figure 5b ( right ) separates and labels the two - seam ( grey ) and four - seam ( red ) fastballs by assigning the four - seam fastball to the fastest pitch with the least amount of back and top spin relative to lees other off - speed pitches .this method agrees with the manually corrected data from brooks baseball.,title="fig : " ] cliff lee : figure 5a ( left ) displays mlb s classification system developed via a neural network classification system .as before , there are obvious misclassifications , but our methods clusters are clearly defined and classified with no other obviously misclassified pitches .the mbc model in figure 5b ( right ) separates and labels the two - seam ( grey ) and four - seam ( red ) fastballs by assigning the four - seam fastball to the fastest pitch with the least amount of back and top spin relative to lees other off - speed pitches .this method agrees with the manually corrected data from brooks baseball.,title="fig : " ]
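for concreteness , the label - assignment rules quoted earlier ( four - seam fastball = fastest cluster , then the change - up / two - seam / sinker branch ) can be sketched as below . the 6 mph and 60 rotations - per - second thresholds are the ones stated in the text ; the field names and the handling of the remaining pitch types are hypothetical .

```python
# Decision-rule sketch for clusters sharing the four-seam fastball's side-spin
# direction; cluster means are dicts with 'speed', 'side_spin', 'back_spin'.
def label_cluster(mean, ff_mean):
    same_side_direction = mean["side_spin"] * ff_mean["side_spin"] > 0
    if not same_side_direction:
        return "other pitch types (rules not shown in this sketch)"
    speed_gap = abs(ff_mean["speed"] - mean["speed"])
    side_gap = abs(ff_mean["side_spin"] - mean["side_spin"])
    back_gap = abs(ff_mean["back_spin"] - mean["back_spin"])
    if speed_gap > 6.0 and side_gap < 60.0:
        return "change-up"
    return "two-seam fastball" if side_gap > back_gap else "sinker"

# the four-seam fastball itself is the cluster mean with the highest speed:
# ff_mean = max(cluster_means, key=lambda m: m["speed"])
```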
the pitchf / x database has allowed the statistical analysis of major league baseball ( mlb ) to flourish since its introduction in late 2006 . using pitchf / x , pitches have been classified either by hand , requiring considerable effort , or by neural network clustering and classification , which is often difficult to interpret . to address these issues , we use model - based clustering with a multivariate gaussian mixture model and an appropriate adjustment factor as an alternative to current methods . furthermore , we describe a new pitch classification algorithm based on our clustering approach to address the problem of pitch misclassification . we illustrate our methods on various pitchers from the pitchf / x database , covering a wide variety of pitch types .
traditional machine learning methods usually assume that there are sufficient training samples to train the classifier .however , in many real - world applications , the number of labeled samples are always limited , making the learned classifier not robust enough .recently , cross - domain learning has been proposed to solve this problem , by borrowing labeled samples from a so called source domain " for the learning problem of the target domain " in hand .the samples from these two domains have different distributions but are related , and share the same class label and feature space .two types of domain transfer learning methods have been studied : * classifier transfer * method which learns a classifier for the target domain by the target domain samples with help of the source domain samples , while * cross domain data representation * tries to map all the samples from both source and target domains to a data representation space with a common distribution across domains , which could be used to train a single domain classifier for the target domain . in this paper , we focus on the cross domain representation problem .some works have been done in this field by various data representation methods .for example , blitzer et al . proposed the structural correspondence learning ( scl ) algorithm to induce correspondence among features from the source and target domains , daume iii proposed the feature replication ( fr ) method to augment features for cross - domain learning .pan et al . proposed transfer component analysis ( tca ) which learns transfer components across domains via maximum mean discrepancy ( mmd ) , and extended it to semi - supervised tca ( sstca ) .recently , sparse coding has attracted many attention as an effective data representation method , which represent a data sample as the sparse linear combination of some codewords in a codebook .most of the sparse coding algorithms are unsupervised , due to the small number of labeled samples .some semi - supervised sparse coding methods are proposed to utilize the labeled samples and significant performance improvement has been reported . in this case, it would be very interesting to investigate the use of cross - domain representation to provide more available labeled samples from the source domain .to our knowledge , no work has been done using the sparse coding method to solve the cross - domain problem to fill in this gap , in this paper , we propose a novel cross - domain sparse coding method to combine the advantages of both sparse coding and cross - domain learning . to this end , we will try to learn a common codebook for the sparse coding of the samples from both the source and target domains . to utilize the class labels ,a semi - supervised regularization will also be introduced to the sparse codes .moreover , to reduce the mismatch between the distributions of the sparse codes of the source and target samples , we adapt the mmd rule to sparse codes .the remaining of this paper is organized as follows : in section [ sec : crodomsc ] , we will introduce the formulations of the proposed cross - domain sparse coding ( crodomsc ) , and its implementations .section [ sec : exp ] reports experimental results , and section [ sec : concl ] concludes the paper .in this section , we will introduce the proposed crodomsc method .we denote the training dataset with samples as , where is the number of data samples , is the feature vector of the -th sample , and is the feature dimensionality . 
it is also organized as a matrix \in \mathbb{r}^{d\times n} ] , where the -th column is the -th codeword and is the number of codewords in the codebook , sparse coding tries to reconstruct by the linear reconstruction of the codewords in the codebook as , where ^\top \in \mathbb{r}^{k} ] is the sparse code matrix , with its -th collum the sparse code of -th sample .semi - supervised sparse coding regularization : : in the sparse code space , the intra - class variance should be minimized while the inter - class variance should be maximized for all the samples labeled , from both target and source domains .we first define the semi - supervised regularization matrix as \in \{+1,-1,0\ } ^{n\times n} ] with the domain indicator of -th sample defined as and . by summarizing the formulations in ( [ equ : sc ] ) , ( [ equ : semisupervised ] ) and ( [ equ : mismatch ] ) , the crodomsc problem is modeled as the following optimization problem : + \gamma tr[v \pi v^\top ] + \alpha \sum_{i:{{\textbf{x}}}_i\in \mathcal{d } } \|{{\textbf{v}}}_i\|_1 \\ & = \|x - uv\|^2_2 + tr[v e v^\top ] + \alpha \sum_{i:{{\textbf{x}}}_i\in \mathcal{d } }\|{{\textbf{v}}}_i\|_1 \\ s.t.&\|{{\textbf{u}}}_k\|\leq c,~k=1,\cdots , k \end{aligned}\ ] ] where . since direct optimization of ( [ equ: objective ] ) is difficult , an iterative , two - step strategy is used to optimize the codebook and sparse codes alternately while fixing the other one . by fixing the codebook , the optimization problem ( [ equ : objective ] ) is reduced to + \alpha \sum_{i:{{\textbf{x}}}_i\in \mathcal{d } } \|{{\textbf{v}}}_i\|_1 \end{aligned}\ ] ] since the reconstruction error term can be rewritten as , and the sparse code regularization term could be rewritten as $ ] , ( [ equ : object_v ] ) could be rewritten as : when updating for any , the other codes for are fixed .thus , we get the following optimization problem : with .the objective function in ( [ equ : scv ] ) could be optimized efficiently by the modified feature - sign search algorithm proposed in . by fixing the sparse codes and removing irrelevant terms , the optimization problem ( [ equ : objective ] ) is reduced to the problem is a least square problem with quadratic constraints , and it can be solved in the same way as .the proposed * cross domain sparse coding * algorithm , named as * crodomsc * , is summarized in algorithm [ alg : crodomss ] .we have applied the original sparse coding methods to the samples from both the source and target domains for initialization .* input * : training sample set from both source and target sets ; initialize the codebooks and sparse codes for samples in by using single domain sparse coding . update the sparse code for by fixing and other sparse codes for by solving ( [ equ : scv ] ) . update the codebook by fixing the sparse code matrix by solving ( [ eqe : object_u ] ) .* output * : and .when a test sample from target domain comes , we simply solve problem ( [ equ : scv ] ) to obtain its sparse code .in the experiments , we experimentally evaluate the proposed cross domain data representation method , crodomsc . 
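a rough sketch of the alternating optimisation is given below . it substitutes a proximal - gradient ( ista ) update for the modified feature - sign search in the sparse - code step , and a plain least - squares update with column - norm clipping for the lagrange - dual codebook solver , so it should be read as an illustration of the structure of crodomsc rather than as the exact implementation .

```python
# Structure of crodomsc: alternate between the sparse codes V and codebook U
# for the objective ||X - UV||^2 + tr(V E V^T) + alpha * |V|_1, ||u_k|| <= c,
# where E combines the semi-supervised term and the MMD (domain) term.
import numpy as np

def soft_threshold(Z, tau):
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def mmd_matrix(domain):                      # domain[i] = 's' (source) or 't' (target)
    n_s = np.sum(domain == 's'); n_t = np.sum(domain == 't')
    pi = np.where(domain == 's', 1.0 / n_s, -1.0 / n_t)
    return np.outer(pi, pi)                  # tr(V Pi V^T) penalises the mean gap

def crodomsc(X, domain, S, k=128, alpha=0.1, gamma=1.0, c=1.0,
             outer_iter=20, inner_iter=50):
    """X: d x n data (columns = samples), S: n x n symmetric label regulariser."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((d, k)); U /= np.linalg.norm(U, axis=0)
    V = np.zeros((k, n))
    E = S + gamma * mmd_matrix(domain)
    for _ in range(outer_iter):
        # V-step: ISTA on the smooth part, soft-threshold for the l1 penalty
        L = 2.0 * (np.linalg.norm(U.T @ U, 2) + np.linalg.norm(E, 2)) + 1e-12
        for _ in range(inner_iter):
            grad = 2.0 * (U.T @ (U @ V - X)) + 2.0 * (V @ E)
            V = soft_threshold(V - grad / L, alpha / L)
        # U-step: least squares, then enforce ||u_k|| <= c
        U = X @ V.T @ np.linalg.pinv(V @ V.T + 1e-8 * np.eye(k))
        norms = np.maximum(np.linalg.norm(U, axis=0), 1e-12)
        U *= np.minimum(1.0, c / norms)
    return U, V
```

a new target - domain test sample is then encoded by one more run of the v - step with the learned codebook held fixed , exactly as in the last step of the algorithm .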
in the first experiment , we considered the problem of cross domain image classification of the photographs and the oil paintings , which are treated as two different domains .we collected an image database of both photographs and oil paintings .the database contains totally 2,000 images of 20 semantical classes .there are 100 images in each class , and 50 of them are photographs , and the remaining 50 ones are oil paintings .we extracted and concatenated the color , texture , shape and bag - of - words histogram features as visual feature vector from each image . to conduct the experiment , we use photograph domain and oil painting domain as source domain and target domain in turns .for each target domain , we randomly split it into training subset ( 600 images ) and test subset ( 400 images ) , while 200 images from the training subset are randomly selected as label samples and all the source domain samples are labeled .the random splits are repeated for 10 times .we first perform the crodomsc to the training set and use the sparse codes learned to train a semi - supervised svm classifier .then the test samples will also be represented as sparse code and classified using the learned svm .we compare our crodomsc against several cross - domain data representation methods : sstca , tca , fr and scl .the boxplots of the classification accuracies of the 10 splits using photograph and oil painting as target domains are reported in figure [ fig : figpho ] . from figure[ fig : figpho ] we can see that the proposed crodomsc outperforms the other four competing methods for both photograph and oil painting domains .it s also interesting to notice that the classification of the fr and sci methods are poor , at around 0.7 .sstca and tca seems better than fr and sc but are still not competitive to crodomsc . in the second experiment , we will evaluate the proposed cross - domain data representation method for the multiple user based spam email detection .a email dataset with 15 inboxes from 15 different users is used in this experiment .there are 400 email samples in each inbox , and half of them are spam and the other half non - spam . due to the significant differences of the email source among different users , the email set of different users could be treated as different domains . to conduct the experiment , we randomly select two users inboxes as source and target domains . the target domain will further be split into test set ( 100 emails ) and training set ( 300 emails , 100 of which labeled , and 200 unlabeled ) .the source domain emails are all labeled .the word occurrence frequency histogram is extracted from each email as original feature vector .the crodomsc algorithm was performed to learn the sparse code of both source and target domain samples , which were used to train the semi - supervised classifier .the target domain test samples were also represented as sparse codes , which were classified using the learned classifier .this selection will be repeated for 40 times to reduce the bias of each selection .figure [ fig : figspam ] shows the boxplots of classification accuracies on the spam detection task . 
as we can observed from the figure , the proposed crodomsc always outperforms its competitors .this is another solid evidence of the effectiveness of the sparse coding method for the cross - domain representation problem .moreover , sstca , which is also a semi - supervised cross - domain representation method , seems to outperform other methods in some cases .however , the differences of its performances and other ones are not significant .in this paper , we introduce the first sparse coding algorithm for cross - domain data representation problem .the sparse code distribution differences between source and target domains are reduced by regularizing sparse codes with mmd criterion . moreover ,the class labels of both source and target domain samples are utilized to encourage the discriminative ability .the developed cross - domain sparse coding algorithm is tested on two cross - domain learning tasks and the effectiveness was shown .this work was supported by the national key laboratory for novel software technology , nanjing university ( grant no .kfkt2012b17 ) .j. blitzer , r. mcdonald , and f. pereira .domain adaptation with structural correspondence learning . in _2006 conference on empirical methods in natural language processing , proceedings of the conference _ , pages 120 128 , 2006 .
sparse coding has shown its power as an effective data representation method . however , up to now , sparse coding approaches have been limited to the single - domain learning problem . in this paper , we extend sparse coding to the cross - domain learning problem , which tries to learn from a source domain for a target domain with a significantly different distribution . we impose the maximum mean discrepancy ( mmd ) criterion to reduce the cross - domain distribution difference of the sparse codes , and also regularize the sparse codes by the class labels of the samples from both domains to increase their discriminative ability . the encouraging experimental results of the proposed cross - domain sparse coding algorithm on two challenging tasks image classification across photograph and oil painting domains , and multi - user spam detection show the advantage of the proposed method over other cross - domain data representation methods .
cooperative diversity is a highly promising technique for coverage extension and reliability improvement of wireless networks .it exploits the additional degrees of freedom of the fading environment , which is introduced by the spatially distributed multiple relays utilizing either amplify - and - forward ( af ) or decode - and - forward ( df ) relaying . proposed recently in , the opportunisticrelaying is a simple yet efficient cooperative diversity protocol , whose diversity - multiplexing tradeoff is identical to that of the more complex distributed space - time coding cooperative schemes . by selecting a single best " relay among the all available relays ,the opportunistic relaying achieves full spatial diversity while maintaining the spectral efficiency of a two - hop communication link .the outage and error probabilities of the opportunistic relaying systems have been studied in - , which clearly demonstrate its excellent performances . however , there are some design issues for which the outage and error probabilities criteria are not sufficient , such as , packet or slot lengths and latencies , switching rates , power and bandwidth allocation or , decision criterion for changing adaptive modulation levels. these issues can be addressed by investigating the system s second - order outage statistics . to the best of authors knowledge ,such statistics that describe the outage events of cooperative systems , such as , the average outage rate ( aor ) and average outage duration ( aod ) , have not been studied previously .we propose the aor and the aod be defined with respect to the capacity outage events derived from information - theoretic capacity of the opportunistic system .similar definition of the outage statistics has been applied over mimo systems in . in this letter , we derive exact expressions for the aor and the aod of opportunistic systems , employing either af or df relaying in rayleigh fading environment .similarly to , we consider a typical half - duplex dual - hop communication scenario , where the communication between the source and the destination is possible only via relays ( denoted by , ) , as the direct path is assumed blocked by an intermediate wall . in the beginning of each slot ( divided into two equal sub - slots ) , a single best " opportunistic relay is selected out of the possible dual - hop paths for relaying the communication between and . during the first sub - slot , transmits its signal over the first hop , while the selected relay forwards that signal toward over the second hop during the second sub - slot .the channel is exposed to rayleigh fading and is assumed to remain constant during the entire slot duration . without loss in generality , we assume that and the selected best " relay transmit with equal powers , rendering the total available transmission power to . denoting the rayleigh - faded channel gains of hops and during a given slot by and , the received signal - to - noise ratios ( snrs ) at the relay and at the destination expressed as and , with as the noise power .specifying the average squared channel gains as = \omega_{sk} ] , the average received snrs at and are respectively given by and .the relay selection is based on the estimation of the end - to - end performance over the dual - hop path using the _ selection variable _ , which is estimated separately by each relay in the beginning of each slot from its channel state information ( csi ) . 
to facilitate channel state estimation by the relays , and previously exchange short control packets . in the beginning of slot , the best" relay is selected in a distributed manner by using the _ selection policy _ : in the beginning of slot , each df relay estimates the selection variable which actually evaluates the minimal instantaneous received snr among the two hops , .we assume fixed gain af relays that amplify the received signal from the first hop by and forward it to over the second hop . in the beginning of slot , each af relay estimates the selection variable where .actually , each af relay evaluates the dual - hop snr that is relayed over and received at .note that fixed - gain ( e.g. , semi - blind ) af relays have considerably simpler but yet comparably close performance to that of the variable gain af relays .we consider 2-dimensional isotropic scattering around source , relays and destination , all of which are assumed to be mobile and have no line - of - sight with other stations .thus , each ( ) hop behaves as a mobile - to - mobile rayleigh channel .its channel gain , ( , is a time - correlated rayleigh random process with known statistical properties ( e.g. , the doppler spectrum ) .if a station at one end of a hop is fixed , the mobile - to - mobile rayleigh - fading hop is transformed into the classic " fixed - to - mobile rayleigh - fading hop .the time derivative ( ) is independent from the channel gain ( ) , and follows the gaussian probability distribution function ( pdf ) with zero mean and variance [ [ ref7 ] , eq .( a5 ) ] [ [ ref10 ] , eq . ( 39 ) ] in ( [ 3a])-([3b ] ) , , and denote the maximum doppler rates of , and relay , respectively .the doppler rate ( and consequently the aor ) is expressed in the unit . expressing the slot duration in seconds , both the doppler rate ( now becoming the doppler frequency ) and the aorare expressed in hz . in a given slot ,the opportunistic relaying system experiences a capacity outage event when the mutual information of the dual - hop path over the best " relay drops below some predefined spectral efficiency , where thus , the resulting time - varying capacity suffers from the random occurrence of capacity outage events , during which the channel is unable to support the specified . transforming ( [ 4 ] ) , the capacity outage event at slot occurs if where the outage threshold is .note , can be varied by varying ( as in ) or ( as in this work ) , but the functional dependencies of the increasing or the decreasing ( in db ) have almost same shapes .using a similar approach to that presented in for deriving level crossing rates ( lcr ) of classic " selection diversity systems , the joint pdf of and is expressed as where denotes the joint pdf of selection variable and its time derivative . 
assuming selected best " relay , denotes the conditional probability that the selection variable of drops below , where denotes the cumulative distribution function ( cdf ) of .the aor is evaluated based on the standard lcr definition [ [ ref6 ] , chapter 1 ] , yielding where denotes the aor of the dual - hop path over the relay .the aod is then given by we now focus on the random process , defined by ( [ 1 ] ) .communication through the relay falls in outage if either one of the two hops fail .thus , the pdf of is given by since both hops follow the rayleigh pdf , is also determined to follow the rayleigh pdf , with , and the respective cdf .using and the independence of the channel gains and their respective time derivatives , the required joint pdf is found as in rayleigh fading , ( [ 14 ] ) specializes to thus rendering and as independent rvs , where is given by ( [ 12 ] ) and with and denoting zero mean gaussian pdfs with variances ( [ 3a ] ) and ( [ 3b ] ) , respectively .thus , the aor of dual - hop path over is obtained as with given by ( [ 12 ] ) . inserting ( [ 17 ] ) into ( [ 9 ] ) and ( [ 10 ] ) , we obtain the aor and the aod of df relaying system . the exact expressions for the cdf and the aor of the random process , defined by ( [ 2 ] ) , are respectively given by [ [ ref9 ] , eq .( 9 ) ] and [ [ ref10 ] , eq . ( 19 ) ] , as where is the first - order modified bessel function of the second kind .note that ( [ 19 ] ) can be efficiently and accurately evaluated by applying the gauss - hermite quadrature rule [ [ ref11 ] , eq .( 25.4.46 ) ] . combining ( [ 18 ] ) and ( [ 19 ] ) into ( [ 9 ] ) and ( [ 10 ] ) , we obtain the aor and the aod of af relaying system .note , if source and destination are fixed , the approach presented in this section can be applied to derive analogous analytic expressions for aors and aods for more general fading channels ( such as , rice and nakagami- models ) , because such fixed - to - mobile hops have known second - order statistical properties .in this section , we present illustrative examples for the normalized aor ( fig .1 ) an the normalized aod ( fig .2 ) in function of of opportunistic relaying system employing either df or af relays . the source and the destination are fixed ( ) , whereas all the relays are mobile and introduce same maximum doppler rates , .the aor and the aod are normalized with respect to the doppler rate as and .the average squared channel gains of all hops are equal to ( i.e. , , ) , thus rendering total available transmission power equal to .the spectral efficiency is set to bps / hz . the monte carlo simulations clearly validate our derived analytical results .1 a. bletsas , a. khisti , d. p. reed and a. lippman , a simple cooperative diversity method based on network path selection , " _ ieee j. select . areas .commun . _ , vol .3 , pp . 659 - 672 , mar .2006 j. n. laneman and g. w. wornell , distributed space - time coded protocols for exploiting cooperative diversity in wireless networks , " _ ieee trans .inform . theory _10 , pp . 2415 - 2525 , oct .2003 [ ref3 ] a. bletsas , h. shin , and m. z. win , cooperative communications with outage - optimal opportunistic relaying , " _ ieee trans .wireless commun ._ , vol . 6 , no .9 , pp . 3450 - 3460 , sept .2007 d. s. michalopoulos and g. k. karagiannidis , performance analysis of single relay selection in rayleigh fading " , _ ieee trans .wireless commun ._ , vol . 7 , no . 10 , pp. 
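the analytical aor and aod can be cross - checked by a direct monte carlo simulation of the df selection policy , sketched below . the sum - of - sinusoids channel generator and all numerical defaults are illustrative choices of ours , not the simulator used for the figures .

```python
# Monte-Carlo estimate of AOR/AOD for DF opportunistic relaying in Rayleigh
# fading: count downward crossings of the outage threshold and the fraction of
# time spent in outage by the best relay's selection variable.
import numpy as np

def rayleigh_process(f_d, t, n_paths=64, rng=None):
    """Complex gain with an (approximate) Jakes Doppler spectrum, E|h|^2 = 1."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform(0, 2 * np.pi, n_paths)
    phi = rng.uniform(0, 2 * np.pi, n_paths)
    h = np.exp(1j * (2 * np.pi * f_d * np.outer(t, np.cos(alpha)) + phi))
    return h.sum(axis=1) / np.sqrt(n_paths)

def aor_aod_df(n_relays=3, snr_avg=10.0, rate=2.0, f_d=30.0,
               duration=50.0, dt=1e-3, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(0.0, duration, dt)
    gamma_th = 2 ** (2 * rate) - 1            # half-duplex: C = 0.5*log2(1+snr)
    best = np.full(t.shape, -np.inf)
    for _ in range(n_relays):
        g1 = snr_avg * np.abs(rayleigh_process(f_d, t, rng=rng)) ** 2
        g2 = snr_avg * np.abs(rayleigh_process(f_d, t, rng=rng)) ** 2
        best = np.maximum(best, np.minimum(g1, g2))   # DF selection variable
    outage = best < gamma_th
    down_crossings = np.sum(~outage[:-1] & outage[1:])
    aor = down_crossings / duration                   # outage events per second
    aod = outage.mean() / aor if aor > 0 else np.inf  # mean outage duration
    return aor, aod
```

for the af case one would only replace the per - relay selection variable by the fixed - gain dual - hop snr , leaving the crossing and duration bookkeeping unchanged .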
3718 - 3724 , oct .2008 k .- s .hwang , y .- c .ko , and m .- s .alouini , outage probability of cooperative diversity systems with opportunistic relaying based on decode - and - forward " , _ ieee trans .wireless commun ._ , vol . 7 , no . 12 , dec .2008 d. michalopoulos , a. lioumpas , g. k. karagiannidis and r. schober , selective cooperative relaying over time - varying channels " , _ submitted to ieee trans ._ , http://arxiv.org/abs/0905.0564v1 b. o. hogstad , m. patzold , n. youssef , v. kontorovitch , exact closed - form expressions for the distribution , the level - crossing rate , and the average duration of fades on the capacity of ostbc - mimo channels " , _ ieee trans . veh .2 , pp . 1011 - 1016 , feb .2009 [ ref6]w .c. jakes , microwave mobile communications , 2nd ed .piscataway , nj : _ ieee press _ , 1994 .s. akki and f. haber , a statistical properties of mobile - to - mobile land communication channel , " _ ieee trans . veh .826 - 831 , nov .1994 x. dong and n. c. beaulieu , average level crossing rate and average fade duration of selection diversity " , _ ieee commun .letters _ , vol . 5 , no . 10 , oct .2001 [ ref9]m . o. hasna and malouini , a performance study of dual - hop transmissions with fixed gain relays " , _ ieee trans .wireless commun .3 , no . 6 , pp .1963 - 1968 , nov .2004 [ ref10]c.s .patel , g.l .stuber and t.g .pratt , statistical properties of amplify and forward relay fading channels , " _ ieee trans . veh .1 , pp . 1 - 9 ,[ ref11]m .abramowitz and i. a. stegun , _ handbook of mathematical functions with formulas , graphs , and mathematical tables _new york : dover , 1970
opportunistic relaying is a simple yet efficient cooperation scheme that achieves full diversity and preserves the spectral efficiency among spatially distributed stations . however , the stations ' mobility causes temporal correlation of the system s capacity outage events , which gives rise to important second - order outage statistics , such as the average outage rate ( aor ) and the average outage duration ( aod ) . this letter presents exact analytical expressions for the aor and the aod of an opportunistic relaying system , which employs a mobile source and a mobile destination ( without a direct path ) , and an arbitrary number of ( fixed - gain amplify - and - forward or decode - and - forward ) mobile relays in a rayleigh fading environment . index terms : average outage rate , average outage duration , opportunistic relaying , doppler effect , rayleigh fading
vehicular ad - hoc networks ( vanets ) are formed by a collection of vehicles , wirelessly connected to each other to form a communication network .these networks are typically one dimensional as they dynamically and rapidly self - organise on roads and highways , although communication with road side access point infrastructure is possible under different scenarios . primarily ,vehicle to vehicle ( v2v ) and vehicle to infrastructure ( v2i ) communications involve safety related issues , such as collision warnings aimed at preventing imminent car accidents through broadcasting and relaying messages , thereby increasing local situation awareness .v2v communications can also be exploited for applications such as intelligent cruise control or platooning , traffic information and management , as well as internet access and advertising .the ieee 802.11p standard defines a wireless area network ( wlan ) for dedicated short range communication ( dsrc ) among vehicles .the standard defines protocols for the physical and mac layers , has a 75 mhz bandwidth allocated at 5.9 ghz , and is the prime candidate currently being deployed in order to get ieee 802.11p equipped cars on the roads . under the standard , it is possible to bundle together information on position , speed , direction , brake information , steering wheel angle , threat - events , etc . , and append them to a _basic safety message _ ( bsm ) which is then broadcasted .vehicles within range can then actuate on this information , edit it , or append to the content message , and re - broadcast , thus locally flooding the network .flooding algorithms are commonplace in ad hoc networks , however here the algorithm is also spatially constrained to run along a one - dimensional road network .such networks are typically modelled as random geometric graphs formed by a 1d poisson point process ( ppp ) and a communication range ( see fig.[fig : graph ] ) directly related to snr thus lending themselves to mathematical analysis and engineering .a major challenge in vanets is the timeliness and latency in which information must arrive to be useful to a fast approaching vehicle .hop - count statistics find application in a variety of other settings , e.g. , in gas pipe sensor networks , nanowires , and map navigation problems in general .therefore , hop - count statistics have been extensively studied in 1d and 2d networks .they were first studied by chandler , who looked at the probability that two wireless network nodes can communicate in hops .such information can further assist the calculation of network centrality measures , or achieve range - free localisation . )used to model a vanet .the geodesic length between the two extreme nodes is . ] in this paper we are concerned with the statistical properties of the _ shortest _ multihop paths , also referred to as _geodesics _ , between nodes in 1d vanets .to this end , we calculate for the first time the first few moments of the number of geodesics between nodes in a 1d vanet , as a function of the euclidean distance between them and the vehicle density .clearly for , the shortest possible path is of length hops , employing just relay nodes , thus defining a fundamental upper limit on the latency involved with such transmissions . 
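as a quick illustration of the quantities studied below , the following sketch draws a 1d poisson network on [ 0 , d ] , links any two nodes closer than the range r0 , and returns the hop distance and the number of geodesics between the two end nodes by breadth - first search . parameter names and defaults are ours .

```python
# Count the geodesics (shortest multihop paths) between the end nodes of a
# 1-d Poisson random geometric graph, via BFS shortest-path counting.
import numpy as np
from collections import deque

def count_geodesics(lam=2.0, d=10.0, r0=1.0, rng=None):
    rng = rng or np.random.default_rng()
    relays = np.sort(rng.uniform(0.0, d, rng.poisson(lam * d)))
    pos = np.concatenate(([0.0], relays, [d]))     # source, relays, destination
    n = len(pos)
    dist = np.full(n, -1); paths = np.zeros(n, dtype=np.int64)
    dist[0], paths[0] = 0, 1
    q = deque([0])
    while q:                                       # BFS with path counting
        u = q.popleft()
        for v in np.nonzero(np.abs(pos - pos[u]) <= r0)[0]:
            if v == u:
                continue
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
            if dist[v] == dist[u] + 1:
                paths[v] += paths[u]
    return dist[-1], paths[-1]                     # hop count k, number of geodesics
```

averaging the returned path count over many independent draws gives the empirical statistics that the closed - form moments derived below can be compared against .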
onthe other hand , due to the broadcast nature of wireless transmissions , multiple bsms containing similar information may arrive via different -hop paths almost simultaneously .it is therefore of interest to understand the statistical properties of the number of -hop paths , as a function of , , and .such statistics can be used to enhance throughput , validate threat events , protect against collusion attacks , infer location information , and also limit redundant broadcasts thus reducing interference .must involve at least 1 relay node located in each of the shaded lenses " . ]consider a source node located at the origin , and a destination node a distance to the right of along the positive real line .further , consider a 1d ppp of density vehicles per unit length forming on the real line , with each point ( node ) representing a vehicle along an infinite stretch of road .nodes are then connected via communication links whenever their euclidean distance is less than a predefined communication range ( see fig .[ fig1 ] ) , thus forming a 1d network .the source and destination nodes are unable to communicate directly and must employ multihop communications in order to share information .depending on the density of vehicles , there may be none , one , or several multihop paths connecting and .the _ length _ of these paths is the number of hops required for a message to pass between the two vehicles .it follows that the length of the _ shortest _ multihop paths is .therefore , paths of length are geodesic . running a breadth - first search ( bfs ) algorithm can find all geodesic paths in linear - time since the underlying graph is neither directed , nor weighted .let the set of all geodesics be described by .then the number of geodesic paths is .\end{split}\ ] ] monte carlo simulations of the pmf of are shown in fig .we will first demonstrate the difficulties with obtaining the distribution of for the case of , and then calculate its first few moments for the general case of .the cases of and are trivial and therefore omitted . for and such that geodesics are of length , and respectively .let and as in fig .[ fig1 ] such that there are two sub - domains and within which relay nodes must be situated in order for a 3-hop path to exist .we call these sub - domains lenses , since in two dimensions they are formed by the intersection of two equal disks .this is because the first relay node located at a maximum distance of can form a 3-hop path by connecting with any node in ] such that the two lenses are of equal widths .the number of relay nodes in is therefore a poisson random variable with mean , where we have defined .moreover , for each relay node in there corresponds a subset of within which a second relay node must be located as to form a 3-hop path from to . labelling the relays in descending distances from the source ( i.e. , ) we can identify subsets \subseteq l_2 ] , for with and can be seen that a relay node in connects to relays in .we therefore arrive at a simple expression for the number of shortest 3-hop paths where the is the number of relays in and are thus poisson random variables with mean ( see fig .[ fig : windows ] ) .the widths are also random variables however must satisfy the constraint that , i.e. 
, the are correlated .nodes in the left lens , and the corresponding sub - domains in the right lens .note that is within range from all three nodes and therefore a fourth node located in will connect to all in , to form three -hop paths from to .in contrast , is in range of nodes 1 and 2 ( not 3 ) , is only in range of node 1 ( not 2 or 3 ) , and is not in range from any of the nodes in . ]the pmf of can be expressed as follows : \!=\ ! \mathbb{e}_{n_1,\mathbf{w}}\big [ \mathbb{p } [ \sigma_{3 } \!=\ ! x \big| n_1 , \mathbf{w } ] \big ] \end{split}\ ] ] where and any configuration of widths is equally likely .we can attempt to obtain the pmf of through the use of probability generating functions ( pgfs ) .namely , we have that the pgf of the random variable is given by \!=\!g_{n_i}(z^i)\!=\!e^{\lambda w_i ( z^i -1)} ] and is the indicator function equal to whenever and zero otherwise such that , and is some normalisation constant .geometrically , the indicator function defines a simplex polytope with vertices at .the integral is therefore over the surface of the -simplex .recall that the -simplex is a triangle , a tetrahedron , a 5-cell , for and respectively , and therefore is an evermore complex polytope embedded in the positive hyperoctant of for which the integration of becomes intractable .for this reason we next restrict our study to the mean and variance of .we now describe a method which allows us to analytically derive the moments of .this involves dividing up the lenses into many small parts and making a simplifying approximation about the interactions .this allows us to treat the problem as one involving many independent random variables rather than trying to account for dependence .the final step is to take the limit of the number of divisions of the lenses to infinity , in which our approximation becomes exact .we firstly split the lenses into a large number of equally sized , disjoint domains where and .the number of relay nodes in each is then a poisson distributed random variable with mean . for finite make the approximation that all relay nodes in connect with all those in , all those in connect with all in and etc .the number of shortest 3-hop paths is then given by using the independence of the we calculate the mean &=\lim_{l \rightarrow \infty}\sum_{q=1}^{l } \sum_{r=1}^{q}{\mathbb{e}}[y_{1q}]{\mathbb{e}}[y_{2r}]\\ & = \lim_{l \rightarrow \infty}\left(\frac{\lambda_3 ^ 2}{l^2}\right)\left(\frac{l^2+l}{2}\right)=\frac{\lambda_3 ^ 2}{2 } \label{expectation2 } \end{split}\ ] ] to extract the variance we first define the random variable given that the variance of a sum is equal to the sum of the variances plus the covariances we have we first evaluate the variance of .we use the independence of and and note that is a poisson random variable with mean .in addition we use the mean of the square of a poisson random variable with mean is equal to and derive using ( [ vartq ] ) we evaluate the limit of the first sum in ( [ var ] ) as for the covariance terms in eq.([var ] ) we let and use the relation -{\mathbb{e}}[t_s]{\mathbb{e}}[t_t] ] , which can be used to analyse the skewness of the distribution . as a function of calculated numerically from ensembles of realisations for a range of values of and . also illustrated is the analytical results of eq.([expectation2 ] ) and eq([variancesigma ] ) in ( a ) and ( b ) respectfully ( grey line ) . ]more generally for with integer there will be lenses of equal width .the method of ( [ sec : meanvariance ] ) can still be used . 
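before turning to general k , the 3-hop moments obtained above can be checked numerically with a short simulation : relays within range of the source are paired with relays within range of the destination whenever they are mutually in range , and the pair count equals the number of shortest 3-hop paths provided 2 r0 < d <= 3 r0 ( assumed below ; names and defaults are ours ) .

```python
# Direct simulation of the 3-hop construction: each valid (left, right) relay
# pair with |right - left| <= r0 contributes one 3-hop geodesic.
import numpy as np

def sigma3_samples(lam, d, r0, n_trials=20_000, rng=None):
    rng = rng or np.random.default_rng(1)
    out = np.empty(n_trials)
    for m in range(n_trials):
        pts = rng.uniform(0.0, d, rng.poisson(lam * d))
        left = pts[pts <= r0]                 # relays in range of the source
        right = pts[pts >= d - r0]            # relays in range of the destination
        out[m] = sum(np.sum(np.abs(right - x) <= r0) for x in left)
    return out

# out.mean() and out.var() can then be set against the closed-form mean and
# variance derived in this section.
```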
for general have where the are poisson with mean . using that we can derive the mean =\frac{\lambda_k^{k-1}}{(k-1 ) ! } \label{esigmakhop } .\end{split}\ ] ] now , letting , we recursively define by further defining , such that we can recursively define the expectation of =\frac{\lambda}{l}\sum_{r=1}^{l}{\mathbb{e}}(\tau_l^{(n ) } ) , \label{exptrec } \end{split}\ ] ] where is the mean of the poisson variables .similarly , for the variance of we calculate using the recurrence relation we have for the covariance we have -{\mathbb{e}}[t_{s}^{(n)}]{\mathbb{e}}[t_{t}^{(n ) } ] \\ & \!=\!\frac{\lambda^2}{l^2 } \big({\mathbb{e}}\big [ \sum_{p=1}^s t_{p}^{(n ) } \big(\sum_{p=1}^s t_{p}^{(n ) } \!+\!\!\ ! \sum_{p = s+1}^t \!\ !t_{t}^{(n)}\big ) \big]\!-\ ! { \mathbb{e}}[\tau_s^{(n ) } ] { \mathbb{e } } [ \tau_t^{(n ) } ] \big)\\ & \!=\!\frac{\lambda^2}{l^2 } \big ( \sum_{p=1}^s \sum_{r = s+1}^t \ ! { \mathbb{e } } [ t_{p}^{(n)}t_{r}^{(n ) } ] \!+\ ! { \mathbb{e}}[(\tau_s^{(n)})^2]-{\mathbb{e}}[\tau_s^{(n ) } ] { \mathbb{e } } [ \tau_t^{(n ) } ] \big ) \label{eq : covrecur } \end{split}\ ] ] where we have used that and split the sum into parts .we now have { \mathbb{e } } [ \tau_t^{(n)}]\\ & + \ ! { \mathbb{e}}[\tau_s^{(n)}]^2 \!+\ ! \sum_{p=1}^s \ !\sum_{r = s+1}^t \ ! { \mathrm{cov}}\left(t_{p}^{(n)},t_{r}^{(n)}\right )\frac{\lambda^2}{l^2}{\mathbb{e}}[\tau_p^{(n)}]{\mathbb{e}}[\tau_r^{(n ) } ] \big ] \label{eq : covrecurr } \end{split}\ ] ] letting and we can combine ( [ exptrec ] ) , ( [ varsumtrec ] ) , ( [ vartrec ] ) and ( [ eq : covrecurr ] ) to obtain the variance for .for example , for we have .this recursion relation allows us to derive the variance of , which involves evaluating a -fold sum of products of random variables ( see ( [ sigma4hop ] ) ) in terms of a simpler -fold sum .motivated by the multihop diffusion of information in vanets , realised through the periodic broadcasts of bsms as mandated by the dsrc standard , we have studied the statistics of the number of shortest -hop paths in 1d random networks .namely , we have derived simple closed form expressions for the mean and variance of for , provided a recursive formula for general , and have confirmed them numerically using monte carlo simulations ( see fig .[ fig3 ] ) .we argue that knowledge of such statistics can be used to enhance throughput , validate threat events , protect against collusion attacks , infer location information , and also limit redundant broadcasts thus reducing interference . as an example , consider the realistic scenario where there are about vehicles per km , transmission range is km , and a vehicle detects an event and broadcasts a bsm containing relevant safety information which should reach at least a range of km from the epicentre of the detected event .it follows that the length of the shortest multihop path is , and that the expected number of shortest paths is \!=\ !1333.33 ] vehicles should re - broadcast the original bsm .thus inverting , we can calculate the re - broadcast probability , where is the target number of shortest paths , e.g. , setting we estimate that just of vehicles should re - broadcast the original bsm .the authors would like to thank the directors of the toshiba telecommunications research laboratory for their support .this work was supported by the epsrc grant number ep / n002458/1 for the project spatially embedded networks .k. a. hafeez , l. zhao , b. ma , and j. 
mark , `` performance analysis and enhancement of the dsrc for vanet s safety applications , '' _ vehicular technology , ieee transactions on _ , vol .62 , no . 7 , pp . 30693083 , 2013 .s. najafzadeh , n. ithnin , s. a. razak , and r. karimi , `` bsm : broadcasting of safety messages in vehicular ad hoc networks , '' _ arabian journal for science and engineering _ , vol .39 , no . 2 , pp .777782 , 2014 .o. georgiou , c. p. dettmann , and j. p. coon , `` connectivity of confined 3 networks with anisotropically radiating nodes , '' _ wireless communications , ieee transactions on _ , vol . 13 , pp . 45344546 , aug 2014 .i. stoianov , l. nachman , s. madden , and t. tokmouline , `` pipenet : a wireless sensor network for pipeline monitoring , '' in _2007 6th international symposium on information processing in sensor networks _ , pp . 264273 ,april 2007 .t. w. larsen , k. d. petersson , f. kuemmeth , t. s. jespersen , p. krogstrup , j. nygrd , and c. m. marcus , `` semiconductor - nanowire - based superconducting qubit , '' _phys.rev.lett._ , vol .115 , p. 127001, 2015 . c. nguyen , o. georgiou , and y. doi , `` maximum likelihood based multihop localization in wireless sensor networks , '' in _ 2015 ieee international conference on communications ( icc ) , london , uk _ , pp .66636668 , 2015 .g. v. rossi , k. k. leung , and a. gkelias , `` density - based optimal transmission for throughput enhancement in vehicular ad - hoc networks , '' in _ communications ( icc ) , 2015 ieee international conference on _ , pp . 65716576 , ieee , 2015 .
in the ieee 802.11p standard addressing vehicular communications , basic safety messages ( bsms ) can be bundled together and relayed so as to increase the effective communication range of transmitting vehicles . this process forms a vehicular ad hoc network ( vanet ) for the dissemination of safety information . the number of `` shortest multihop paths '' ( or geodesics ) connecting two network nodes is an important statistic which can be used to enhance throughput , validate threat events , protect against collusion attacks , infer location information , and also limit redundant broadcasts , thus reducing interference . to this end , we analytically calculate for the first time the mean and variance of the number of geodesics in 1d vanets .
for complex compressible flows involving multiphysics phenomenons like e.g. high - speed elastoplasticity , multimaterial interaction , plasma , gas - particles etc . , a lagrangian description of the flow is generally preferred . to achieve robustness , some spatial remapping on a regular mesh may be added .a particular case is the family of the so - called lagrange+remap schemes , also referred to as remapped lagrange solvers that apply a remap step on a reference ( eulerian ) mesh after each lagrangian time advance .legacy codes implementing remapped lagrange solvers usually define thermodynamical variables at cell centres and velocity at cell nodes ( see figure [ fig:1 ] ) . in poncet et al . , we have achieved a multicore node - based performance analysis of a reference lagrange - remap hydrodynamics solver used in industry . by analyzing each kernel of the whole algorithm , using roofline - type models on one side and refined execution cache memory ( ecm ) models , on the other side ,we have been able not only to quantitatively predict the performance of the whole algorithm with relative errors in the single digit range but also to identify a set of features that limit the whole performance .this can be roughly summarized into three points : 1 . for typical mesh sizes of real applications ,spatially staggered variables involve a rather big amount of communication to / from cpu caches and memory with low arithmetic intensity , thus lowering the whole performance ; 2 .usual alternating direction ( ad ) strategies ( see the appendix in ) or ad remapping procedures also generate too much communication with a loss of cpu occupancy .3 . for multimaterial flows using vof - based interface reconstruction methods, there is a strong loss of performance due to some array indirections and noncoalescent data in memory .vectorization of such algorithms is also not trivial . from these observations and as a result of the analysis, we decided to `` rethink '' lagrange - remap schemes , with possibly modifying some aspects of the solver in order to improve node - based performance of the hydrocode solver .we have looked for alternative formulations that reduce communication and improve both arithmetic intensity and simd ( single instruction multiple data ) property of the algorithm . in this paper, we describe the process of redesign of lagrange - remap schemes leading to higher performance solvers .actually , this redesign methodology also gave us ideas of innovative eulerian solvers .the emerging methods , named lagrange - flux schemes , appear to be promising in the extended computational fluid dynamics community .the paper is organized as follows . in section [ sec:2 ] ,we first formulate the requirements for the design of better lagrange - remap schemes . in section [ sec:3lag ]we give a description of the lagrange step and formulate it under a finite volume form . in section [ sec:3 ]we focus on the remap step which is reformulated as a finite volume scheme with pure convective fluxes .this interpretation is applied in section [ sec:4 ] to build the so - called lagrange - flux schemes .we also discuss the important issue of achieving second order accuracy ( in both space and time ) . 
in section [ sec:5 ]we comment the possible extension to multimaterial flow with the use of low - diffusive and accurate interface - capturing methods .we will close the paper by some concluding remarks , work in progress and perspectives .starting from `` legacy '' lagrange - remap solvers and related observed performance measurements , we want to improve the performance of these solvers by modifying some of the features of them but under some constraints and requirements : 1 . a lagrangian solver ( or description ) must be used ( allowing for multiphysics coupling ) .2 . to reduce communication , we prefer using collocated cell - centered variables rather than a staggered scheme .3 . to reduce communication , we prefer using a direct multidimensional remap solver rather than splitted alternating direction projectionsthe method can be simply extended to second - order accuracy ( in space and time ) .the solver must be able to be naturally extended to multimaterial flows . before going further ,let us comment the above requirements .the second requirement should imply the use of a cell - centered lagrange solver .fairly recently , desprs and mazeran in and maire and et al . ( with high - order extension in ) have proposed pure cell - centered lagrangian solvers based on the reconstruction of nodal velocities . in our study , we will examine if it is possible to use approximate and simpler lagrangian solvers in the lagrange+remap context , in particular for the sake of performance .the fourth assertion requires a full multidimensional remapping step , probably taking into account geometric elements ( deformation of cells and edges ) if we want to ensure high - order accuracy remapping . to summarize , our requirements are somewhat contradictory , and we have to find a good compromise between some simplifications - approximations and a loss of accuracy ( or properties ) of the numerical solver .as example , let us consider the compressible euler equations for two - dimensional problems . denoting , and the density , velocity , pressure and specific total energy respectively , the mass , momentum and energy conservation equations read where , , , and . for the sake of simplicity , we will use a perfect gas equation of state , ] .let us now focus into these two steps and the way to solve them .after the lagrange step , if we solve the backward convection problem over a time interval using a lagrangian description , we have actually , from the cell we go back to the original cell with conservation of the conservative quantities . for ( conservation of mass ), we have showing the variation of density by volume variation . for ,it is easy to see that both velocity and specific total energy are kept unchanged is this step : thus , this step is clearly computationally inexpensive . from the discrete field defined on the eulerian cells , we then solve the forward convection problem [ eq:5 ] over a time step under an eulerian description .a standard finite volume discretization of the problem will lead to the classical time advance scheme for some interface values defined from the local neighbor values .we finally get the expected eulerian values at time .+ notice that from and we have also thus completely defining the remap step under the finite volume scheme form .we find that we no more need any mesh intersection or geometric consideration to achieve the remapping process .the finite volume form is now suitable for a straightforward vectorized simd treatment . 
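as an illustration of this flux - balanced form of the remap , a one - dimensional first - order sketch is given below : the interface velocities are those produced by the lagrange step , and the muscl variant discussed next simply replaces the upwind cell value by a reconstructed one . this is a sketch of the flux form only , not the reference solver .

```python
# 1-d remap step written as a finite-volume update with purely convective
# fluxes (first-order upwind, periodic boundaries). q holds the conservative
# variables (rho, rho*u, rho*E) per cell; u_face holds interface velocities.
import numpy as np

def remap_upwind(q, u_face, dt, dx):
    """q: (3, n) cell averages, u_face: (n+1,) interface normal velocities."""
    n = q.shape[1]
    q_ext = np.concatenate([q[:, -1:], q, q[:, :1]], axis=1)   # ghost cells
    flux = np.empty((3, n + 1))
    for i in range(n + 1):
        upwind = q_ext[:, i] if u_face[i] >= 0.0 else q_ext[:, i + 1]
        flux[:, i] = u_face[i] * upwind
    return q - dt / dx * (flux[:, 1:] - flux[:, :-1])
```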
fromit is easy to achieve second - order accuracy for the remapping step by usual finite volume tools ( muscl reconstruction + second - order accurate time advance scheme for example ) .let us note that the lagrange+remap scheme is actually a conservative finite volume scheme : putting into gives for all : that can also be written we recognize in pressure - related fluxes and convective fluxes that define the whole numerical flux .the finite volume formulation is attractive and seems rather simple at first sight .but we should not forget that we have to compute a velocity lagrange vector field where the variables should be located at cell nodes to return a well - posed deformation .moreover , expression involves geometric elements like the length of the deformed edges . among the rigorous collocated lagrangian solvers ,let us mention the glace scheme by desprs - mazeran and the cell - centered eucclhyd solver by maire et al .both are rather computationally expensive and their second - order accurate extension is not easy to achieve . although it is possible to couple these lagrangian solvers with the flux - balanced remapping formulation , it is also of interest to think about ways to simplify or approximate the lagrange step without losing second - order accuracy .one of the difficulty in the analysis of lagrange - remap schemes is that , in some sense , space and time are coupled by the deformation process .below , we derive a formulation that leads to a clear separation between space and time , in order to simply control the order of accuracy .the idea is to make the time step tend to zero in the lagrange - remap scheme ( method of lines ) , then exhibit the instantaneous spatial numerical fluxes through the eulerian cell edges that will serve for the construction of an explicit finite volume scheme .because the method needs an approximate riemann solver in lagrangian form , we will call it a lagrange - flux scheme .from the intermediate conclusions of the discussion [ sec:44 ] above , we would like to be free from any rather expensive collocated lagrangian solver .however , such a lagrangian solver seems necessary to correctly and accurately define the deformation velocity field at time . inwhat follows , we are trying to deal with time accuracy in a different manner .let us come back to the lagrange+remap formula .let us consider a `` small '' time step that fulfils the usual stability cfl condition .we have by making tend to zero , ( ) , we have , , , , then we get a semi - discretization in space of the conservation laws .that can be seen as a particular method of lines ( ) : we get a classical finite volume method with a numerical flux whose components are in , pressure fluxes and interface normal velocities can be computed from an approximate riemann solver in lagrangian coordinates ( for example the lagrangian hll solver , see ) . then , the interface states should be computed from an upwind process according to the sign of the normal velocity .this is interesting because the resulting flux has similarities with the so - called advection upstream splitting method ( ausm ) flux family proposed by liou , but the construction here is different and , in some sense , justifies the ausm splitting . 
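a possible sketch of the resulting interface flux is given below for the 1d euler equations with a perfect gas : ( u * , p * ) come from a simple acoustic riemann solver in lagrangian coordinates ( one admissible choice among others ) , and the convected state is upwinded according to the sign of u * .

```python
# Lagrange-flux style interface flux for 1-d Euler with a perfect gas:
# flux = u* * q_upwind + (0, p*, p* u*), with (u*, p*) from an acoustic solver.
import numpy as np

GAMMA = 1.4

def primitive(q):
    rho, mom, ener = q
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u * u)
    return rho, u, p

def lagrange_flux(qL, qR):
    rhoL, uL, pL = primitive(qL)
    rhoR, uR, pR = primitive(qR)
    cL = np.sqrt(GAMMA * pL / rhoL); cR = np.sqrt(GAMMA * pR / rhoR)
    a = max(rhoL * cL, rhoR * cR)              # acoustic impedance estimate
    u_star = 0.5 * (uL + uR) - 0.5 * (pR - pL) / a
    p_star = 0.5 * (pL + pR) - 0.5 * a * (uR - uL)
    q_up = qL if u_star >= 0.0 else qR         # upwinded convected state
    return u_star * q_up + np.array([0.0, p_star, p_star * u_star])
```

each face flux then enters the usual conservative update of the cell averages , and second - order accuracy follows from the muscl reconstruction of the left / right states and the two - step time scheme discussed next .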
to get higher - order accuracy in space, one can use a standard muscl reconstruction + slope limiting process involving classical slope limiters like for example sweby s limiter function : with ] is made of two constant states and with initial discontinuity at .we successively test the method on two uniform mesh grids made of 100 and 400 cells respectively .the final computational time is and we use a cfl number equal to 0.25 and a limiter coefficient . on figure[ fig : sod ] , one can observe a nice behavior of the euler solver , with sharp discontinuities and a low numerical diffusion into the rarefaction fan even for the coarse grid .[ [ two - rarefaction - shock - tube ] ] two - rarefaction shock tube + + + + + + + + + + + + + + + + + + + + + + + + + + the second reference example is a case of two moving - away rarefaction fans under near - vacuum conditions ( see toro ) . it is known that the roe scheme breaks down for this case .the related riemann problem is made of the left state and right state .the final time of .we again test the method of a coarse mesh ( 200 points ) and a fine mesh ( 2000 points ) .numerical results are given in figure [ fig : cavit ] .the numerical scheme appears to be robust especially in near - vacuum zones where both density and pressure are close to zero .[ [ case - with - sonic - rarefaction - and - supersonic - contact ] ] case with sonic rarefaction and supersonic contact + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the following shock tube case with initial data and generates a sonic 1-rarefaction , a supersonic 2-contact discontinuity and a 3-shock wave .the final time is and we use 400 mesh points , .numerical results show a good capture of the rarefaction wave , without any non - entropic expansion - shock ( see figure [ fig:4 ] ) . [[ case - of - shock - shock - hypersonic - shock - tube ] ] case of shock - shock hypersonic shock tube + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + this last shock tube problem is a violent flow case made of two hitting fluids with and .both 1-wave and 3-wave are shock waves , and the right state has a mach number of order 40 .final time is , we use 400 grid points and the limiter coefficient is here ( equivalent to the minmod limiter ) .one can observe of very nice behavior of the solver : there is no pressure or velocity oscillations at the contact discontinuity , and the numerical scheme preserves the positivity of density , pressure and internal energy ( see figure [ fig:5 ] ) .in this section , we compare the new lagrange - flux scheme to the reference ( staggered ) lagrange - remap scheme in terms of millions of cell updates per second ( denoted hereafter as mcups ) .tests are performed on a standard cores intel sandy bridge server e5 - 2670 .each core has a frequency of 2.6 ghz , and supports intel s avx ( advanced vector extension ) vector instructions . for multicore support, we use the multithreading programming interface ` openmp ` . in the referencestaggered lagrange - remap solver ( see ) , thermodynamic variables are defined at grid cell centers while velocity variables are defined at mesh nodes . due to this staggered discretization and the alternating direction ( ad ) remapping procedures , this solver is decomposed into nine kernels .this decomposition mechanically decreases the mean arithmetic intensity ( ai ) of the solver . 
on the other hand, the lagrange-flux algorithm consists of only two kernels with a relatively high arithmetic intensity, which leads to two compute-bound (cb) kernels. in the first kernel, named `predictionlagrangeflux()`, an appropriate riemann solver is called, face fluxes are computed and variables are updated for the prediction. the second kernel, named `correctionlagrangeflux()`, is close in terms of algorithmic steps, since it also uses a riemann solver, computes fluxes and updates the variables for the correction part of the solver. in order to assess the scalability and absolute performance of both schemes, we present in table [tab_perfs] a performance comparison study. first, we notice that the baseline performance, i.e. the single-core absolute performance without vectorization, is quite similar for the two schemes, as can be seen in the first column. however, the lagrange-flux scheme has better scalability, due to both vectorization and multithreading: our lagrange-flux implementation achieves a speedup of 31.1x with 16 cores and avx vectorization (while the ideal speedup is 64), whereas the reference lagrange-remap algorithm reaches a speedup of only 14.8x. this difference is mainly due to the memory-bound kernels composing the reference lagrange-remap scheme. indeed, speedups due to avx vectorization and multithreading are not ideal for kernels with relatively low arithmetic intensity, since memory bandwidth is shared between cores.

[table:multicore] performance comparison between the reference lagrange-remap solver and the lagrange-flux solver in millions of cell updates per second (mcups), using different machine configurations. scalability (last column) is computed as the speedup of the multithreaded vectorized version compared to the baseline purely sequential version. tests are performed for fine meshes, such that kernel data lies in dram memory. the lagrange-flux solver exhibits superior scalability, because it has by design better arithmetic intensity.

although this is not the aim and scope of the present paper, we would like to give an outline of a possible extension of lagrange-flux schemes to compressible multimaterial / multifluid flows, i.e. flows that are composed of different immiscible fluids separated by free boundaries. for pure lagrange+remap schemes, vof-based interface reconstruction (ir) algorithms are usually used (young's plic, etc.). after the lagrangian evolution, fluid interfaces are reconstructed in the cells that host more than one fluid. during the remapping step, one has to evaluate the mass fluxes per material. from the point of view of computing performance, this process generally slows down the whole solver because of many array indirections in memory and the specific treatment of mixed cells along the material interfaces. if the geometry of the lagrangian cells is not completely known (as in the case of lagrange-flux schemes), we have to proceed differently anyway. a possibility is to use interface capturing (ic) schemes, e.g. conservative eulerian schemes that evaluate the convected mass fluxes through eulerian cell edges. this can be achieved by the use of antidiffusive / low-diffusive advection solvers in the spirit of després-lagoutière's limited-downwind scheme or of vofire.
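a minimal sketch of this conservative per-material transport is given below: the partial mass rho*c is updated with a flux built from the mass flux already available at each eulerian face and an upwinded mass fraction. in the actual method, the plain upwind value would be replaced by a low-diffusive, multidimensionally limited interface value, which is not reproduced here; all names are ours.

```python
import numpy as np

def partial_mass_flux(c, mass_flux):
    """Flux of the partial mass rho*c through interior Eulerian faces.
    mass_flux[i] is the (already computed) mass flux at face i+1/2; taking the
    upwind mass fraction keeps the per-material update conservative."""
    c_up = np.where(mass_flux >= 0.0, c[:-1], c[1:])
    return c_up * mass_flux

def update_partial_mass(rho_c, c, mass_flux, dx, dt):
    """Explicit conservative update of rho*c in the interior cells."""
    f = partial_mass_flux(c, mass_flux)
    out = rho_c.copy()
    out[1:-1] -= dt / dx * (f[1:] - f[:-1])
    return out
```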
in a recent work, we have analyzed the origin of known artifacts and numerical interface instabilities for this type of solver and concluded that the reconstruction of fully multidimensional gradients with multidimensional gradient limiters was necessary. thus, we decided to use low-diffusive advection schemes with a multidimensional limiting process (mlp) in the spirit of . the resulting method is quite accurate, shape-preserving and free from artifacts. we show some numerical results in the next two subsections. let us emphasize that the interface capturing strategy fits perfectly with the lagrange-flux flow description, and the resulting schemes are well suited to vectorization (simd) with data coalescence in memory.

let us first present numerical results on a pure scalar linear advection problem. the forward-backward advection case proposed by rider and kothe is a hat-shaped function which is advected and stretched in a rotating vector field, leading to a filament structure. then, by applying the opposite velocity field, one has to retrieve the initial disk shape. in figure [fig:kr2], we show the numerical solutions obtained on a grid for both the passive scalar field and the quantity that indicates the numerical spread rate of the diffuse interface. one can conclude that the method behaves well, providing stability, accuracy and shape preservation (see the panels of figure [fig:kr2]).

we then consider our interface capturing method for multifluid hydrodynamics. because material mass fractions are advected, one could use the advection solver directly on these variables, but we prefer dealing with the conservative form of the equations in order to enforce mass conservation (see also ). it is known that eulerian interface-capturing schemes generally produce spurious pressure oscillations at material interfaces ( ). some authors propose locally non-conservative approaches to prevent pressure oscillations. here we have a fully conservative eulerian strategy involving a specific limiting process which is free from pressure oscillations at interfaces, providing strong robustness. this will be explained in a forthcoming paper.

the multimaterial lagrange-flux scheme is tested on the reference "triple point" test case, found e.g. in loubère et al. this problem is a three-state two-material 2d riemann problem in a rectangular vessel. the simulation domain is as described in figure [fig:triple]. the domain is split into three regions filled with two perfect gases, leading to a two-material problem. perfect gas equations of state are used with and . due to the density differences, two shocks propagate in the sub-domains with different speeds. this creates a shear along the initial contact discontinuity and the formation of a vortex. capturing the vorticity is of course the difficult part of the computation. we use a rather fine mesh made of points (about 1.8 m cells). in figure [fig:triple1], we plot the density, pressure and temperature fields respectively and indicate the location of the three material zones. one can observe a nice capture of both shocks and contact discontinuities. the vortex is also captured accurately (see the panels of figure [fig:triple1]).
]this paper is primarily focused on the redesign of lagrange - remap hydrodynamics solvers in order to achieve better hpc node - based performance .we have reformulated the remapping step under a finite volume flux balance , allowing for a full simd algorithm . as an unintended outcome, the analysis has lead us to the discovery of a new promising family of eulerian solvers the so - called lagrange - flux solvers that show simplicity of implementation , accuracy , and flexibility with a high - performance capability compared to the legacy staggered lagrange - remap scheme .interface capturing methods can be easily plugged for solving multimaterial flow problems .ongoing work is focused of the effective performance modeling , analysis and measurement of lagrange - flux schemes with comparison of reference `` legacy '' lagrange - remap solvers including multimaterial interface capturing on different multicore processor architectures .because of the multicore+vectorization scalability of lagrange - flux schemes , one can also expect high - performance on manycore co - processors like graphics processing units ( gpu ) or intel mic. this will be the aim of next developments .r. poncet , m. peybernes , t. gasc and f. de vuyst , performance modeling of a compressible hydrodnamics solver on multicore cpus , proceedings of the int .conf . on parallel computing parco2015 , edinburgh , 2015 ( in press ) .hirt , a.a .amsden and j.l .cook , an arbitrary lagrangian eulerian computing method for all flow speeds .journal of computational physics , 14:227253 ( 1974 ) .d. benson , computational methods in lagrangian and eulerian hydrocodes , cmame , 99(2 - 3 ) , 235394 ( 1992 ) .d. youngs , the lagrange - remap method , in _ implicit large eddy simulation : computing turbulent flow dynamics _ , f.f .grinstein , l.g .margolin and w.j .rider ( eds ) , cambridge university press ( 2007 ) .s. williams , a. waterman , and d. patterson : roofline : an insightful visual performance model for multicore architectures , commun .acm , 52 , pp 6576 ( 2009 ) .j. treibig and g. hager , introducing a performance model for bandwidth - limited loop kernels .proceedings of the workshop `` memory issues on multi- and manycore platforms '' at ppam 2009 , lecture notes in computer science , 6067 , pp . 615624 ( 2010 ) .h. stengel , j. treibig , g. hager and g. wellein , quantifying performance bottlenecks of stencil computations using the execution - cache - memory model .ics15 , the 29th int. conf . on supercomputing , 2015 ,doi : 10.1145/2751205.2751240 .p. colella and p.r .woodward , the numerical simulation of two - dimensional fluid flow with strong shocks , j. comput .phys.,54:115173 ( 1984 ) .b. desprs and c. mazeran , lagrangian gas dynamics in two dimensions and lagrangian systems , arch .rational mech .178 ( 2005 ) 327372 .maire , r. abgrall , j. breil and j. ovadia , a cell - centered lagrangian scheme for compressible flow problems , siam j. sci .29 ( 4 ) ( 2007 ) 17811824 .maire , a high - order cell - centered lagrangian scheme for two - dimensional compressible fluid flows on unstructured meshes , j. comput .228 ( 2009 ) 23912425 .j. k. dukowicz and j. r. baumgardner , incremental remapping as a transport / advection algorithm .j. comput .phys . , 160 , 318335 ( 2000 ) .w. e. schiesser , the numerical method of lines , academic press , isbn 0 - 12 - 624130 - 9 ( 1991 ) .toro , riemann solvers and numerical methods for fluid dynamics , 3rd edition , springer ( 2010 ) .liou , a sequel to ausm : ausm+ , j. 
comp .phys . , 129(2 ) , 364382 ( 1996 ) .sod , a survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws " .j. comput .27 : 131 ( 1971 ) .sweby , high resolution schemes using flux - limiters for hyperbolic conservation laws , siam j. num .21 ( 5 ) : 9951011 ( 1984 ) .b. desprs and f. lagoutire , contact discontinuity capturing schemes for linear advection and compressible gas dynamics , j. sci .comp . , 16(4 ) , 479524 ( 2001 ) .b. desprs , f. lagoutire , e. labourasse and i. marmajou , an antidissipative transport scheme on unstructured meshes for multicomponent flows , ijfv , 3065 ( 2010 ) .f. de vuyst , m. bchereau , t. gasc , r. motte , m. peybernes and r. poncet , stable and accurate low - diffusive interface capturing advection schemes , submitted to proc . of the multimat2015 conference wrsburg , special issue of the ijnmf ( 2015 ) .j.s . park and c. kim , multi - dimensional limiting process for discontinuous galerkin methods on unstructured grids , chapter in computational fluid dynamics 2010 , springer , 179184 ( 2011 ) .a.j . rider and d.b .kothe , reconstructing volume tracking , j. comput .phys . , 141(2 ) , 112152 ( 1998 ) .a. bernard - champmartin and f. de vuyst , a low diffusive lagrange - remap scheme for the simulation of violent air - water free - surface flows , j. of comput .physics , 274 , 1949 ( 2014 ) .r. abgrall , how to prevent pressure oscillations in multicomponent flow calculations : a quasi conservative approach , j. comp .phys . , 125(1 ) , 150160 ( 1996 ) .r. saurel and r. abgrall , a simple method for compressible multifluid flows , siam j. sci .comput . , 21(3 ) , 11151145 ( 1999 ) .c. farhat , a. rallu and s. shankaran , a higher - order generalized ghost fluid method for the poor for the three - dimensional two - phase flow computation of underwater implosions , j. comp .phys , 227(16 ) , 7640 - 7700 ( 2008 ) .m. bachmann , p. helluy , j. jung , h. mathis and s. mller , random sampling remap for compressible two - phase flows , computers and fluids , 86 , 275283 ( 2013 ) .r. loubre , p .- h .maire , m. shashkov , j. breil and s. galera , reale : a reconnection - based arbitrary - lagrangian - eulerian method , j. comp .phys . , 229 , 47244761 ( 2010 ) .
in a recent paper, we carried out a performance analysis of staggered lagrange-remap schemes, a class of solvers widely used for hydrodynamics applications. this paper is devoted to the rethinking and redesign of the lagrange-remap process in order to achieve better performance on today's computing architectures. as an unintended outcome, the analysis has led us to the discovery of a new family of solvers, the so-called lagrange-flux schemes, that appear to be promising for the cfd community.
as a first deliverable within their scope and work programme , the mobile phone work group of the trusted computing group ( tcg mpwg ) has published a specification , which offers new potentials for implementing trust in mobile computing platforms by introducing a new , hardware - based trust anchor for mobile phones and devices .this trust anchor , called a mobile trusted module ( mtm ) , has properties and features comparable to a trusted platform module ( tpm , see ) .concurrently the mpwg issued a much more universal security architecture for mobile phones and devices on a higher abstraction level .the pertinent specification is called _ tcg mobile reference architecture ( ra ) _ and abstracts a trusted mobile platform as a set of tamper resistant trusted engines operating on behalf of different stakeholders .this architecture offers a high degree on flexibility and modularity in design and implementation of the trusted components to all participants in hard- and software development .an important aspect of the _ tcg mobile reference architecture _ is the potential to virtualise significant parts of a trusted mobile platform as trusted software applications and services .the trusted execution chain for this rests on the mtm .the implementation of this chip depends on the security requirements of its specific use - case . for high levels of protection and isolation ,an mtm could be implemented as a slightly modified trusted platform module ( tpm ) .this enables cost - effective implementation of new security - critical applications and various innovative business models , in both the mobile and generic computing domain .the present paper discusses the main structural features of the ra , highlighting the capabilities of the mtm as the main functional building block . after this technology review ,we propose two basic methods for usage of the ra , namely the set - up of a trusted subsystem on a device by a remote owner , and its migration from one device to another .this paper is organised as follows . in section [ section :tcg_mpwg_architecture ] , we explore the significant parts of the _ mpwg reference architecture_. it is divided into four parts .subsection [ section : tcg_mpwg_architectural_overview ] gives an overview of the security architecture , and subsection [ section : mobile_trusted_module ] details the concepts of the proposed architectural approach for an mtm and the requirements to virtualise its functionality , whereby a high security and isolation level is maintained .furthermore , we propose a model for remote stakeholder take - ownership in [ section : ro_takeownership ] and migration of trusted subsystems in [ section : migration_remote_stakeholder_subsystems ] . in section[ section : design_trusted_engines ] , we show how such an architecture can be implemented on trustworthy operating platforms .the tcg mpwg has developed an architecture on a high level of abstraction for a trusted mobile platform , which offers numerous variations for design and implementation . 
in this section ,we reflect essential parts of this architecture and an overview of significant platform components in terms of our objective .a trusted mobile platform is characterised as a set of multiple tamper - resistant engines , each acting on behalf of a different stakeholder .broadly , such an platform has several major components : trusted engines , trusted services customised by trusted resources .a general trusted mobile platform is illustrated in figure [ fig : tcg_mpwg_trusted_mopile_platform_architecture ] .we define a trusted subsystem as a logical unit of a trusted engine together with its interrelated hardware compartment .a of a stakeholder can formally described by a tuple in each trusted subsystem either a remote or local entity acts as a stakeholder , who is able to configure its own subsystem and define his security policy and system configuration within an isolated and protected environment .the _ mpwg reference architecture _ specifies the following principal entities : the local stakeholders _ device owner _ and _ user _ ; and the remote stakeholders _ device manufacturer _ , and more general _ remote owners _ ( e.g. a communication carrier , or service provider ) .the functionality of a is either based on dedicated resources of an embedded engine , or may be provided by trusted services of external engines .each subsystem is able to enforce its security policy and subsystem configuration . as a consequence, the functionality of a trusted subsystem is constrained by the available resources with their derived trusted services , by the offered functionality of external trusted services , by the security policy , and finally the system configuration of an engine s stakeholder .all internal functions executed inside are isolated from other subsystems by the underlying security layer and is only accessible if a proper service interface is defined and exported .a relies on the reputation of the stakeholder as basis for that trust .therefore , each stakeholder issues a security policy and a set of credentials belonging to embedded trusted components of its subsystem .this policy contains reference measurements ( rim ) , quality assertions and security - critical requirements . the most important concept within the _ mpwg reference architecture_ is that of trusted engines . the purpose of a trusted engine is to provide confidence in all its embedded services ,which are internally or externally provided by the engine .it is a protected entity on behalf of a specific stakeholder that has special abilities to manipulate and store data , and to provide evidence of its trustworthiness and the current state of the engine .figure [ fig : tcg_mpwg_generic_engine ] shows a generic trusted engine . in general , each engine has at least following abilities : * implement arbitrary software functionalities as trusted and/or normal services , * provide the evidence for its trustworthiness , * report the evidence of its current state , * obtain and use endorsement keys ( ek ) and/or attestation identity keys ( aik ) , * access a set of trusted resources , and * import and/or export services , shielded capabilities and protected functionality . in order to establish a definite categorisation, the mpwg differentiates engines according to their functional dispensability .therefore , an engine is either dedicated to a mandatory ( of or ) or a discretionary domain ( of ) . 
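one possible way to mirror this decomposition in code is sketched below. this is only a toy data model: the field names are ours and are much simpler than the actual tcg data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SecurityPolicy:
    """Stakeholder-issued policy: reference integrity metrics (RIMs), quality
    assertions and security-critical requirements."""
    rims: Dict[str, bytes] = field(default_factory=dict)     # component name -> reference digest
    quality_assertions: List[str] = field(default_factory=list)
    requirements: List[str] = field(default_factory=list)

@dataclass
class TrustedEngine:
    stakeholder: str                      # device manufacturer, device owner, user, remote owner, ...
    domain: str                           # "mandatory" or "discretionary"
    trusted_services: List[str] = field(default_factory=list)
    normal_services: List[str] = field(default_factory=list)

@dataclass
class TrustedSubsystem:
    """A stakeholder's subsystem: an engine plus its hardware compartment,
    its security policy and its configuration."""
    engine: TrustedEngine
    mtm_instance: str                     # handle to the (v)MTM serving this subsystem
    policy: SecurityPolicy
    configuration: Dict[str, str] = field(default_factory=dict)
```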
engines inside a mandatory domain are permanently located on a trusted platform and hold security - critical and essential functionality .all essential services of a trusted mobile platform should be located inside the mandatory domain , which does _ not permit a local stakeholder to remove a remote owner from the engine_. mandatory engines have access to a _ mobile remote owner trusted module ( mrtm ) _ to guarantee that a valid and trustworthy engine state is always present .non - essential engines and services are replaceable by the device owner and should be located inside the discretionary domain .engines inside the discretionary domain are controlled by the device owner .discretionary engines are required to be supported by a _mobile local owner trusted module ( mltm)_. as illustrated in figure [ fig : tcg_mpwg_generic_engine ] , an internal trusted service has access to several trusted resources .the tcg calls these resources _ root - of - trusts ( rot ) _ representing the trusted components acting on base of the trusted execution chain and providing functionality for measurement , storing , reporting , verification and enforcement that affect the trustworthiness of the platform .the following rots are defined for the mobile domain : * root of trust for storage ( rts ) , * root of trust for reporting ( rtr ) , * root of trust for measurement ( rtm ) , * root of trust for verification ( rtv ) , and * root of trust for enforcement ( rte ) each rot vouches its trustworthiness either directly by supplied secrets ( ek , aik ) and associated credentials , which are only accessible by authenticated subjects of the stakeholder , or indirectly by measurements of other trusted resources .these resources are only mutable by authorised entities of a stakeholder . in this paper, we group several logically self - contained rots to simplify the presentation of interfaces and the communication layer . in a typical arrangement ,the rts and rtr represent one unit , while the rtm and rtv build another unit within an .however , note that the rtv and the rtm depend on protected storage mechanisms , which are provided by the rts .thus , it is also plausible to implement all rots together as a common unit within an engine .* rts / rtr * are the trusted resources that are responsible for secure storage and reliable reporting of information about the state of trusted mobile platform .an rts provide pcrs and protected storage for an engine and stores the measurements made by the rtm , cryptographic keys , and security sensitive data .an rtr signs the measurements with cryptographic signature keys of .* rtm / rtv * in general , an rtm is a reliable instance to measure software components and provide evidence of the current state of a trusted engine and its embedded services . in the mobile domain , to avoid communication costs , this functionality is extended by a local verifier , which checks the measurements against a given _ reference integrity metrics _ ( rim ) .this process can be done instantly as the measurements are performed employing a combination of rtm and rtv .figure [ fig : tcg_mpwg_measurement_and_verification ] depicts such a _measure _ process .an * rte * is required if an engine uses allocated resources and services . 
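the measure-and-verify chain performed by the rtm and rtv (and recorded through the rts) can be sketched as follows. the sketch assumes sha-1 digests, as used by tpm 1.2-era modules; the helper names are ours, and in a real platform the rim would come from a rim certificate issued by the stakeholder rather than being computed locally.

```python
import hashlib

def measure(component_image):
    """RTM: measure (hash) a software component before control is passed to it."""
    return hashlib.sha1(component_image).digest()

def extend(pcr, measurement):
    """RTS: TPM/MTM-style extend of a platform configuration register: new = H(old || m)."""
    return hashlib.sha1(pcr + measurement).digest()

def verified_launch(component_image, rim, pcr):
    """RTV: compare the fresh measurement against the stakeholder's reference
    integrity metric (RIM); only on success is the PCR extended and the component started."""
    m = measure(component_image)
    if m != rim:
        raise RuntimeError("measurement does not match the RIM: refuse to launch")
    return extend(pcr, m)

# usage sketch: here the RIM is derived from a "golden" image for illustration only
golden_image = b"...trusted engine image..."
rim = measure(golden_image)
pcr = verified_launch(golden_image, rim, pcr=b"\x00" * 20)
```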
in this case , such rot acts as a trusted boot loader and ensures the availability of all allocated trusted resources and services within that trusted subsystem .a trusted engine integrates all functionality by customising available platform resources as software services .such a service offers computation , storage , or communication channels to other internal or external services and applications based on dedicated or allocated resources .the mpwg categorises them into : trusted , normal , and measured services .a trusted service customises trusted resources .thus , a trusted service is implicitly supplied with an or in order to attest its trustworthiness .trusted services are intended to provide reliable measurements of their current state and to provide evidence of the state of other normal services or resources .normal services are customising normal resources and implement functionality , but are not able to provide evidence of their trustworthiness by own capabilities .however , normal services can access internal trusted services to use their provided functionality .therefore , an internal normal services is able to vouch its trustworthiness by associated integrity metrics that have been measured by a trusted service . the generic term _mobile trusted module ( mtm ) _ refers to a dedicated hardware - based trust - anchor .it is typically composed of an rts and rtr and has characteristics comparable to a tpm .according to their design objective the mpwg distinguishes between mrtm and mltm .both must support a subset of tpm commands as specified in . additionally , an mrtm has to support a set of commands to enable local verification and specific mobile device functionality .the _ tcg mpwg reference architecture _ does not exclude to utilise a tpm v1.2 ( or even a tpm v1.1 ) as an mtm , if an appropriate interface consisting of a set of commands conforming to the mpwg specification and associated data structures are provided .although it is possible to implement this architecture upon a standard tpm , we here focus on a more sophisticated solution based on a trustworthy computing platform such as emscb / turaya . in this context , we expect three different solutions for isolation , key management and protection of .a * standard tpm - based model * uses a non - modified standard tpm to build the trusted computing base .the secret keys are stored into a single key - hierarchy on behalf of as specified in . in this case ,an adversary or malicious local owner may be able to access the secret keys of a remote stakeholder and take control of a remote owner compartment .a can also disable the whole mtm or corrupt mandatory engines of remote stakeholders .a * software - based mtm - emulation model * uses a software - based allocated -emulation with an isolated key - hierarchy .all sensitive and security - critical , such as or , are only protected by software mechanisms outside of the tamper - resistant environment of a dedicated mtm . *generic mtm - based model supporting multiple stakeholders and virtual mtms . * in order to circumvent resulting drawbacks and mitigate attacks , we favour a solution with a higher level of security .for this reason , we adopt the proposed secure co - processor variant of and describe a generic mtm with support for multiple stakeholder environments . 
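a toy sketch of how such a generic, multi-stakeholder mtm could dispatch commands to isolated per-stakeholder instances is given below; the proxy / instance-manager split is detailed in the next paragraph, and the class and method names are ours rather than part of the specification.

```python
class VMTMInstanceManager:
    """Toy model of the instance manager inside the generic MTM: it keeps one
    isolated vMTM state (key hierarchy, PCRs, ...) per stakeholder engine and
    dispatches commands to that state only."""
    def __init__(self):
        self._instances = {}                       # engine id -> isolated vMTM state

    def create_instance(self, engine_id):
        self._instances[engine_id] = {"pcrs": [b"\x00" * 20] * 16, "keys": {}}

    def execute(self, engine_id, command, *args):
        state = self._instances[engine_id]         # strict per-engine isolation
        return command(state, *args)

class VMTMProxyService:
    """Trusted-software-layer proxy that tunnels MTM commands from a trusted
    subsystem to its dedicated vMTM instance inside the physical MTM."""
    def __init__(self, manager):
        self._manager = manager

    def forward(self, engine_id, command, *args):
        # in a real platform this call would cross into the MTM through its device driver
        return self._manager.execute(engine_id, command, *args)
```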
in a cost - efficient scenario ,the trusted mobile platform is implementable based on a single generic mtm and several virtualised mtms for each trusted engine .hence , at least one dedicated mtm has to be available and additionally a unique vmtm has to be instantiated in each trusted subsystem . in such case ,a physically bounded mtm in the platform acts as a master trust anchor and offers mrtm and mltm functionality with respect to its specific use case .a trusted software layer offers a _ vmtm proxy service _ to all embedded trusted engines .the main task of this service is to route mtm commands from a to its dedicated instance .the advantage is that all security - critical mtm commands are tunnelled to and are executed within the protected environment of the dedicated mtm .figure [ fig : tcg_mpwg_vmtm_virtualization ] illustrates the architecture of a generic mtm with isolated vmtm compartments .this architecture requires a slightly modified tpm .mainly , we add a trusted component , the _ vmtm instance manager _ , which is responsible to separate vmtm instances from each other .this includes administration , isolated execution , memory management and access control for each stakeholder compartment .thus , a vmtm instance is able to hold an autonomous and hardware - protected key hierarchy to store its secrets and protect the execution of security - critical data ( e.g. signature and encryption algorithms ) .the take - ownership operation establishes the trust relationship between a stakeholder and trusted mobile platform .currently , the _ mpwg reference architecture _ does not define how a remote owner is to perform this initial setup and take - ownership of its .hence we propose a method in this section .the main idea behind our procedure is to install and instantiate a blank trusted subsystem containing a pristine engine with a set of generic trusted services .this subsystem is then certified by a remote owner , if the platform is able to provide evidence of its pristine configuration and policy conformance with respect to .figure [ fig : tcg_mpwg_vmtm_takeownership_sequence ] illustrates this process , which we now descirbe . * platform and protocol precondition . * in a preliminary stage , the trusted mobile platform has carried out the boot process and has loaded the trusted computing base and the engine with its trusted services .the trusted platform has checked that the installed hardware and running software are in a trustworthy state and configuration .it is able to report and attest this state , if challenged by an authorised entity .* remote stakeholder take - ownership protocol .* in the first phase , the trusted engine carries out a take - ownership preparation for the remote stakeholder .a blank engine is installed and booted by the , and a clean instance is activated inside the dedicated .an initial setup prepares the pristine engine .a endorsement key - pair is generated within with a corresponding certificate and . ] .next , performs an attestation of its current state .the attestation can be done by the local verifier inside the using certificates of the remote stakeholder .if no suitable and corresponding -certificate are available for an pristine engine , alternatively a remote attestation with an associated privacy ca is also possible . creates a symmetric key and encrypts the public portion of the endorsement key , the corresponding certificate , attestation and purpose information .next , encrypts with a public key and sends both messages to the remote owner . 
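the device-side message construction of this first phase might look as follows. this is only a structural sketch: it assumes the `cryptography` python package, omits the endorsement key certificate and the exact tcg data formats, assumes the remote owner's public key is pre-provisioned as stated below, and all helper names are ours.

```python
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def build_takeownership_request(ro_public_key, attestation_quote, purpose):
    """Device-side construction of the take-ownership request.

    Returns (encrypted_payload, encrypted_session_key, ek_private).
    ro_public_key is the remote owner's RSA public key, assumed to be pre-installed
    by the device manufacturer or obtained over a protected channel."""
    # 1. generate an endorsement key pair for the pristine engine's vMTM
    ek_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ek_public_pem = ek_private.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

    # 2. bundle the public EK, the attestation evidence and the intended purpose
    payload = json.dumps({
        "ek_pub": ek_public_pem.decode(),
        "attestation": attestation_quote,   # signed state measurements (local or remote attestation)
        "purpose": purpose,
    }).encode()

    # 3. encrypt the bundle with a fresh symmetric key K ...
    k = Fernet.generate_key()
    encrypted_payload = Fernet(k).encrypt(payload)

    # 4. ... and protect K itself with the remote owner's public key
    encrypted_k = ro_public_key.encrypt(
        k, padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None))
    return encrypted_payload, encrypted_k, ek_private
```

both ciphertexts would then be sent to the remote owner, as described in the protocol; the processing on the remote owner's side continues below.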
after reception by the remote stakeholder, the messages are decrypted using the private portion of key .we assume that this key is either available through a protected communication channel or pre - installed by the device manufacturer . in a next step , verifies the attestation data and checks the intended purpose of .if the engine and device attestation data is valid and the intended purpose is acceptable , the generates an individual security policy .the signs the and creates certificates for local verification of a complete .furthermore , creates a setup configuration , which enforces the engine to individualise its services and complete its configuration with respect to the intended purpose and given security policy . in this step , encrypts the messages with the public portion of the and transfers this package to the engine .finally , the trusted engine decrypts the received package and installs it inside the and thus completes its instantiation . if a stakeholder wants to move a source to another mtm - enabled platform , for instance to port user credentials from device to device , all security - critical information including the storage root key ( srk ) has to be migrated to the target . in our scenario, we assume the same remote owner ( e.g. mobile network operator ) on both subsystems and .to be able to securely migrate the srk , we suggest a modification of the current mpwg specification to allow _ inter - stakeholder - migration _ of a complete isolated key hierarchy . thus , an isolated key hierarchy is ( 1 ) migratable between environments of identical stakeholders , ( 2 ) if and only if an entitling security policy on both platforms exists .the advantage of migration between identical stakeholder subsystems is that the migration process does nt require a trusted third party .we only involve the owner in combination with local verification mechanisms of the to migrate the trusted subsystem ( including the srk ) to another platform .this enables for instance direct , device - to - device porting of credentials , e.g. using short - range communication .we here propose a complete , multilateral and secure migration protocol , which is illustrated in figure [ fig : tcg_mpwg_vmtm_migration_protocol ]. * platform and protocol precondition . * similar to section [ section : ro_takeownership ] , the trusted mobile platform has carried out the same initial steps as mentioned above .furthermore , the remote owner has performed an remote take - ownership procedure as described in [ section : ro_takeownership ] .* trusted subsystem migration protocol . * at the beginning of the migration protocol , the device owner of the source platform initialises the migration procedure and requests an appropriate migration service of .next , the trusted platform is instructed by to establish a secure channel to the target platform . after the connection is available, activates the corresponding migration service of to perform the import procedure .thereupon , the target subsystem performs a local verification of .if revoked , it replies with an error message and halts the protocol .otherwise requests an confirmation from the local owner .next , the target subsystem generates a nonce . 
in order to provide evidence of its trustworthiness, sends all necessary information to the source subsystem .this includes the current state , a certificate of , security policy and the nonce .having received the target subsystem s message , verifies the state of .if the target system is in a trustworthy state and holds an acceptable security policy and system configuration , the current state of is locked to nonce . the generates a symmetric migration key , serialises its instance and encrypts it with the migration key , which is bound to an acceptable configuration of .next , the key - blob and the encrypted instance are sent to the destination . in particular , this includes the whole isolated key - hierarchy with , the security policy , and the required subsystem configuration . finally , the target subsystem decrypts the received blob and uses as its own .the subsystem verifies the obtained security policy and the subsystem configuration . with this information, rebuilds the internal structure of the source .the source system should then be notified of the success of migration and ultimately delete the migrated key hierarchy ( or even do it before sending the migration package as indicated for simplicity in figure [ fig : tcg_mpwg_vmtm_migration_protocol ] ) .otherwise one obtains replicated trusted subsystems , by themselves indistinguishable to the remote owner . butthis may depend on the policies to be enforced in the particular use case .a prototypical implementation of the trusted engines and the specified trusted services was realised as an extension to the existing emscb / turaya computing platform .turaya is an implementation of the emscb security architecture .it provides fundamental security mechanisms and a protected and isolated execution environment , which meet the requirements of the _ mpwg reference architecture _ .figure [ fig : mtm_vsim_emscb_platform ] illustrates our model , in which a hypervisor / microkernel executes a legacy operating system in coexistence with a running instance of the emscb - based security architecture .the latter controls a virtual machine with several trusted engines and services compliant to the mpwg requirements . in the following paragraphs, we outline the significant platform layers concerning our approach .the * hardware layer * of our model includes a generic mtm as described in section [ section : mobile_trusted_module ] , in addition to conventional hardware components .this mtm acts as a dedicated master trust anchor for the complete trusted mobile platform .the * virtualisation layer * provides generic hardware abstraction , between the physical hardware of a trusted mobile platform and the _ trusted software layer _ below .the emscb project supports microkernels of the l4-family such as hypervisors . in general , all solutions provide mechanisms for resource management , inter - process - communication ( ipc ) , virtual machines , memory management and scheduling . 
in our specific case ,the virtualisation layer includes also a fully functional device driver for a dedicated generic mtm .furthermore , it is responsible for instantiation of both the trusted software layer and the legacy operating system .the * trusted software layer * provides security functionality and is responsible for isolation of embedded applications and software compartments .it also implements the _ vmtm proxy service _ as described in section [ section : mobile_trusted_module ] .currently , emscb / turaya provide an excellent foundation by its security services ( trust manager , compartment manager , storage manager ) , which are required by the rtr and rtv , protected storage and trusted engines management agent of .therefore , it is reasonable to build the significant parts of the device manufacturer engine within this layer . trusted engines within the * application layer* are implemented as parallel and isolated l4linux compartments on behalf of different stakeholders .each compartment has access to its vmtm instance through an embedded client - side device driver .this driver constrains the functionality with respect to its specific use case ( mrtlm or mltm ) .furthermore , has an , which is responsible for building all required allocated trusted resources and services depending of its specific system configuration and security policy .we have introduced the trusted engines and mtms in terms of our objective . in this context , we have exposed significant parts of the mpwg reference architecture and how it can be implemented on a ( very slightly modified ) tpm trust - anchor .we have shown how to deploy trusted virtualised compartments on devices and exhibited basic operations required in the mobile domain , such as migration . using a vmtm in lieu of a subscriber identity module ( sim )as a trusted and protected software allows expansion to a much wider field of authentication and identification management systems even on standard pc platforms .supporting online transactions by authentication via credentials held in a vmtm may be one attractive use case . however , there are some privacy and security challenges associated with this implementation on a desktop computer , which require further research. finally , replacing sims / usims by multi - purpose vsims may be attractive even for genuine mobile devices . trusted computing group , _ tcg mobile reference architecture_. specification version 1.0 , revision 1 . 12 june 2007 .nicolai kuntze and andreas u. schmidt .transitive trust in mobile scenarios . in gntermller , editor , _ proceedings of the international conference on emerging trends in information and communication security ( etrics 2006 ) _ , volume 3995 of _ lecture notes in computer science ( lncs ) _ , pages 7385 .springer - verlag , 2006 .kuntze , n. , schmidt , a.u .: trusted computing in mobile action . in venter , h.s ., eloff , j.h.p . , labuschagne , l. , eloff , m.m ., eds . : proceedings of the information security south africa ( issa ) conference ( 2006 ) nicolai kuntze and andreas u. schmidt . trusted ticket systems and applications . in _ to appear in : new approaches for security , privacy , and trust in complex systems .proceedings of the ifip sec2007 .sandton , south africa 14 - 16 may 2007_. springer - verlag , 2007 .
in its recently published _ tcg mobile reference architecture _, the tcg mobile phone work group specifies a new concept to enable trust in future mobile devices. for this purpose, the tcg devises a trusted mobile platform as a set of trusted engines acting on behalf of different stakeholders and supported by a physical trust anchor. in this paper, we present our perspective on this emerging specification. we propose an approach for the practical design and implementation of this concept and show how to deploy it on a trustworthy operating platform. in particular, we propose a method for the take-ownership of a device by the user and for the migration (i.e., portability) of user credentials between devices.
in 1961 , james and stein considering the problem of estimating the mean vector of a -dimensional normal distributed random vector with a covariance matrix introduced an estimator which outperforms the maximum likelihood estimate ( mle ) for dimension , under the common quadratic risk in the sense that for all parameter values this unexpected result draw a great interest of mathematical statisticians and stimulated a number of authors to contribute to the theory of improved estimation by extending the problem of james and stein in different directions to more general models with unknown covariance matrix and considering other types of estimates ( see for more details and other references ) .a considerable effort has been directed towards the problems of improved estimation in non - gaussian models with the spherically symmetric distributions ( see ) and in the non - parametric regression models .now the james stein estimator and other improved shrinkage estimators are widely used in econometrics and the problems associated with the signal processing . in this paperwe will consider the problem of estimating the mean in a conditionally gaussian distribution .suppose that the observation is a -dimensional random vector which obeys the equation where is a constant vector parameter , is a conditionally gaussian random vector with a zero mean and the covariance matrix , i.e. , where is some fixed -algebra .we propose to consider a shrinkage estimator of the form where is a positive constant which will be specified below .it will be shown that such an estimator allows one to obtain an explicit upper bound for the quadratic risk in case of the regression model with a conditionally gaussian noise .theorem [ le.sec:gas.1 ] in section [ sec : gas ] claims that the estimator outperforms the maximum likelihood estimate uniformly in from any compact set for any dimension starting from two . in section [ sec : per ] , we apply the estimator to solve the problem of improved parametric estimation in the regression model in continuous time with a non - gaussian noise . the rest of the paper is organized as follows . in section [ sec : gas ] , we impose some conditions on the random covariance matrix and derive to upper bound for the difference of risks corresponding to and respectively . in section [ sec : aut ] , the estimate is used for the parameter estimation in a discrete time regression with a gaussian noise depending on some nuisance parameters .appendix contains some technical results .in this section we will derive an upper bound for the risk of estimate under some conditions on the random covariance matrix .assume that there exists a positive constant , such that the minimal eigenvalue of matrix satisfies the inequality the maximal eigenvalue of the matrix is bounded on some compact set from above , i.e. where is some known positive constant .let denote the difference of the risks of estimate and that of as we will need also the following constant where , [ le.sec:gas.1 ] let the noise in have a conditionally gaussian distribution and its covariance matrix satisfy conditions with some compact set .then the estimator with dominates the mle for any , i.e. ^ 2.\ ] ] first we will establish the lower bound for the random variable .[ le.sec:gas.2 ] under the conditions of theorem 2.1 the proof of lemma is given in the appendix . 
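a small numerical illustration of the estimator and of the risk comparison stated in the theorem is given below. the shrinkage constant c used here is illustrative only, not the paper's optimal value, whose closed form is not reproduced in the text.

```python
import numpy as np

def shrinkage_estimate(y, c):
    """theta* = (1 - c/||y||) y : the norm-shrinkage estimator studied above."""
    return (1.0 - c / np.linalg.norm(y)) * y

def empirical_risks(theta, cov, c, n_rep=100_000, seed=0):
    """Monte Carlo comparison of the quadratic risks of the MLE (theta_hat = y)
    and of the shrinkage estimator theta*."""
    rng = np.random.default_rng(seed)
    y = theta + rng.multivariate_normal(np.zeros(len(theta)), cov, size=n_rep)
    risk_mle = np.mean(np.sum((y - theta) ** 2, axis=1))
    norms = np.linalg.norm(y, axis=1, keepdims=True)
    theta_star = (1.0 - c / norms) * y
    risk_star = np.mean(np.sum((theta_star - theta) ** 2, axis=1))
    return risk_mle, risk_star

# illustrative run with p = 5, identity covariance and a small shrinkage constant
p = 5
theta = np.full(p, 0.5)
print(empirical_risks(theta, np.eye(p), c=1.0))
```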
in order to obtain the upper bound for we will adjust the argument in the proof of stein s lemma to the model with a random covariance matrix .we consider the risks of mle and of r(\theta^{*},\theta)=r(\hat{\theta}_{ml},\theta)+\mathbf{e}_{\theta}[\mathbf{e}((g(y)-1)^{2}\|y\|^{2}|\mathcal{g})]\\[2 mm ] + 2\sum_{j=1}^{p}\mathbf{e}_{\theta}[\mathbf{e}((g(y)-1)y_{j}(y_{j}-\theta_j)|\mathcal{g})],\end{gathered}\ ] ] where . denoting and applying the conditional density of distribution of a vector with respect to -algebra one gets making the change of variable and assuming , one finds that where denotes the -th element of matrix .these quantities can be written as thus , the risk for an estimator takes the form |_{u = y}\right).\end{gathered}\ ] ] therefore one has where this implies that since , one comes to the inequality from here it follows that taking into account the condition and the lemma [ le.sec:gas.2 ] , one has minimizing the function with respect to , we come to the desired result ^ 2.\ ] ] hence theorem [ le.sec:gas.1 ] .[ le.sec:gas.3 ] let in the noise with the positive definite non random covariance matrix and . then the estimator with dominates the mle for any and compact set , i.e. ^ 2.\ ] ] note that if then ^ 2.\ ] ] [ le.sec:gas.4 ] if and in model then the risk of estimate is given by the formula ^ 2=:r_p.\ ] ] by applying the stirling s formula for the gamma function one can check that as .the behavior of the risk for small values of is shown in fig.1 .it will be observed that in this case the risk of the james stein estimate remains constant for all , i.e. and the risk of the mle is equal to and tends to infinity as . at ., scaledwidth=90.0% ]in this section we apply the proposed estimate to a non - gaussian continuous time regression model .let observations obey the equation here a vector of unknown parameters from some compact set .assume that is a one - periodic functions , bounded and orthonormal in ] , where is known number .it is easy to check that the covariance of the noise has the form a & 1 & ... & a^{p-2 } \\[2 mm ] & \ddots & & \\[2 mm ] a^{p-1 } & a^{p-2 } & ... & 1 \end{array } \right ) \ ] ] let in be specified by with $ ] . then for any mle is dominated by the estimator and one has that . now we find the estimation of the maximal eigenvalue of matrix . by definition onehas by applying the cauchy bunyakovskii inequalitywe obtain that thus , hence , taking into account the theorem [ le.sec:gas.1 ] we come to assertion of proposition .in this paper we propose a new type improved estimation procedure . the main difference from the well - known james stein estimate is that in the dominator in the corrected term we take the first power of the observation norm .this allow us to improve estimation with respect to mle begining with any dimension .moreover , we apply this procedure to the estimation problem for the non - gaussian ornstein uhlenbeck levy regression model .6.1 . proof of the lemma [ le.sec:gas.2 ] . from onehas using a repeated conditional expectation and since the random vector is distributed conditionally normal with a zero mean , then making the change of variable and applying the estimation we find 6.2 .the proof of conditions and on the matrix .the elements of matrix can be written as where and are the jump times of the poisson process , i.e. 
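the behaviour summarized by figure 1 can be checked empirically: at theta = 0 and unit variance the risk of the mle grows like p, while the risk of the norm-shrinkage estimator stays bounded. in the sketch below the constant c is chosen by a small grid search instead of the paper's closed-form value, so the numbers are only indicative.

```python
import numpy as np

rng = np.random.default_rng(1)

def risks_at_zero(p, n_rep=100_000):
    """Empirical quadratic risks at theta = 0, sigma = 1: MLE versus the
    norm-shrinkage estimator (1 - c/||y||) y with c picked by grid search."""
    y = rng.standard_normal((n_rep, p))
    risk_mle = np.mean(np.sum(y ** 2, axis=1))            # equals p up to Monte Carlo error
    norms = np.linalg.norm(y, axis=1, keepdims=True)
    risk_star = min(np.mean(np.sum(((1.0 - c / norms) * y) ** 2, axis=1))
                    for c in np.linspace(0.1, 3.0, 30))
    return risk_mle, risk_star

for p in (2, 5, 10, 50):
    print(p, risks_at_zero(p))
```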
notice that by one can the matrix present as where is non random matrix with elements and is a random matrix with elements .\end{gathered}\ ] ] this implies that therefore and we come to the assertion of lemma [ lem.sec:app.1 ] .[ lem.sec:app.2 ] let be defined by with .then a maximal eigenvalue of the matrix with elements defined by , satisfy the following inequality where and .one has where since the is a one - periodic orthonormal functions therefore the first integral is equal to and in view of the inequality for any from here denoting we obtain that hence lemma [ lem.sec:app.2 ] .w. james , c. stein , estimation with quadratic loss , in : proceedings of the fourth berkeley symposium on mathematics statistics and probability , vol . 1 , university of california press , berkeley , 1961 , pp .361 - 380 .
the paper considers the problem of estimating the p-dimensional mean vector of a multivariate conditionally normal distribution under quadratic loss. a problem of this type arises when estimating the parameters in a continuous-time regression model with a non-gaussian ornstein-uhlenbeck process driven by a mixture of a brownian motion and a compound poisson process. we propose a modification of the james-stein procedure of the form (1 - c/||y||) y, where y is an observation and c is a special positive constant. this estimate allows one to derive an explicit upper bound for the quadratic risk and has a significantly smaller risk than the usual maximum likelihood estimator for dimensions p >= 2. this procedure is applied to the problem of parametric estimation in a continuous-time conditionally gaussian regression model and to that of estimating the mean vector of a multivariate normal distribution when the covariance matrix is unknown and depends on some nuisance parameters. _ keywords _ : conditionally gaussian regression model; improved estimation; james-stein procedure; non-gaussian ornstein-uhlenbeck process. _ ams 1991 subject classifications _ : primary: 62c20; secondary: 62c15
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this is a late parrot !it s a stiff !bereft of life , it rests in peace ! if you had nt nailed him to the perch he would be pushing up the daisies !its metabolical processes are of interest only to historians !it s hopped the twig !it s shuffled off this mortal coil !it s rung down the curtain and joined the choir invisible !this is an ex - parrot ! _ monty python , `` pet shop '' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a mechanism for automatically generating multiple paraphrases of a given sentence would be of significant practical import for text - to - text generation systems .applications include summarization and rewriting : both could employ such a mechanism to produce candidate sentence paraphrases that other system components would filter for length , sophistication level , and so forth . 
not surprisingly , therefore , paraphrasing has been a focus of generation research for quite some time .one might initially suppose that sentence - level paraphrasing is simply the result of word - for - word or phrase - by - phrase substitution applied in a domain- and context - independent fashion .however , in studies of paraphrases across several domains , this was generally not the case .for instance , consider the following two sentences ( similar to examples found in ): 0.9 after the latest fed rate cut , stocks rose across the board .+ observe that `` fed '' ( federal reserve ) and `` greenspan '' are interchangeable only in the domain of us financial matters .also , note that one can not draw one - to - one correspondences between single words or phrases .for instance , nothing in the second sentence is really equivalent to `` across the board '' ; we can only say that the entire clauses `` stocks rose across the board '' and `` winners strongly outpaced losers '' are paraphrases .this evidence suggests two consequences : ( 1 ) we can not rely solely on generic domain - independent lexical resources for the task of paraphrasing , and ( 2 ) _ sentence - level _ paraphrasing is an important problem extending beyond that of paraphrasing smaller lexical units ._ our work presents a novel knowledge - lean algorithm that uses _ multiple - sequence alignment _ ( msa ) to _ learn _ to generate sentence - level paraphrases essentially from unannotated corpus data alone ._ in contrast to previous work using msa for generation , we need neither parallel data nor explicit information about sentence semantics .rather , we use two _ comparable corpora _ , in our case , collections of articles produced by two different newswire agencies about the same events .the use of related corpora is key : we can capture paraphrases that on the surface bear little resemblance but that , by the nature of the data , must be descriptions of the same information . note that we also acquire paraphrases from each of the individual corpora ; but the lack of clues as to sentence equivalence in single corpora means that we must be more conservative , only selecting as paraphrases items that are structurally very similar .our approach has three main steps .first , working on each of the comparable corpora separately , we compute _ lattices _ compact graph - based representations to find commonalities within ( automatically derived ) groups of structurally similar sentences .next , we identify pairs of lattices from the two different corpora that are paraphrases of each other ; the identification process checks whether the lattices take similar arguments . finally , given an input sentence to be paraphrased , we match it to a lattice and use a paraphrase from the matched lattice s mate to generate an output sentence .the key features of this approach are : * focus on paraphrase generation . * in contrast to earlier work , we not only extract paraphrasing rules , but also automatically determine which of the potentially relevant rules to apply to an input sentence and produce a revised form using them. * flexible paraphrase types . * previous approaches to paraphrase acquisition focused on certain rigid types of paraphrases , for instance , limiting the number of arguments .in contrast , our method is not limited to a set of _ a priori_-specified paraphrase types . * use of comparable corpora and minimal use of knowledge resources . 
*in addition to the advantages mentioned above , comparable corpora can be easily obtained for many domains , whereas previous approaches to paraphrase acquisition ( and the related problem of phrase - based machine translation ) required parallel corpora .we point out that one such approach , recently proposed by , also represents paraphrases by lattices , similarly to our method , although their lattices are derived using parse information .moreover , our algorithm does not employ knowledge resources such as parsers or lexical databases , which may not be available or appropriate for all domains a key issue since paraphrasing is typically domain - dependent .nonetheless , our algorithm achieves good performance .previous work on automated paraphrasing has considered different levels of paraphrase granularity . learning synonyms via distributional similarityhas been well - studied . and identify phrase - level paraphrases , while and acquire structural paraphrases encoded as templates .these latter are the most closely related to the sentence - level paraphrases we desire , and so we focus in this section on template - induction approaches .extract inference rules , which are related to paraphrases ( for example , x wrote y implies x is the author of y ) , to improve question answering .they assume that _ paths _ in dependency trees that take similar arguments ( leaves ) are close in meaning .however , only two - argument templates are considered .also use dependency - tree information to extract templates of a limited form ( in their case , determined by the underlying information extraction application ) .like us ( and unlike lin and pantel , who employ a single large corpus ) , they use articles written about the same event in different newspapers as data .our approach shares two characteristics with the two methods just described : pattern comparison by analysis of the patterns respective arguments , and use of non - parallel corpora as a data source .however , _ extraction _ methods are not easily extended to _ generation _ methods .one problem is that their templates often only match small fragments of a sentence .while this is appropriate for other applications , deciding whether to use a given template to generate a paraphrase requires information about the surrounding context provided by the entire sentence .[ [ overview ] ] overview + + + + + + + + we first sketch the algorithm s broad outlines .the subsequent subsections provide more detailed descriptions of the individual steps .the major goals of our algorithm are to learn : * recurring patterns in the data , such as x ( injured / wounded ) y people , z seriously , where the capital letters represent variables ; * pairings between such patternsthat represent paraphrases , for example , between the patternx ( injured / wounded ) y people , z of them seriously and the patterny were ( wounded / hurt ) by x , among them z were in serious condition .figure [ fig : arch ] illustrates the main stages of our approach . 
during training ,patterninduction is first applied independently to the two datasets making up a pair of comparable corpora .individual patternsare learned by applying _ multiple - sequence alignment _ to clustersof sentences describing approximately similar events ; these patternsare represented compactly by _ lattices _ ( see figure [ fig : lattice ] ) .we then check for latticesfrom the two different corpora that tend to take the same arguments ; these latticepairs are taken to be paraphrase patterns .once training is done , we can generate paraphrases as follows : given the sentence `` the surprisebombing injured twenty people , five of them seriously '' , we match it to the lattice x ( injured / wounded ) y people , z of them seriously which can be rewritten as y were ( wounded / hurt ) by x , among them z were in serious condition , and so by substituting arguments we can generate `` twenty were wounded by the surprisebombing , among them five were in serious condition '' or `` twenty were hurt by the surprisebombing , among them five were in serious condition '' .our first step is to cluster sentences into groups from which to learn useful patterns ; for the multiple - sequence techniques we will use , this means that the sentences within clustersshould describe similar events and have similar structure , as in the sentences of figure [ fig : cluster ] .this is accomplished by applying hierarchical complete - link clustering to the sentences using a similarity metric based on word n - gram overlap ( ) .the only subtlety is that we do not want mismatches on sentence details ( e.g. , the location of a raid ) causing sentences describing the same type of occurrence ( e.g. , a raid ) from being separated , as this might yield clusterstoo fragmented for effective learning to take place .( moreover , variability in the _ arguments _ of the sentences in a cluster is needed for our learning algorithm to succeed ; see below . )we therefore first replace all appearances of dates , numbers , and proper names with generic tokens .clusterswith fewer than ten sentences are discarded . in order to learn patterns ,we first compute a _ multiple - sequence alignment _ ( msa ) of the sentences in a given cluster .pairwise msa takes two sentences and a scoring function giving the similarity between words ; it determines the highest - scoring way to perform insertions , deletions , and changes to transform one of the sentences into the other .pairwise msa can be extended efficiently to multiple sequences via the iterative pairwise alignment , a polynomial - time method commonly used in computational biology .the results can be represented in an intuitive form via a word _ lattice _ ( see figure [ fig : lattice ] ) , which compactly represents ( n - gram ) structural similarities between the cluster s sentences . to transform latticesinto generation - suitable patternsrequires some understanding of the possible varieties of latticestructures .the most important part of the transformation is to determine which words are actually instances of arguments , and so should be replaced by _ slots _( representing variables ) .the key intuition is that because the sentences in the clusterrepresent the same _ type _ of event , such as a bombing , but generally refer to different _ instances _ of said event ( e.g. a bombing in jerusalem versus in gaza ) , areas of large variability in the latticeshould correspond to arguments . 
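as a rough illustration of the clustering step described above, the following self-contained sketch masks numbers and, as a crude proxy for dates and proper names, capitalized tokens, scores sentence pairs by word n-gram overlap, and groups them by a naive complete-link agglomeration. the similarity threshold, the bigram choice and the capitalization heuristic are placeholders and not the settings used in the experiments.

```python
import re
from itertools import combinations

def normalize(sentence):
    """crude stand-in for the preprocessing described above: numbers and
    capitalized tokens are replaced by generic tokens before scoring."""
    out = []
    for i, tok in enumerate(sentence.split()):
        if re.fullmatch(r"\d[\d.,/-]*", tok):
            out.append("NUM")
        elif i > 0 and tok[:1].isupper():
            out.append("NAME")
        else:
            out.append(tok.lower())
    return out

def ngram_overlap(tokens_a, tokens_b, n=2):
    """word n-gram overlap similarity (dice-style, in [0, 1])."""
    grams = lambda t: {tuple(t[i:i + n]) for i in range(len(t) - n + 1)}
    a, b = grams(tokens_a), grams(tokens_b)
    if not a or not b:
        return 0.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def complete_link_clusters(sentences, threshold=0.5, min_size=10):
    """naive hierarchical complete-link clustering: repeatedly merge the two
    clusters whose *least* similar cross pair is still above the threshold."""
    norm = [normalize(s) for s in sentences]
    clusters = [[i] for i in range(len(sentences))]

    def link(c1, c2):  # complete link = worst pairwise similarity
        return min(ngram_overlap(norm[i], norm[j]) for i in c1 for j in c2)

    while len(clusters) > 1:
        score, a, b = max((link(c1, c2), a, b) for (a, c1), (b, c2)
                          in combinations(enumerate(clusters), 2))
        if score < threshold:
            break
        clusters[a].extend(clusters[b])
        del clusters[b]
    # clusters that are too small are discarded, as in the procedure above
    return [[sentences[i] for i in c] for c in clusters if len(c) >= min_size]
```

the quadratic-in-pairs cost is acceptable here only because it is a sketch; an efficient implementation would cache the pairwise similarity matrix.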
to quantify this notion of variability ,we first formalize its opposite : commonality .we define _ backbone _ nodes as those shared by more than 50% of the cluster s sentences .the choice of 50% is not arbitrary it can be proved using the pigeonhole principle that our strict - majority criterion imposes a unique linear ordering of the backbone nodes that respects the word ordering within the sentences , thus guaranteeing at least a degree of well - formedness and avoiding the problem of how to order backbone nodes occurring on parallel `` branches '' of the lattice .once we have identified the backbonenodes as points of strong commonality , the next step is to identify the regions of variability ( or , in latticeterms , many parallel disjoint paths ) between them as ( probably ) corresponding to the arguments of the propositions that the sentences represent .for example , in the top of figure [ fig : lattice ] , the words southern city , `` settlement of name'',``coastal resort of name '' , etc . all correspond to the location of an event and could be replaced by a single slot .figure [ fig : lattice ] shows an example of a latticeand the derived slotted lattice ; we give the details of the slot - induction process in the appendix .now , if we were using a parallel corpus , we could employ sentence - alignment information to determine which lattices correspond to paraphrases . since we do not have this information , we essentially approximate the parallel - corpus situation by correlating information from descriptions of ( what we hope are ) the same event occurring in the two different corpora .our method works as follows .once latticesfor each corpus in our comparable - corpus pair are computed , we identify latticeparaphrase pairs , using the idea that paraphrases will tend to take the same values as arguments .more specifically , we take a pair of latticesfrom different corpora , look back at the sentence clusters from which the two lattices were derived , and compare the slot values of those cross - corpus sentence pairs that appear in articles written on the _ same day _ on the same topic ; we pair the latticesif the degree of matching is over a threshold tuned on held - out data .for example , suppose we have two ( linearized ) lattices bombed slot2 and slot3 was bombed by slot4 drawn from different corpora .if in the first lattice s sentence cluster we have the sentence `` the plane bombed the town '' , and in the second lattice s sentence cluster we have a sentence written on the same day reading `` the town was bombed by the plane '' , then the corresponding lattices may well be paraphrases , where slot1 is identified with slot4 and slot2 with slot3 . 
to compare the set of argument values of two lattices ,we simply count their word overlap , giving double weight to proper names and numbers and discarding auxiliaries ( we purposely ignore order because paraphrases can consist of word re - orderings ) .given a sentence to paraphrase , we first need to identify which , if any , of our previously - computed sentence clustersthe new sentence belongs most strongly to .we do this by finding the best alignment of the sentence to the existing lattices .if a matching latticeis found , we choose one of its comparable - corpus paraphrase latticesto rewrite the sentence , substituting in the argument values of the original sentence .this yields as many paraphrases as there are lattice paths .all evaluations involved judgments by native speakers of english who were not familiar with the paraphrasing systems under consideration .we implemented our system on a pair of comparable corpora consisting of articles produced between september 2000 and august 2002 by the agence france - presse ( afp ) and reuters news agencies . given our interest in domain - dependent paraphrasing , we limited attention to 9 mb of articles , collected using a tdt - style document clustering system , concerning individual acts of violence in israel and army raids on the palestinian territories . from this data( after removing 120 articles as a held - out parameter - training set ) , we extracted 43 slotted latticesfrom the afp corpus and 32 slotted latticesfrom the reuters corpus , and found 25 cross - corpus matching pairs ; since latticescontain multiple paths , these yielded 6,534 template pairs . before evaluating the quality of the rewritings produced by our templates and lattices , we first tested the quality of a random sample of just the template pairs . in our instructions to the judges , we defined two text units ( such as sentences or snippets ) to be paraphrases if one of them can generally be substituted for the other without great loss of information ( but not necessarily vice versa ) . given a pair of _ templates _ produced by a system , the judges marked them as paraphrases if for many instantiations of the templates variables , the resulting text units were paraphrases .( several labelled examples were provided to supply further guidance ) .to put the evaluation results into context , we wanted to compare against another system , but we are not aware of any previous work creating templates precisely for the task of generating paraphrases .instead , we made a good - faith effort to adapt the dirt system to the problem , selecting the 6,534 highest - scoring templates it produced when run on our datasets .( the system of was unsuitable for evaluation purposes because their paraphrase extraction component is too tightly coupled to the underlying information extraction system . )it is important to note some important caveats in making this comparison , the most prominent being that dirt was not designed with sentence - paraphrase generation in mind its templates are much shorter than ours , which may have affected the evaluators judgments and was originally implemented on much larger data sets.tide n : nn : n '' , which we transformed into `` y tide of x '' so that its output format would be the same as ours . ]the point of this evaluation is simply to determine whether another corpus - based paraphrase - focused approach could easily achieve the same performance level . 
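returning to the lattice-pairing step, the snippet below sketches the order-insensitive weighted word overlap used to compare two sets of slot values: numbers and proper names count double and auxiliaries are dropped, as described above. the auxiliary list, the capitalization/digit heuristic used to spot names and numbers, and the normalization to [0, 1] are assumptions made here for illustration; the pairing threshold stands in for the value tuned on held-out data.

```python
from collections import Counter

AUXILIARIES = {"be", "is", "are", "was", "were", "been", "being", "am",
               "have", "has", "had", "do", "does", "did", "will", "would",
               "can", "could", "may", "might", "shall", "should", "must"}

def weight(token):
    """double weight for numbers and (heuristically) proper names, zero for auxiliaries."""
    if token.lower() in AUXILIARIES:
        return 0
    if token[:1].isupper() or token[:1].isdigit():
        return 2
    return 1

def overlap_score(args_a, args_b):
    """order-insensitive weighted word overlap between two bags of slot fillers."""
    bag_a = Counter(t for phrase in args_a for t in phrase.split())
    bag_b = Counter(t for phrase in args_b for t in phrase.split())
    shared = bag_a & bag_b                        # multiset intersection
    score = sum(weight(t) * c for t, c in shared.items())
    norm = sum(weight(t) * c for t, c in (bag_a + bag_b).items()) or 1
    return 2.0 * score / norm                     # normalized to [0, 1]

# placeholder for the threshold tuned on held-out data
PAIRING_THRESHOLD = 0.3

print(overlap_score(["the plane", "the town"], ["the town", "the plane"]))  # 1.0
```

two lattices drawn from the two corpora would then be paired whenever the score of their same-day slot values exceeds the threshold.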
in brief, the dirt system works as follows .dependency trees are constructed from parsing a large corpus .leaf - to - leaf paths are extracted from these dependency trees , with the leaves serving as slots .then , pairs of paths in which the slots tend to be filled by similar values , where the similarity measure is based on the mutual information between the value and the slot , are deemed to be paraphrases .we randomly extracted 500 pairs from the two algorithms output sets .of these , 100 paraphrases ( 50 per system ) made up a `` common '' set evaluated by all four judges , allowing us to compute agreement rates ; in addition , each judge also evaluated another `` individual '' set , seen only by him- or herself , consisting of another 100 pairs ( 50 per system ) .the `` individual '' sets allowed us to broaden our sample s coverage of the corpus .the pairs were presented in random order , and the judges were not told which system produced a given pair .as figure [ msa - dirt - accuracy ] shows , our system outperforms the dirt system , with a consistent performance gap for all the judges of about 38% , although the absolute scores vary ( for example , judge 4 seems lenient ) .the judges assessment of correctness was fairly constant between the full 100-instance set and just the 50-instance common set alone . in terms of agreement , the kappa value ( measuring pairwise agreement discounting chance occurrences ) on the common set was 0.54 , which corresponds to moderate agreement .multiway agreement is depicted in figure [ msa - dirt - accuracy ] there , we see that in 86 of 100 cases , at least three of the judges gave the same correctness assessment , and in 60 cases all four judges concurred .finally , we evaluated the quality of the paraphrase sentences generated by our system , thus ( indirectly ) testing all the system components : pattern selection , paraphrase acquisition , and generation .we are not aware of another system generating sentence - level paraphrases .therefore , we used as a baseline a simple paraphrasing system that just replaces words with one of their randomly - chosen wordnet synonyms ( using the most frequent sense of the word that wordnet listed synonyms for ) .the number of substitutions was set proportional to the number of words our method replaced in the same sentence .the point of this comparison is to check whether simple synonym substitution yields results comparable to those of our algorithm . [ cols="<,<",options="header " , ] for this experiment , we randomly selected 20 afp articles about violence in the middle east published later than the articles in our training corpus .out of 484 sentences in this set , our system was able to paraphrase 59 ( 12.2% ) .( we chose parameters that optimized precision rather than recall on our small held - out set . 
)we found that after proper name substitution , only seven sentences in the test set appeared in the training set , which implies that latticesboost the generalization power of our method significantly : from seven to 59 sentences .interestingly , the coverage of the system varied significantly with article length .for the eight articles of ten or fewer sentences , we paraphrased 60.8% of the sentences per article on average , but for longer articles only 9.3% of the sentences per article on average were paraphrased .our analysis revealed that long articles tend to include large portions that are unique to the article , such as personal stories of the event participants , which explains why our algorithm had a lower paraphrasing rate for such articles .all 118 instances ( 59 per system ) were presented in random order to two judges , who were asked to indicate whether the meaning had been preserved .of the paraphrases generated by our system , the two evaluators deemed 81.4% and 78% , respectively , to be valid , whereas for the baseline system , the correctness results were 69.5% and 66.1% , respectively .agreement according to the kappa statistic was 0.6 .note that judging full sentences is inherently easier than judging templates , because template comparison requires considering a variety of possible slot values , while sentences are self - contained units .figure [ fig : wordnet ] shows two example sentences , one where our msa - based paraphrase was deemed correct by both judges , and one where both judges deemed the msa - generated paraphrase incorrect .examination of the results indicates that the two systems make essentially orthogonal types of errors .the baseline system s relatively poor performance supports our claim that whole - sentence paraphrasing is a hard task even when accurate word - level paraphrases are given .we presented an approach for generating sentence level paraphrases , a task not addressed previously .our method learns structurally similar patterns of expression from data and identifies paraphrasing pairs among them using a comparable corpus .a flexible pattern - matching procedure allows us to paraphrase an unseen sentence by matching it to one of the induced patterns .our approach generates both lexical and structural paraphrases .another contribution is the induction of msa lattices from non - parallel data .lattices have proven advantageous in a number of nlp contexts , but were usually produced from arallel data , which may not be readily available for many applications .we showed that word lattices can be induced from a type of corpus that can be easily obtained for many domains , broadening the applicability of this useful representation .in this appendix , we describe how we insert slots into latticesto form slotted lattices .recall that the backbone nodes in our latticesrepresent words appearing in many of the sentences from which the lattice was built . as mentioned above, the intuition is that areas of high variability between backbone nodes may correspond to arguments , or slots .but the key thing to note is that there are actually two different phenomena giving rise to multiple parallel paths : _ argument variability _, described above , and _synonym variability_. 
for example , figure [ fig : variability](b ) contains parallel paths corresponding to the synonyms `` injured '' and `` wounded '' .note that we want to _ remove _ argument variability so that we can generate paraphrases of sentences with arbitrary arguments ; but we want to _ preserve _ synonym variability in order to generate a variety of sentence rewritings . to distinguish these two situations , we analyze the _ split level _ of backbonenodes that begin regions with multiple paths .the basic intuition is that there is probably more variability associated with arguments than with synonymy : for example , as datasets increase , the number of locations mentioned rises faster than the number of synonyms appearing .we make use of a _ synonymy threshold _ ( set by held - out parameter - tuning to 30 ) , as follows .* if no more than % of all the edges out of a backbonenode lead to the same next node , we have high enough variability to warrant inserting a slot node .* otherwise , we incorporate reliable synonyms , identified only single - word synonyms , phrase - level synonyms can similarly be acquired by considering chains of nodes connecting backbone nodes . ] into the backbone structure by preserving all nodes that are reached by at least % of the sentences passing through the two neighboring backbonenodes .furthermore , all backbonenodes labelled with our special generic tokens are also replaced with slotnodes , since they , too , probably represent arguments ( we condense adjacent slotsinto one ) .nodes with in - degree lower than the synonymy threshold are removed under the assumption that they probably represent idiosyncrasies of individual sentences .see figure [ fig : variability ] for examples .
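a compact sketch of the slot decision just described is given below: the parallel paths between two neighboring backbone nodes are summarized as a map from word sequence to the number of sentences taking it, and the synonymy threshold decides between inserting a slot and keeping well-supported synonyms. the data representation is a simplification (the real procedure works on the lattice graph itself), and the in-degree pruning and the condensation of adjacent slots are omitted.

```python
GENERIC_TOKENS = {"NAME", "NUM", "DATE"}   # placeholders inserted during preprocessing
SYNONYMY_THRESHOLD = 0.30                  # the held-out-tuned value reported above (30%)

def collapse_region(paths):
    """decide what to do with the parallel paths between two neighboring
    backbone nodes.  `paths` maps each distinct word sequence (a tuple of
    tokens) to the number of sentences taking it; returns the string "SLOT"
    (argument variability) or the set of sequences kept as synonyms."""
    total = sum(paths.values())
    top_share = max(paths.values()) / total
    # high variability: no single continuation is frequent enough -> argument slot
    if top_share <= SYNONYMY_THRESHOLD:
        return "SLOT"
    # otherwise keep only the well-supported alternatives as synonyms
    kept = {words for words, count in paths.items()
            if count / total >= SYNONYMY_THRESHOLD}
    # paths made only of generic placeholder tokens are treated as arguments too
    # (a simplification of the rule for backbone nodes stated above)
    if any(set(words) <= GENERIC_TOKENS for words in kept):
        return "SLOT"
    return kept

# many distinct location phrases -> slot; two frequent synonyms -> kept
print(collapse_region({("southern", "city"): 2, ("settlement", "of", "NAME"): 2,
                       ("coastal", "resort", "of", "NAME"): 2, ("gaza",): 2}))
print(collapse_region({("injured",): 12, ("wounded",): 8}))
```

the first call reproduces the location example above and yields a slot, while the second keeps "injured" and "wounded" as interchangeable synonyms.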
we address the text - to - text generation problem of sentence - level paraphrasing , a phenomenon distinct from and more difficult than word- or phrase - level paraphrasing . our approach applies _ multiple - sequence alignment _ to sentences gathered from unannotated comparable corpora : it learns a set of paraphrasing patterns represented by _ word lattice _ pairs and automatically determines how to apply these patterns to rewrite new sentences . the results of our evaluation experiments show that the system derives accurate paraphrases , outperforming baseline systems .
high performance coronagraphs with small inner working angle ( iwa ) are unavoidably very sensitive to small pointing errors and other low order aberrations .this property is due to the fact that the wavefront of a source at small angular distance ( typically between 1 and 2 ) from the optical axis is `` similar '' ( in the linear algebra sense of the term ) to an on - axis wavefront with a small ( ) pointing error .if the coronagraph must `` transmit '' the former , it will also transmit a significant part of the latter , and therefore be extremely sensitive to pointing errors .this behavior is indeed verified by performance comparison between coronagraph concepts . for high performance coronagraphs , such as the phase - induced amplitude apodization ( piaa )coronagraph used as example in this paper , the coronagraph should ideally be designed to balance iwa against stellar angular diameter , which sets a fundamental limit on the achievable coronagraph performance .such coronagraphs are therefore pushed to become sensitive to pointing errors corresponding to the angular size of nearby stars , roughly 1 milliarcsecond ( mas ) , and are also highly sensitive to other low order wavefront errors such as focus and astigmatism .milliarcsecond - level pointing error can increase stellar leakage in the coronagraph to the point where a planet would be lost in photon noise .even smaller errors can create , if not independantly measured , a signal which is very similar to a planet s image .a robust and accurate measurement of low order aberrations ( especially tip - tilt errors , which are easily generated by telescope pointing errors and vibrations ) is therefore essential for high contrast coronagraphic observations at small angular separation .the science focal plane after the coronagraph is unfortunately `` blind '' to small levels of low order aberrations , which can only be seen when already too large to maintain high contrast in the coronagraphic science image .a better option is to monitor pointing errors by using starlight which would otherwise be rejected by the coronagraph .this scheme was successfully implemented on the lyot project coronagraph .alternatively , the measurement could be performed independently from the coronagraph optical train ( for example , the wavefront sensor in the adaptive optics system upstream of the coronagraph ) .we propose in this paper an improved solution to obtain accurate measurement of several low - order aberrations including pointing : the coronagraphic low - order wavefront sensor ( clowfs ) .the wavefront control requirements for a piaa coronagraph are first clearly defined in [ sec : requ ] .the clowfs principle is presented in [ sec : principle ] .wavefront reconstruction algorithms and clowfs sensitivity are discussed in [ sec : wfreconstr ] .the aberration sensitivity of a piaa coronagraph equipped with a clowfs is discussed in [ sec : aberrsens ] , and the results of a laboratory demonstration on a piaa coronagraph system are shown in [ sec : labexp ] .simulated science focal plane image in a piaac system with a 0.005 /d pointing offset .the piaa design adopted for this simulation has a 1.9 /d iwa . ]wavefront aberrations can be produced either before the piaa optics ( for example a bending of the telescope primary mirror , or a telescope pointing error ) or between the piaa optics and the focal plane mask . 
in a space - based telescope free of atmospheric turbulence , the strongest sources of aberration are likely to be telescope pointing errors and low - order aberrations due to structural deformations of the optical telescope assembly .the piaa coronagraph is especially sensitive to such low order aberrations if they are introduced prior to the piaa optics , as they will then scatter light in the most scientifically precious area of the science focal plane : the inner part of the field .as illustrated in figure [ fig : psf21 ] , a very small pointing error ( 0.5% of /d ) can be sufficient to create an artefact as bright as an earth - like planet .extreme sensitivity to pointing is unavoidable in small iwa coronagraphs , and , in the case of the piaa , is due to the fact that a `` remapped '' tip / tilt scatters light outside the focal plane mask .coronagraphs with larger iwa and better tolerance to pointing errors exist , and even within the piaa `` family '' of coronagraph , sensitivity to pointing errors can be balanced against iwa by changing the size of the focal plane mask and the pupil apodization profile .the piaa coronagraph is much less affected by aberrations after the beam shaping optics .for example , a small amount of tip / tilt in the post - piaa optics will simply move the psf on the focal plane mask ( which is most likely slightly oversized ) without introducing scattering at larger separations .in addition , low order aberrations introduced after the piaa optics will likely be much smaller in amplitude , thanks to the small size of optical elements between the piaa optics and the focal plane mask . for successful detection of a faint source ( assumed here to be an earth - like planet at contrast ) ,low - order aberrations must simultaneously : * be small enough to avoid loosing the planet signal within the scattered starlight s photon noise .for the piaa coronagraph design used in this work , a pre - piaa 0.005 /d pointing error is sufficient to scatter of the starlight into the science field , and this scattered light produces two wide `` arcs '' on either side of the optical axis , at a contrast level .although a piaa coronagraph could be designed with reduced sensitivity to pointing errors ( but with larger iwa ) , we assume here that the maximum allowable pointing error for a space piaa coronagraph mission aimed at direct imaging of `` earth - like '' planets is 0.005 /d ( corresponding to starlight leak peaking at contrast ) . on a 1.4-m diameter telescope in the visible, this corresponds to 0.4 mas , a value which is similar to the angular radius of a `` typical '' target ( a main - sequence star at 10 pc ) . on larger telescopes ,the angular radius of the star ( 0.5 mas for a sun - like star at 10 pc ) will drive the coronagraph design , which will therefore also end up with a 0.5 mas rms pointing error requirement . 
* be stable , or calibrated , to a fraction of the planet s expected flux .we assume in this paper that coronagraphic leaks due to low order aberrations must be calibrated to 10% of the expected planet s contribution .this second requirement is therefore much more severe .light scattered in the science focal plane scales approximately as the square of the aberration amplitude : pointing must be stable ( or calibrated ) to ( 0.13 mas on a 1.4-m telescope in the visible ) for a planet at contrast .fortunately , high accuracy measurement of low order aberrations by the clowfs scheme proposed in this paper can be used to reliably model the stellar leakage in the coronagraphic science focal plane .this model can then be numerically subtracted from the science image to reveal much fainter underlying sources down to the photon noise and detector noise limits . pointing errors larger than 0.0016 ( but smaller than 0.005 ) are acceptable , as long as they are measured to 0.0016 accuracy .a simplified optical layout for a clowfs system for a piaa coronagraph is shown in figure [ fig : lowfsprinciple ] .clowfs light is extracted by the focal plane mask located after the piaa optics . in the piaa coronagraph design ,the role of this focal plane is to selectively remove starlight , while transmitting the science field .the mask is therefore illuminated by a large number of photons , which are freely available , and , if properly used , allow highly sensitive measurement of low order aberrations . the central part of the focal plane mask used in the clowfs design is opaque : only a reflective annulus around this central part sends light to the clowfs optics .as shown in figure [ fig : lowfsprinciple ] we denote the radius of the inner opaque zone and the radius of the focal plane mask , which is fixed at the iwa of the coronagraph . for the baseline configuration adopted in this paper , with on the sky : the central 40% of the focal plane mask is opaque .since the pupil before the focal plane mask is strongly apodized by the piaa optics , only a small fraction of the total light is reflected by the reflective annulus covering the range ( see figure [ fig : fluxr1r2 ] ) .the clowfs detector acquires a defocused image of this annulus , at defocus distance waves ( in this paper , the defocus value is given as peak - to - valley in the post - piaa pupil plane ) .while this defocus value may seem large , most of the light is in the central part of the apodized pupil , and the `` effective '' defocus is about half to a third of the optical defocus listed in this work .the clowfs pupil is oversized by a factor 2 ( ) to include most of the light reflected by the clowfs focal plane mask .fraction of the total starlight collected by the telescope sent to the clowfs detector as a function of . ] the clowfs defocus ( the clowfs detector is not conjugated to the focal plane mask ) is necessary to measure focus , but is not necessary if only tip and tilt are measured . in a clowfs configuration where the detector is conjugated to the focal plane mask , a single clowfs image can be used to recover the amplitude of focus in the incoming beam , but not its sign ( in an optical system free of aberrations , focal plane images acquired inside and outside focus are identical ) .the clowfs defocus is therefore introduced to remove this sign ambiguity . 
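to make the pointing numbers quoted above concrete, the short computation below converts fractions of /d into milliarcseconds for a 1.4-m aperture; the 0.55 micron wavelength is an assumption consistent with the visible-light case discussed in the text.

```python
import math

D = 1.4                       # telescope diameter [m]
lam = 0.55e-6                 # assumed visible wavelength [m]
rad_to_mas = 180 / math.pi * 3600e3

lam_over_d_mas = lam / D * rad_to_mas
print(f"1 lambda/D      = {lam_over_d_mas:.1f} mas")            # ~81 mas
print(f"0.005 lambda/D  = {0.005 * lam_over_d_mas:.2f} mas")    # ~0.4 mas leak limit
print(f"0.0016 lambda/D = {0.0016 * lam_over_d_mas:.2f} mas")   # ~0.13 mas calibration
```

these values reproduce the 0.4 mas maximum pointing error and the 0.13 mas calibration accuracy quoted above.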
a mathematical proof of the quadratic ( rather than linear ) response of a `` in - focus '' clowfs to pupil focus aberration is provided in [ ssec : lin ] .while in an ideal system ( where photon noise would be the only source of noise ) the central opaque zone would reduce the clowfs performance , it is in practice extremely advantageous for two reasons , which are now described .without the opaque zone , the clowfs detector would be illuminated by a large number of photons and the signal to be extracted would be a tiny relative change in intensity , which would be challenging to measure in practice . to detect this signal , the detector calibration would need to be very accurate and detector saturation would be a serious issue . masking the central part of the psfgreatly reduces the total amount of light in the clowfs but only slightly reduces the amplitude of the signal produced by low order aberrations .for example , a small pointing error in the post - piaa pupil creates maximal signal where the psf surface brightness slope is the greatest .the very center of the psf , although it contains a lot of flux , contains very little signal .thanks to the central dark zone , microscopic changes in the coronagraphic wavefront produce macroscopic changes in the clowfs image , as shown in figure [ fig : lowfsims ] .images obtained by the clowfs for ( left ) and ( right ) for a defocus distance of . with , the images are significantly fainter but aberrations are more easily seen . in each of the 8 images shown , the inside and outside focus images of the clowfs mask are side - by - side . in the clowfs systemwe propose , only one of these two images would need to be acquired . ] the central dark zone of the focal plane mask allows the clowfs to be insensitive to small motions of the clowfs elements ( detector and off - axis parabola or lens to re - image the focal plane mask ) .if the focal plane mask were fully reflective , a tip - tilt error in the post - piaa pupil ( which should be corrected ) and a tip - tilt error in the clowfs optics ( which should not be corrected ) would look identical .this is due to the fact that the outer edge of the focal plane mask is dark , and the clowfs would therefore measure tip - tilt as a translation of the image on the clowfs detector .this sensitivity to small amount of tip - tilt in the clowfs optics would therefore require the clowfs optical elements positions to be accurately known and stable . with the dark zone in the center of the focal plane mask, however , the same tip - tilt is measured as a macroscopic change in the clowfs image shape ( see figure [ fig : lowfsims ] ) instead of a small translation .thanks to the dark zone , the clowfs tip - tilt measurement is accurately referenced to the focal plane mask .the geometry of the focal plane mask is driven by both the coronagraph architecture and the clowfs .the transmission of the mask must first satisfy the coronagraph s requirements . implementing a clowfsrequires : * a tilted focal plane mask to redirect stellar light to the clowfs imaging optics .if the tilt angle is large , or if the coronagraph requires a very circular focal plane mask , the focal plane mask could be made elliptical . 
* a reflective zone on the focal plane mask .this requirement should have no negative impact on the coronagraph s performance , as a reflective coating may be deposited on the illuminated side of the focal plane mask without affecting its transmission .the focal plane mask reflectivity defines the clowfs performance .an `` ideal '' focal plane mask for clowfs , as described in this paper , may be very challenging to manufacture : * it is very difficult to make the central part of the mask truly `` black '' , and some of the light in the central part of the psf will be reflected into the clowfs imaging optics .* the reflective annulus may not be uniformly reflective and could also introduce wavefront errors ( the reflective surface may not be very flat ) . * coatings , wether optimized to be black ( central part of the mask ) or reflective ( annulus ), are somewhat chromatic : the reflectivity map of the focal plane mask is wavelength - dependent .even with these imperfections , the clowfs will still produce a strong response for the aberrations to be measured ( tip , tilt , focus and their remapped equivalents ) .it may however be difficult to predict what there responses will be if the focal plane mask is poorly calibrated . as detailed in [ sec : wfreconstr ] , we therefore propose to first measure these linear responses by introducing aberrations in the coronagraph optics , and then use these responses to decompose the clows image in a linear sum of aberrations .this step requires no additional hardware , provided that there are actuators to correct for the aberrations measured by the clowfs .this is very similar to what is commonly done on adaptive optics systems , where a `` response matrix '' is first acquired by measuring the wfs signal when each actuator of the deformable mirror is moved .we show in this section that for small wavefront aberrations ( 1 rad ) , the clowfs image is a linear function of the wavefront aberration modes to be measured .this convenient property is due to : * the fact that the clowfs is not operating on a `` dark '' fringe : when the incoming wavefront is perfect , the clowfs images already contain some light . * the non - orthogonality between the aberration - free clowfs complex amplitude distribution in the clowfs detector array and the change introduced on this complex amplitude by the aberration modes to be measured .we denote and the 2-d coordinate respectively in the post - piaa pupil plane and in the clowfs detector plane .we denote the 2-d complex amplitude obtained by the clowfs for a perfect wavefront .the complex amplitude obtained in the clowfs detector plane is a linear function of the pupil plane complex amplitude ( virtually all optical systems are linear in complex amplitude ) .since by definition , for , we can therefore write where is a linear operator . the corresponding light intensity in the clowfs detector plane is + |m w(u)|^2\ ] ] which is linear function of as long as : , which helps to satisfy condition [ equ : lincond ] . 
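the inline symbols of this derivation were lost in the present copy; a possible reconstruction, with notation chosen here (u and v the pupil-plane and detector-plane coordinates, A_0(v) the detector-plane complex amplitude for a perfect wavefront, W(u) the small aberration term in the pupil, and M the linear operator mapping pupil-plane to detector-plane complex amplitudes), reads

\begin{align}
A(v) &= A_0(v) + (MW)(v), \\
I(v) &= |A(v)|^2 = |A_0(v)|^2
      + 2\,\mathrm{Re}\!\left[\,\overline{A_0(v)}\,(MW)(v)\right]
      + |(MW)(v)|^2 ,
\end{align}

which is linear in the aberration as long as the cross term does not vanish and dominates the quadratic term, i.e. |(MW)(v)| \ll |A_0(v)| wherever the two are not orthogonal; this is the reading of condition [equ:lincond] assumed here, and the fact that A_0 is non-zero (the clowfs is not operating on a dark fringe) helps satisfy it.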
if there is an aberration mode for which and are orthogonal ( = 0 $ ] for all values of ) , condition [ equ : lincond ] will not be satisfied even for small aberration levels , and the clowfs image will be a quadratic function of the wavefront aberration ( ) .although such a situation is `` unlikely '' because orthogonality would need to occur simultaneously on each pixel of the clowfs detector , it does occur for the focus aberration mode if the clowfs detector is conjugated to the focal plane mask ( no defocus in the clowfs re - imaging ) . in this special case , since the clowfs re - images the focal plane reflective annulus without defocus , we can look at the orthogonality between and directly on the focal plane mask .the complex amplitude is the fourier transform of the pupil plane complex amplitude and is purely real ( no imaginary part ) over the reflective annulus of the focal plane mask when no wavefront aberration is present . is the change in the complex amplitude over the focal plane annulus introduced by a focus aberration in the pupil plane , and is therefore the fourier transform of where is the radial coordinate in the pupil plane , is the amplitude profile in the exit pupil of the piaa coronagraph ( decreases with ) , and is the phase in the exit pupil of the piaa coronagraph ( is the focus phase term remapped radially by the piaa optics ) . the approximation in equation [ equ : puppha ] is valid because we are considering small aberrations ( ) . since both and are real functions , the fourier transform of is purely imaginary .we have therefore demonstrated that in this special configuration , the aberration - free clowfs complex amplitude distribution in the clowfs detector array and the change introduced on this complex amplitude by a focus aberration are perfectly orthogonal .this problem can be solved by introducing a defocus in the clowfs re - imaging optics , as proposed in this paper .this defocus term is equivalent to convolving the complex amplitude map in the annulus by a kernel which breaks the orthogonality .numerical simulations show that the clowfs s response is linear to the modes considered in this paper .this linearity is only valid for small amplitudes , as shown in figure [ fig : lintip ] for tilt .linearity of the clowfs response to a tilt aberration . with a linear wavefront reconstruction algorithm ,the measured tip signal ( solid line ) differs slightly from the true tip aberration ( dashed line ) .this non - linearity reaches about 20% at 0.2 /d . ]as shown in figure [ fig : lowfs8mif ] , the main steps of the clowfs closed loop are : * acquire a single frame with the clowfs detector array * compute the difference between this frame and the `` reference '' image obtained ( or computed ) with no aberrations . * decompose this difference as a linear sum of modal responses .these modal responses are either pre - computed or measured prior to starting the clowfs loop .* the coefficients of the linear decomposition described above are used to drive actuators to remove the aberrations measured by the clowfs the estimation of low - order aberrations therefore requires knowledge of the modal responses ( how aberrations will linearly modify the image acquired by the clowfs ) . 
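a minimal numerical sketch of this closed loop is given below, assuming the modal response images and the aberration-free reference image are available as flattened pixel vectors (for instance measured during a calibration phase, as described next); the random placeholder data, the number of modes and the loop gain are illustrative only.

```python
import numpy as np

def build_reconstructor(responses):
    """responses: (n_modes, n_pixels) array of linear CLOWFS modal responses."""
    return np.linalg.pinv(responses.T)       # least-squares modal decomposition

def clowfs_step(frame, reference, reconstructor, command, gain=0.5):
    """one loop iteration: estimate modal coefficients and update the actuators."""
    signal = frame.ravel() - reference.ravel()
    coeffs = reconstructor @ signal           # linear decomposition on the modes
    return command - gain * coeffs            # integrator-style correction

# example with random placeholder data (8 modes, 20x20-pixel CLOWFS frame)
rng = np.random.default_rng(0)
responses = rng.normal(size=(8, 400))
reference = rng.normal(size=400)
true_coeffs = rng.normal(size=8) * 1e-2
frame = reference + true_coeffs @ responses   # perfectly linear, noise-free case
R = build_reconstructor(responses)
print(clowfs_step(frame, reference, R, command=np.zeros(8), gain=1.0))
# with unit gain and no noise this returns -true_coeffs, the correcting command
```

in practice the gain and exposure time would be tuned against the temporal behavior of the aberrations and the photon noise, as discussed below.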
as shown in figure [ fig : lowfs8mif ] ( left ) , this information can consist of clowfs responses to a series of wavefront aberration modes , which are here zernike aberrations introduced either before ( m1 to m5 ) or after ( m6 to m8 ) the piaa optics .they can either be pre - computed by simulation or measured as a response to an aberration introduced in the optical train ( this latter method is more robust as it will accurately account for unknown fabrication errors in clowfs optical components ) .the wavefront control algorithm must avoid confusion which could arise from the fact that the same low order aberration will produce nearly the same clowfs signal ( although with a different amplitude ) wether it is introduced before or after the piaa optics : in the configuration adopted in this paper , responses to pre - piaa and post - piaa tip / tilt are 99.6% similar .although post - piaa aberrations are expected to be smaller / slower , they still need to be properly corrected .one must avoid compensating for a telescope pointing error ( pre - piaa tip - tilt ) by introducing a post - piaa tip - tilt , or compensating for a post - piaa tip - tilt aberration by a pre - piaa tip - tilt correction : both scenarios would create strong diffracted light at and beyond the iwa in the science focal plane even though the `` overall '' tip - tilt signal seen by the clowfs would be zero .the reconstruction algorithm shown in figure [ fig : lowfs8mif ] addresses this issue by creating `` differential '' tip - tilt and focus modes ( m6 , m7 , m8 ) obtained by subtraction of pre - piaa responses m1,m2 , m3 from post - piaa responses m6 , m7 , m8 .these differential modes track the difference between pre and post - piaa aberrations , and can only be measured slowly since the corresponding clowfs response is weak ( due to the high degree of similarity between modes m1 , m2 , m3 and m6 , m7 , m8 ) . since most tip - tilt /focus aberrations originate before the piaa optics , the proposed algorithm offloads all of the tip / tilt signal to telescope pointing corrections ( pre - piaa ) .differential aberrations are measured more slowly and off - loaded as post - piaa corrections . in the example given here , thanks the large gap between pre - piaa and post - piaa pointing control bandwidths ( khz vs. hz ) the differential pointing signal can be sent to a post - piaa tip / tilt corrector rather than a combination of pre - piaa and post - piaa tip - tilt .the accuracy of the clowfs loop is ultimately limited by the measurement accuracy ( assuming ideal actuators ) . for small wavefront excursions ,the linear model is a very good approximation and the measurement accuracy is driven by detector noise and photon noise . for optimal performance ,the clowfs exposure time and loop gain need to be carefully chosen to match the dynamic properties of the wavefront aberrations . in principle , in a very stable environment , the clowfs loop could correct very small errors by averaging a large number of measurements ( small loop gain ) .quantitative estimates will be given in [ sec : aberrsens ] assuming a photon - noise limited detector .the closed loop controller proposed in this paper is based on the linearity of the clowfs response . 
for large wavefront excursions ,the actual clowfs response can differ significantly from this linear model , and the closed loop may `` unlock '' .while we have not performed a detailed loop stability analysis , we expect the clowfs loop to stay locked as long as pointing excursions are within a few tenths of : * figure [ fig : lintip ] shows that the clowfs is still very close to the linear model at 0.2 , and , more importantly , the curve shown in this figure is smooth and monotonic . with a 0.2 pointing excursion ,the estimate obtained from our linear model will not be exact , but will be sufficiently good to bring back the pointing significantly closer to on - axis at the next iteration * as illustrated in figure [ fig : lowfsims ] , the clowfs recovers pointing errors by essentially looking for a brightness enhancement on one side of the re - imaged reflective ring and a corresponding decrease in surface brightness on the opposite side of the ring .as long as pointing errors are small enough for the peak of the stellar image to stay within the inner edge of the reflective ring , this scheme will correctly estimate the direction of the pointing error , and the amplitude of this error will be estimated with sufficient precision to reduce the pointing error after correction . since the inner edge of the reflectivemask is at least 0.5 , the pointing loop is expected to stay stable over this range .laboratory operation of the clowfs confirm this behavior , and the closed loop locks even if the initial pointing error is 0.5 .if the initial pointing error is too large for the linear clowfs loop to lock , a non - linear model of the clowfs may be used .even with a non - linear model , the clowfs will fail to measure pointing errors if the stellar psf misses the focal plane mask ( pointing error is as large or larger than the mask size ) .a separate coarse pointing sensor is then necessary to measure the pointing offset and then bring the stellar psf within the clowfs range .the coronagraph image may be used for this purpose , provided that detector saturation does not prevent pointing error measurement . in this section, we explore how the clowfs performance is affected by the choice of ( which sets the amount of light sent to the clowfs ) and the amount of introduced in the image of the focal plane mask . for each choice of the parameters and , the linear algorithm described in [ ssec : wfca ]is used here to measure the clowfs sensitivity to pointing ( modes m1 , m2 ) , focus ( mode m3 ) , astigmatism ( modes m4 , m5 ) , differential tip - tilt ( modes m6,m7 ) and differential focus ( mode m8 ) .the linear algorithm allows rapid evaluation of a large number of designs : 2-d sensitivity maps on a - plane can therefore be created , as shown in figure [ fig : map3d ] . under the ideal conditions used in this simulation ( perfect detector ) ,figure [ fig : map3d ] shows that all aberrations are best measured if is close to zero : ideally , the focal plane mask should be fully reflective .figure [ fig : map3d ] however shows that there is relatively little loss in sensitivity if is increased to up to . 
withonly 10% of the starlight reflected to the clowfs ( ) there is almost no loss in clowfs sensitivity to pointing errors .for the pointing error to double , must be increased to 0.44 , leaving only 0.87% of the starlight for the clowfs .if the same 0.87% were applied uniformly over the psf ( instead of masking the center of the psf ) , pointing errors would increase more than tenfold .a similar behavior is observed for all other modes shown in figure [ fig : map3d ] : for example , the focus measurement error is doubled for , leaving only 0.06% of the starlight for the clowfs .these results confirm the `` relative signal amplification '' effect previously claimed : the central dark part of the mask removes most of the clowfs flux but only mildly reduces the signal .the clowfs design adopted in this paper ( , = 20 rad pv ) uses only 1.8% of starlight , at the expense of almost doubling tip - tilt and focus measurement errors .the amount of has little effect on the clowfs sensitivity to pointing , unless it is sufficiently large ( rad ) to `` blend '' together light from opposite sides of the reflective focal plane ring .some is however essential to allow good focus estimation with the clowfs , and helps with astigmatism as well . for focus, there is however no gain beyond rad , at which point the loss in pointing sensitivity is still very small . for all pre - piaa low order aberrations , figure [ fig :map3d ] shows that the residual sensing errors with a clowfs are within a factor of the theoretically optimal sensitivity .for example , the clowfs pointing error is 1 rad per mode per photon , or 1.4 rad for tip and tilt combined .this corresponds to just under 1 /d pointing error for a single photon , even if the central opaque zone of the focal plane mask removes most of the starlight .similarly , focus and astigmatisms are recovered with 2 rad rms error each for a single photon .the clowfs therefore makes a very efficient use of a limited number of photons .due to the strong similarity between pre - piaa and post - piaa modes , the clowfs sensitivity to differential modes such as m6 , m7 or m8 , is much weaker . beyond the global trends outlined above , figure [fig : map3d ] also shows more subtle effects : diffraction effects produce oscillations of the sensitivity to focus and astigmatism in the - plane . computing 2d sensitivity maps such as the ones shown in figure [ fig : map3d ] is therefore essential for fine tuning the clowfs performance to low order wavefront control requirements .l c c required pointing calibration accuracy ( contrast ) & + maximum rms pointing excursion ( contrast ) & + required sampling time & 5 s & 38 + maximum allowed uncalibrated pointing drift rate & 0.026 & 3.4 arcsec / s + we consider here a 1.4-m diameter telescope observing a star in a 0.2 m wide band centered at .we assume a 50% system throughput , which corresponds to at the telescope entrance .as described in [ sec : requ ] , pre - piaa low order aberrations can affect coronagraphic contrast at small angles ( close to the iwa of the piaa coronagraph ) much more easily than post - piaa low order aberrations .we therefore only consider in this section pre - piaa pointing errors , which could be generated by a telescope / spacecraft pointing error .we adopt the requirement defined in [ sec : requ ] : pointing error must be measured to a 0.0016 1- accuracy . 
without clowfs ,a 0.0016 /d pointing error would have to be measured from the corresponding total coronagraphic leak in the science focal plane , equal to approximately .a 1- measurement of this leak can be achieved with 1 photon , provided that this leak is interferometrically combined with a much brighter and well known `` reference(s ) '' ( in 2-d , this technique is referred to a focal plane wavefront sensing , where a deformable mirror is used to mix coherent starlight with the scattered light halo ) .this `` reference(s ) '' is necessary to measure the sign of the aberration and also to bring the signal above the detector readout noise and/or incoherent background in the image . in this scheme ,detection of a faint companion can not be done at the same time as pointing measurement , and time must be shared between the two tasks ( we assume here that 50% of the time is spent for each task ) .a measurement of pointing error with a 0.0016 1- error would therefore require 5 seconds . as shown in figure [ fig : map3d ] , the tip sensitivity for the clowfs is 1.2 rad rms for one photon at the telescope entrance .the 0.0016 tip measurement accuracy ( equal to 0.0025 rad rms ) therefore requires photons , which can be gathered in 38 .the clowfs is therefore capable of measuring pointing errors about 130,000 times faster than the science focal plane .the pointing stability requirement is derived in table [ tab : perfsumm ] from both the required pointing calibration accuracy ( equal to 0.0016 in this example ) and the sampling time necessary to measure pointing errors to this level of accuracy .the ratio between these two quantities defines a maximum allowable pointing drift rate beyond which the stability / calibration requirement previously defined can not be met .as shown in table [ tab : perfsumm ] , measuring pointing errors with the clowfs is several orders of magnitude quicker than if only the light in the science focal plane were used .this measurement sensitivity gain yields a much more relaxed pointing drift rate requirement : with a clowfs as opposed to without .the clowfs camera requires a modest number of pixels : the signals shown in the left of figure [ fig : lowfs8mif ] contain little high spatial frequencies and can be accurately measured with approximately 10 pixels across one defocused spot images .a 20 by 20 pixel window ( 400 pixels ) is sufficient , and can be read rapidly ( khz ) with current technology .the clowfs measures pointing and other low order aberration by detecting changes in the defocused image it acquires .temporal changes in the detector response must therefore be small compared to the expected signal .the clowfs is not sensitive to static spatial variations in the detector response ( flat field ) , as the signal is extracted from a difference between two images acquired at different times .the effect of spatially uniform flat field variations can be removed by scaling of the images prior to this subtraction .a 0.0016 /d pointing offset corresponds to a 2% change in the surface brightness on the bright ring images by each frame of the clowfs : one side of the ring is 2% brighter while the opposite side is 2% fainter .a comparable variation in detector sensitivity between the two sides of the ring images is very unlikely in modern detectors ( visible ccds ) , even over the course of several hours .thanks to the large number of photons collected by the clowfs , a moderate amount of readout noise ( few photo - electron ) will still allow operation of the 
clowfs at high sampling rate .for example , at 10 khz sampling rate , the defocused image contains 10800 photons on a target . assuming that the `` ring '' occupies a 50 pixel surface area on the detector , photon noise ( 15 photo - electrons per pixel ) is likely to be larger than readout noise on modern visible detectors .the clowfs has been implemented in the piaa coronagraph testbed at the subaru telescope . as shown in fig .[ fig : labsetup ] , the clowfs is driving 5 piezo actuators to move the light source in x , y , z and move a post - piaa mirror in tip and tilt .schematic representation of the clowfs implementation for the piaa coronagraph testbed at subaru telescope .the clowfs is extracting light reflected by the focal plane mask and drives 5 actuators : pre - piaa tip / tilt / focus ( the light source is mounted on a 3 axis piezo stage ) and post - piaa tip / tilt . ]the clowfs focal plane mask used in this experiment is shown in fig .[ fig : labmask ] , and was manufactured by lithography techniques .the mask is not `` ideal '' : the central portion is not perfectly opaque , and reflects a few percent of the light .since most of the starlight falls on this central part of the mask , the clowfs frame contains a bright peak at the center of the defocused mask image .thanks to the defocus introduced in the clowfs , the fainter light reflected by the reflective annulus interferes with this central peak , and also forms fainter outer rings visible as shown in fig .[ fig : labplots ] . with a finite reflectivity of the central part of the focal plane mask, a more optimal design would be to reduce the size of the central `` dark '' area to increase the amount of light at the transition between the central dark spot and the reflective annulus .clowfs focal plane mask used in the piaa coronagraph laboratory testbed at subaru telescope ( fabricated by hta photomask ) .the 100 micron radius mask center is opaque ( low reflectivity ) , and is surrounded by a 100 micron wide highly reflective annulus .the science field , transmiting light to the science camera , extends from 200 micron to 550 micron radius . ]the linear scheme implemented for calibration and processing of the clowfs data is very insensitive to the details of the clowfs design , and the clowfs is able to measure simultaneously both pre and post - piaa tip / tilt with little cross - talk ( see fig . [fig : labplots ] , upper right ) . as shown in fig .[ fig : labplots ] , we have achieved closed loop pointing stability for both the light source position and post - piaa tip / tilt . in our laboratory testbed ,pre - piaa pointing errors and post - piaa tip / tilt are similar in amplitude , while in an actual system , we would expect pre - piaa pointing errors ( telescope pointing , primary and secondary mirrors tilts ) to be much larger and faster than post - piaa tip / tilt ( most likely due to slow thermal drifts on small optics ) . on our testbed, we have therefore operated the control loop with similar temporal bandwidth for both pre and post - piaa aberrations , unlike the control scheme proposed in fig .[ fig : lowfs8mif ] .the performance we have achieved in the laboratory is limited by both system stability ( our testbed is in air and includes 75 mm diameter optics separated by more than a meter ) and clowfs loop speed . 
in our experiment ,the clowfs sampling interval was limited to 25 s due to hardware limitations ( readout speed of the camera image and time necessary to transfer the image from the computer which controls the camera to a separate computer performing the clowfs image analysis ) . fora space coronagraphic mission , a better controlled environment and a faster readout camera ( 10 khz is reasonable for the small number of pixels needed ) would allow higher performance . despite these limitations ,our laboratory demonstration of the clowfs concept has exceeded both the rms pointing excursion and the pointing measurement accuracy required for achieving 1e10 contrast at 2 with a visible piaa coronagraph mission .the clowfs design presented in this paper can efficiently measure low order aberrations `` for free '' , as it uses light that would otherwise be discarded by the coronagraph .both the hardware configuration and software algorithms presented are easy to implement and their performance is robust against calibration errors , chromaticity , non - common path errors and small errors / aberrations in the optical components .the clowfs pointing measurement can also lead to improved astrometric accuracy for the position of faint companions , as the star position on the image is usually difficult to measure in coronagraphic images . in this paper , we have studied a clowfs design on a low - iwa piaa coronagraph . although coronagraphs with larger iwa can tolerate larger amount of low order aberrations ( see for example kuchner & traub 2002 ; kuchner et al . 2005 ) , they require these aberration to be measured ahead of the coronagraphed beam because they are `` blind '' to aberrations until they are large enough to produce significant coronagraph leaks. the clowfs would therefore be very useful to any high contrast coronagraph , and the optical design presented in this paper can readily be used on any coronagraph where a focal plane mask physically blocks starlight .the clowfs can also be applied to phase mask coronagraphs . for such coronagraphs( see for example roddier & roddier 1997 ; rouan et al .2000 ; palacios 2005 ) , where starlight is diffracted outside the pupil by the focal plane mask , a modified lyot stop is placed on the pupil plane to reflect starlight to the clowfs .in addition , using a pattern matching algorithm , the clowfs can estimate low - order wavefront aberrations accurately and quickly even in non - linear region .these new clowfs designs and their performances will be presented in an upcoming paper .belikov , r. , kasdin , n.j . , & vanderbei , r.j .2006 , , 652 , 833 digby , a.p .2006 , , 650 , 484 guyon , o. 2003 , , 404 , 379 guyon , o. , pluzhnik , e.a . ,galicher , r. , martinache , f. , ridgway , s.t . , & woodruff , r.a .2005 , , 622 , 744 guyon , o. , pluzhnik , e. a. , kuchner , m. j. , collins , b. , & ridgway , s. t. 2006 , , 167 , 81 kasdin , n.j . , vanderbei , r.j . ,spergel , d.n . , & littman , m.g .2003 , , 582 , 1147 kuchner , m.j . , & traub , w.a .2002 , , 570 , 900 kuchner , m.j . ,crepp , j. , & ge , j. 2005 , , 628 , 466 lloyd , j.p . , & sivaramakrishnan , a. 2005 , , 621 , 1153 oppenheimer , b. r. , et al .2004 , , 5490 , 433 palacios , d.m .2005 , , 5905 , 196 roddier , f. , & roddier , c. 1997 , , 109 , 815 rouan , d. , riaud , p. , boccaletti , a. , clnet , y. , & labeyrie , a. 2000 , , 112 , 1479 shaklan , s.b ., & green , j.j .2005 , , 628 , 474 sivaramakrishnan , a. , soummer , r. , sivaramakrishnan , a.v . ,lloyd , j.p . 
, oppenheimer , b.r . , & makidon , r.b .2005 , , 634 , 1416
high contrast coronagraphic imaging of the immediate surrounding of stars requires exquisite control of low - order wavefront aberrations , such as tip - tilt ( pointing ) and focus . we propose an accurate , efficient and easy to implement technique to measure such aberrations in coronagraphs which use a focal plane mask to block starlight . the coronagraphic low order wavefront sensor ( clowfs ) produces a defocused image of a reflective focal plane ring to measure low order aberrations . even for small levels of wavefront aberration , the proposed scheme produces large intensity signals which can be easily measured , and therefore does not require highly accurate calibration of either the detector or optical elements . the clowfs achieves nearly optimal sensitivity and is immune from non - common path errors . this technique is especially well suited for high performance low inner working angle ( iwa ) coronagraphs . on phase - induced amplitude apodization ( piaa ) type coronagraphs , it can unambiguously recover aberrations which originate from either side of the beam shaping introduced by the piaa optics . we show that the proposed clowfs can measure sub - milliarcsecond telescope pointing errors several orders of magnitude faster than would be possible in the coronagraphic science focal plane alone , and can also accurately calibrate residual coronagraphic leaks due to residual low order aberrations . we have demonstrated pointing stability in a laboratory demonstration of the clowfs on a piaa type coronagraph .
large data sets of crystalline structures are nowadays available in two major contexts . on one hand , databases of materials have been created containing structural information of both experimental and theoretical compounds from high - throughput calculations , which are the basis for data - mining techniques in materials discovery projects . on the other hand ,ab initio structure predictions can produce a huge number of new structures that have either not yet been found experimentally or are metastable . in both casesit is essential to quantify similarities and dissimilarities between structures in the data sets , requiring a configurational distance that satisfies the properties of a metric . databases frequently contain duplicates and insufficiently characterized structures which need to be identified and filtered . in experimental data ,the representation of identical structures as obtained from different experiments will always slightly differ due to noise in the measurements , such that the configurational distance is never exactly zero .noise is also present in theoretical calculations where a geometry relaxation is for instance stopped once a certain , possibly insufficient convergence threshold is reached . in ab initio structure predictionschemes it is typically necessary to maintain some structural diversity which can be quantified as a certain minimal configurational distance .all these examples clearly show the need for a metric that allows to measure configurational distances and local structures in a reliable and efficient way .crystalline structures are typically given in a dual representation .the first part specifies the cell and the second part the atomic positions within the cell .the former can for instance be given by the three lattice vectors , and , or by their lengths , and , and the intermediate angles , and . the atomic positions can either be specified by cartesian coordinates or the reduced coordinates with respect to the lattice vectors .however , such representations are not unique , since any choice of lattice points can serve as cell vectors of the same crystalline structure .unique and preferably standardized cell parameters are required for comparison and analysis of different crystals .algorithms to transform unit cells to a reduced form are frequently used in crystallography , such as the niggli - reduction which produces cells with shortest possible vectors ( ) .unfortunately , in the presence of noisy lattice vectors , cells can change discontinuously within the niggli - reduction algorithm .symmetry analysis and the corresponding classification in the 230 crystallographic space groups are another tool to compare crystal structures. however , the outcome of a symmetry analysis algorithm strongly depends on a tolerance parameter such that the introduction of some noise can change the resulting space group in a discontinuous manner . because of the above described problems it is difficult to quantify similarities based on dual representations . within the structure prediction community fingerprints that are not based on such a dual representation have been proposed .oganov et al . introduced element resolved radial distribution functions as a crystal fingerprint . for a crystal containing one element only a single function is obtained for the entire system .the difference between the radial distribution functions of two crystals is then taken as the configurational distance . 
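To make the radial-distribution-function approach concrete, here is a minimal sketch of an RDF-style fingerprint and the associated configurational distance for a single-element periodic crystal. The bin width, cutoff radius, per-atom normalization and the plain Euclidean difference are illustrative assumptions of this sketch; the fingerprint introduced by Oganov et al. uses a specific smoothed and normalized form.

```python
import numpy as np

def rdf_fingerprint(lattice, frac_coords, r_cut=6.0, nbins=60):
    """Histogram of interatomic distances up to r_cut for a periodic, single-element crystal."""
    lattice = np.asarray(lattice, float)              # rows are the lattice vectors
    cart = np.asarray(frac_coords, float) @ lattice
    shifts = [s @ lattice for s in np.array(np.meshgrid(*[range(-2, 3)] * 3)).T.reshape(-1, 3)]
    hist = np.zeros(nbins)
    for ri in cart:
        for rj in cart:
            for s in shifts:
                d = np.linalg.norm(rj + s - ri)
                if 1e-8 < d < r_cut:
                    hist[int(d / r_cut * nbins)] += 1.0
    return hist / len(cart)                            # per-atom normalization (an assumption)

def rdf_distance(fp_a, fp_b):
    return np.linalg.norm(fp_a - fp_b)

# toy usage: simple cubic vs. a slightly distorted simple cubic cell (one atom per cell)
a = 3.0 * np.eye(3)
b = 3.0 * np.diag([1.0, 1.0, 1.02])
print(rdf_distance(rdf_fingerprint(a, [[0, 0, 0]]), rdf_fingerprint(b, [[0, 0, 0]])))
```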
by definitionthe radial distribution function contains only radial information , but no information about the angular distribution of the atoms .such angular information has been added in the bond characterization matrix ( bcm ) fingerprint . in this fingerprint spherical harmonic and exponential functionsare used to set up modified bond - orientational order metrics of the entire configuration .the distance between two configurations can be measured by the euclidean distance between their bcms .atomic environment descriptors are also needed in the context of machine learning schemes for force fields , bonding pattern recognition , or to compare vacancy , interstitial and intercalation sites .these descriptors could also be used to measure similarities between structures .even though they have never been used in this context we will present a comparison with such a descriptor . when humans decide by visual inspection whether two structures are similar they proceed typically in a different way .they try to find matching atoms which have the same structural environment .if all the atoms in one structure can be matched with the atoms of the other structure , the two structures are considered to be identical .such a matching approach based on the hungarian algorithm has already turned out to be useful for the distinction of clusters . in this paperwe will present a fingerprint for crystalline structures which is based on such a matching approach .the environment of each atom is described by an atomic fingerprint which is calculated in real space for an infinite crystal and represents some kind of environmental scattering properties observed from the central atom .therefore , all the ambiguities of a dual representation do not enter into the fingerprint , allowing an efficient and precise comparison of structures .recently we have proposed an configurational fingerprint for clusters . in this approachan overlap matrix is calculated for an atom centered gaussian basis set . the vector formed by the eigenvalues of thismatrix forms a global fingerprint that characterizes the entire structure .the euclidian norm of the difference vector between two structures is the configurational distance between them and satisfies the properties of a metric .since there is no unique representation of a crystal by a group of atoms ( e.g. the atoms in some unit cell ) we will use atomic fingerprints instead of global fingerprints in the crystalline case .however , this atomic fingerprint is closely related to our global fingerprint for non - periodic systems . for each atom in a crystal located at we obtain a cluster of atoms by considering only those contained in a sphere centered at . for this clusterwe calculate the overlap matrix elements as described in reference for a non - periodic system , i.e we put on each atom one or several gaussian type orbitals and calculate the resulting overlap integral .the orbitals are indexed by the letters and and the index gives the index of the atom on which the gaussian is centered , i.e. in this first step , the amplitudes of the gaussians are chosen such that the gaussians are normalized to one . 
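A minimal sketch of the overlap-matrix construction described above, restricted to normalized s-type Gaussians: one Gaussian is placed on every atom of the cluster cut out around the central atom, the two-centre overlap integrals are evaluated in closed form, and the sorted eigenvalues serve as the atomic fingerprint. The choice of Gaussian exponent (tied here to an assumed covalent radius) and the use of unit amplitudes (the cutoff weighting is only introduced in the next step) are assumptions of this sketch.

```python
import numpy as np

def overlap_matrix(positions, exponents):
    """Overlap matrix of normalized s-type Gaussians exp(-a |r - R_i|^2) centred on the atoms."""
    R = np.asarray(positions, float)
    a = np.asarray(exponents, float)
    n = len(R)
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            p = a[i] + a[j]
            d2 = np.sum((R[i] - R[j]) ** 2)
            S[i, j] = (2.0 * np.sqrt(a[i] * a[j]) / p) ** 1.5 * np.exp(-a[i] * a[j] / p * d2)
    return S

def atomic_fingerprint(positions, exponents):
    """Sorted eigenvalues of the overlap matrix, largest first."""
    return np.sort(np.linalg.eigvalsh(overlap_matrix(positions, exponents)))[::-1]

# toy usage: a 5-atom cluster with exponent 1/(2 r_cov^2), r_cov = 1.1 A (an assumed width)
pos = [[0, 0, 0], [1.5, 0, 0], [0, 1.5, 0], [0, 0, 1.5], [1.0, 1.0, 1.0]]
print(atomic_fingerprint(pos, [1.0 / (2 * 1.1 ** 2)] * 5))
```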
to avoid that the eigenvalues have discontinuities when an atom enters into or leaves the sphere we construct in a second step another matrix such that the cutoff function smoothly goes to zero on the surface of the sphere with radius in the limit where tends to infinity the cutoff function converges to a gaussian of width .the characteristic length scale is typically chosen to be the sum of the two largest covalent radii in the system .the value determines how many derivatives of the cutoff function are continuous on the surface of the sphere , and was used in the following .one can consider the modified matrix to be the overlap matrix of the cluster where the amplitude of the gaussian at atom is determined by . in this wayatoms close to the surface of the sphere give rise to very small eigenvalues of and are thus weighted less than the atoms closer to the center .the eigenvalues of this matrix are sorted in descending order and form the atomic fingerprint vector .since we can not predict exactly how many atoms will be in the sphere we estimate a maximum length for the atomic fingerprint vector .if the number of atoms is too small to generate enough eigenvalues to fill up the entire vector , the entries at the end of the fingerprint vector are filled up with zeros .this also guarantees that the fingerprint is a continuous function with respect to the motion of the atoms when atoms might enter or leave the sphere . if an atom enters into the sphere some zeros towards the end of the fingerprint vector are transformed in a continuous way into some very small entries which only contribute little to the overall fingerprint .the euclidean norm measures the dissimilarity between the atomic environments of atoms and . the atomic fingerprints and of all the atoms in two crystalline configurations and can now be used to define a configurational distance between the two crystals : where is a permutation function which matches a certain atom in crystal with atom in crystal . the optimal permutation function which minimizes be found with the hungarian algorithm in polynomial time .if the two crystals and are identical the hungarian algorithm will in this way assign corresponding atoms to each other .the hungarian algorithm needs as its input only the cost matrix given by in the following it will be shown that satisfies the properties of a metric , namely * positiveness : * symmetry : * coincidence axiom : if and only if * triangle inequality : . from the definition ( eq . [ def ] )it is obvious that the positiveness and symmetry conditions are fulfilled .the coincidence theorem is satisfied if the individual atomic fingerprints are unique , i.e if there are not two different atomic environments that give rise to identical atomic fingerprints . in our work on fingerprints for clusterswe have shown that the fingerprints can be considered to be unique if they have a length larger or equal to 3 per atom .the triangle inequality can be established in this way : where , and are assumed to be the permutations that minimize respectively the euclidean vector norms associated to , and .since the -centered spheres contain typically about 50 atoms , an atomic fingerprint has at least length 50 if only -type gaussian orbitals or length 200 if both and orbitals are used . 
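The matching step can be sketched directly with the Hungarian algorithm as implemented in scipy. Given the per-atom fingerprint vectors of two crystals with the same number of atoms per cell, the cost matrix below uses squared Euclidean fingerprint distances so that the optimal assignment minimizes a root-sum-square configurational distance; the exact cost definition in the equations above may differ in normalization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def configurational_distance(fp_a, fp_b):
    """Permutation-optimal distance between two sets of atomic fingerprints.

    fp_a, fp_b : arrays of shape (n_atoms, fp_length), same shape for both crystals.
    """
    fp_a, fp_b = np.asarray(fp_a, float), np.asarray(fp_b, float)
    # cost[k, l] = squared Euclidean distance between atomic fingerprint k of A and l of B
    cost = np.sum((fp_a[:, None, :] - fp_b[None, :, :]) ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return np.sqrt(cost[rows, cols].sum())

# toy usage: identical structures up to a relabelling of the atoms give distance ~0
rng = np.random.default_rng(0)
fps = rng.random((8, 50))
print(configurational_distance(fps, fps[rng.permutation(8)]))   # ~0
print(configurational_distance(fps, rng.random((8, 50))))       # > 0
```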
since a configuration is characterized by the ensemble of all the atomic fingerprints of all the atoms in the cell , the amount of data needed to characterize a structure is quite large even though it is certainly manageable for crystals with a small number of atoms per unit cell .storage requirements might however become too high in certain cases such as large molecular crystals .we will , therefore , introduce contraction schemes that allow to considerably reduce the amount of data necessary to characterize a crystalline structure .two such schemes will briefly be discussed below .let us introduce a function that designates a certain property of the gaussian orbital and encodes it in form of a contiguous integer index . in case of a multicomponent crystalit can indicate on which kind of chemical element the gaussians are centered and whether the orbital is of or type .the principal vector is thus chopped into pieces whose elements all carry the same value . in the following presentation of numerical resultswe have always considered the central atom to be special , independent of its true chemical type .having atomic species in the unit cell and using atomic gaussian orbitals with a maximum angular momentum , runs from 1 to .now we can construct a contracted matrix together with its metric tensor where is the principal vector of the matrix of eq . [ cutoff ] .the eigenvalues of the generalized eigenvalue problem form again an atomic fingerprint of length which is much shorter than the non - contracted fingerprint .the fingerprints described so far can in principle also be used for molecular crystals .however , the amount of data needed to characterize such crystals can be quite large if the molecules forming the crystal contain many atoms . by creating molecular orbitals in analogy with standard methods in electronic structure calculations the required amount of datacan be considerably reduced .the eigenvalues arising from the overlap matrix in this molecular basis set will then form a fingerprint for the molecular crystal .the molecular orbitals can be obtained in the following way : for each molecule in our unit cell we cut out a cluster of molecules within a sphere of a certain radius . for each molecule in this sphere we set up the overlap matrix by putting gaussian type orbitals on all its constituent atoms .then we calculate for this matrix the eigenvalues and eigenvectors .the principal vectors belonging to several of the largest eigenvalues are subsequently used for the contraction : no metric tensor is required since the set of vectors used for the contraction is orthogonal .the molecular orbitals have characteristic patterns , such that the orbital corresponding to the first principal vector has no nodes , while the orbitals of the following principal vectors have increasing number of nodes .they are therefore similar to the atomic orbitals of , and higher angular momentum character , which were used for the fingerprints in the ordinary crystals . in fig .[ orbitals ] these orbitals are shown for the case of the paracetamol molecule . by multiplying with some cutoff function as in eq .[ cutoff ] we can then obtain molecule centered overlap matrices in this molecular basis which is free of discontinuities with respect to the motion of the atoms . 
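One possible reading of the contraction scheme above is sketched below: the principal eigenvector of the cutoff-weighted overlap matrix is split into pieces according to the property index, these pieces serve as contraction vectors, and the resulting contracted matrix and metric tensor define a small generalized eigenvalue problem whose eigenvalues form the short fingerprint. The restriction of the principal vector to each property group is my interpretation of the text, not a verbatim reproduction of the cited equations.

```python
import numpy as np
from scipy.linalg import eigh

def contracted_fingerprint(S, prop):
    """Short fingerprint from a generalized eigenvalue problem in a contracted basis.

    S    : (n, n) cutoff-weighted overlap matrix of the atom-centred Gaussians
    prop : length-n integer array; orbitals with the same value are contracted together
    """
    S = np.asarray(S, float)
    prop = np.asarray(prop)
    _, vecs = np.linalg.eigh(S)
    v1 = vecs[:, -1]                                  # principal eigenvector of S
    groups = np.unique(prop)
    # one contraction vector per property index: the principal vector restricted to that group
    # (assumed construction; every group must carry nonzero weight for the metric to be regular)
    W = np.stack([np.where(prop == g, v1, 0.0) for g in groups], axis=1)
    T = W.T @ S @ W                                   # contracted matrix
    G = W.T @ W                                       # metric tensor of the contraction vectors
    return np.sort(eigh(T, G, eigvals_only=True))[::-1]

# toy usage: a symmetric positive definite "overlap" with two orbital groups
rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
S = M @ M.T / 6 + np.eye(6)
print(contracted_fingerprint(S, [0, 0, 0, 1, 1, 1]))
```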
in the molecular casethe value of the cutoff function depends on some short range pseudo - interaction between the central and the surrounding molecules .this interaction between the central molecule and another molecule is given by where the sum over runs over all the atoms in the central molecule and the sum over over all the atoms in the surrounding molecule . is the distance between the atoms and and is the sum of the van der waals radii of the two atoms .the interaction is taken to vanish beyond its first zero .because of the short range of the interaction , molecules sharing a large surface will be coupled strongly .the analytical form of the cutoff function is identical to the one for the atomic case ( eq . [ fcut ] ) .however , since a cartesian distance between molecules is ill defined , the argument in eq .[ fcut ] is modified .the scaled distance between the atoms is replaced by the normalized interaction between the molecules the eigenvalues of this final overlap matrix form now a fingerprint describing the environment of this molecule with respect to the other molecules . to compare two structures this procedureis done for all molecules contained in the corresponding unit cell .a configurational distance is calculated then as in eq .[ def ] by using the hungarian algorithm .structural data found in various material databases is frequently obtained from measurements at different temperatures which results in thermal expansion .similarly , measurements at different pressures or low quality x - ray diffraction patterns can lead to slight cell distortions .obviously our fingerprint distances among such expanded or distorted but otherwise identical structures are different from zero . for these reasonswe have introduced a scheme where the six degrees of freedom associated to the cell are optimized while keeping the reduced atomic coordinates fixed such as to obtain the smallest possible distance to a reference configuration .the gradient of our fingerprint distance with respect to the lattice vectors can be calculated analytically using the hellmann feynman theorem .an application of the lattice optimization scheme was applied to a subset of zro structures taken from the open quantum materials database ( oqmd ) , as will be discussed in further detail later in the following section .fig . [ big ] shows all possible pairwise configurational distances obtained with several fingerprints for various data sets .different fingerprints are plotted along the x and y axis .lfp stands for the uncontracted long fingerprint and in square parenthesis it is indicated whether only or both and orbitals were used to set up the overlap matrix , sfp[ ] stands for the short contracted fingerprint with orbitals only where the properties used for the contraction are central atom and the element type of the neighboring atoms in the sphere . 
for materials that have only one type of element ( si in our case ) the atomic fingerprint has only length two and the coincidence theorem is not satisfied .even though there are hyperplanes in the configurational space where different configurations have identical fingerprints , it is very unlikely that different local minima lie on such hyperplanes and the fingerprint can therefore nevertheless well distinguish between identical and distinct structures .if both and orbitals are used ( sfp[ ) the atomic fingerprint has at least length 4 and no problem with the coincidence theorem arise .in addition we also show the configurational distances arising from the oganov and bcm fingerprints as well as from a fingerprint based on the amplitudes of symmetry functions .all our data sets contain both the global minimum ( geometric ground state ) as well as local minima ( metastable ) structures , obtained from minima hopping runs .energies and forces were calculated with the dftb+ method for sic and the molecular crystals , and the lenosky tight - binding scheme was used for si . for the cspbi perovskite and the transparent conductive oxide zn plane wave density functional theory ( dft ) calculations were used as implemented in the quantum espresso code .the first test set consists of clathrate like structures of low density silicon allotropes .low density silicon gives rise to a larger number of low energy crystalline structures than silicon at densities of diamond silicon and thus poses an ideal benchmark system .in the first line of the figure we show the results of a relatively sloppy local geometry optimization , where the relaxation is stopped once the forces are smaller than 5.e-2 ev / .gaps separating identical from distinct structures are hardly visible for all fingerprints .once a very accurate geometry optimization with a force threshold of 5.e-3 ev / is performed , gaps become visible for all the fingerprints . the second data set is silicon carbide , a material well known for its large number of polytypes .our fingerprint gives rise to a small gap whereas the configurational distances based on all other fingerprints do not show any gap at all .the opening of a gap can again be observed once the geometry optimization is done with high accuracy . for this caseall fingerprints result in a gap , but like for all test sets it is the least pronounced for the bcm fingerprint .both the oganov and bcm fingerprints are global ones such that information is lost in the averaging process of these fingerprints as the system gets larger .therefore , it is not surprising that the gap again disappears even for the high quality geometry optimization once one goes to large cells .the next two test sets consist of an oxide material and a perovskite with their characteristic building blocks of octahedra and tetrahedra which can be arranged in a very large number of different ways .all our fingerprints give rise to clear gaps separating identical from distinct structures .the oganov fingerprint also gives rise to clear gaps whereas the bcm fingerprint only weakly indicates some gap .the behler fingerprint gives a well pronounced gap for zn but only a blurred gap for cspbi .the last theoretical test system is a platinum surface . in this casethe energies were calculated with the morse potential .the geometry optimization were done with high accuracy and therefore a big gap is visible in all cases . 
fig .[ energy ] shows the correlation between the energy difference and the fingerprint distance for all the test cases of fig .[ big ] . except for the very large 256 atoms system there exists always a clear energy gap if the geometry optimization was done with high accuracy .even though there is of course the possibility of nearly degenerate structures , this seems to happen rarely in practice and energy is thus a rather good and simple descriptor for small unit cells . to test our molecular fingerprint ,two test systems were employed , namely crystalline formaldhyde and paracetamol .the formaldehyde system comprised 240 structures with 8 molecules per cell and the paracetamol system 300 structures with 4 molecules per cell .the two top panels of fig . [ molecular ] show the molecular fingerprint distance versus the energy difference of different structures of paracetamol and formaldehyde , respectively .the two bottom panels show the correlation of the standard fingerprint against the molecular fingerprint for both systems .the existence of a gap in the pairwise distance distributions clearly indicates that identical and distinct structures can be identified by both fingerprints .however , the molecular fingerprint vector is considerably shorter because only six principal vectors were used ( shown in fig .[ orbitals ] ) . since six is the number of degrees of freedom of a rigid rotator it is expected that this fingerprint is long enough to satisfy the coincidence theorem .top panels : correlation between the energy difference and the molecular fingerprint distance ( mfp ) for formaldehyde ( a ) and paracetamol ( b ) .bottom panels : correlation between molecular fingerprint distance and standard fingerprint distance ( short contracted fingerprint with s orbitals only , sfp[s ] ) for formaldehyde ( c ) and paracetamol ( d ) . ] the nodal character of the first six principal vectors for the paracetamol molecule . the atoms are colored according to the sign of the elements of the first six principal vectors . a systematic colour pattern can be observed .the first principal eigenvector never changes sign and has therefore no nodes ( a ) .higher principal vectors exhibit more and more nodes ( b - f ) . ]next we applied our fingerprint to zro structures contained in the oqmd .115 different entries were available at this composition .the structures were either based on experimental data retrieved from the inorganic crystal structure database ( icsd ) or on binary structural prototypes .when the oqmd was initially created , duplicate entries were identified with the structure comparison algorithm as implemented in the materials interface ( mint ) software package which employs a 6-level test that includes cell reduction as well as an analysis of the lattice symmetry .structures classified as identical to an existing entry in oqmd were mapped to that entry without performing a structural relaxation .therefore , the structural data set contains both dft optimized and experimental structures , resulting in noise on the atomic and cell coordinates arising from the numerical calculations as well as from the different experiments and thermal effects . 
in fig .[ zro2]a we show the ordinary and the lattice vector optimized fingerprint distances for all 115 structures from the database .we can see that the fingerprint distance can be reduced down to about 1.e-7 for many structures .for some of them the initial fingerprint distances were as large as 0.1 .this allows to detect some identical structures whose initial large fingerprint distance was only due to thermal expansion .however , even with lattice vector optimization it was not possible to decide for the whole data set in an inambiguous way which structures are identical and which were not .therefore , local geometry optimizations were performed at the dft level for all structures using the vasp code .a plane wave cutoff energy of 520 ev was used together with a dense -point mesh .both the atomic and cell variables were relaxed until the maximal force component was less than 2.e-3 ev / and the stress below 1.e-2 gpa .panel ( b ) of fig .[ zro2 ] shows the dft energy differences of the relaxed structures against the fingerprint distances , showing a clear gap that allows to distinguish between identical and different structures .applying the lattice vector optimization scheme on these relaxed structures was not able to further lower the fingerprint distances of identical structures .the coloring in fig .[ zro2 ] indicates how the two structures belonging to a fingerprint distance were classified by mint . assuming that there are no different structures with degenerate dft energies, one can conclude that mint was not able to extract from the non - relaxed data set the information whether structures are identical or not and has erroneously assigned numerous identical structures as distinct , and vice versa to a lesser extent .since both oganov and bcm methods are global fingerprints that discard crucial information , they can fail to describe structural differences , a problem that becomes especially apparent when considering defect structures in complex materials . 
as an example, a supercell was constructed of the cubic perovskite structure of laalo .half of the al atoms on the b - sites were replaced by mn .then , single oxygen vacancies were introduced on symmetrically inequivalent x - sites .obviously , the structural symmetry was reduced from the initial space group of laalo to the orthorhombic space group of the supercell la(al , mn)o , and the oxygen vacancies resulted in structures with and symmetry .both mint and our fingerprint confirm that the structures are clearly different , whereas the oganov and bcm fingerprint erronously classify both structures as identical .atomic fingerprints that describe the scattering properties as obtained from an overlap matrix are well suited to characterize atomic environments .an ensemble of atomic fingerprints forms a global fingerprint that allows to identify crystalline structures and to define configurational distances satisfying the properties of a metric .the widely used oganov and bcm fingerprints do not have these properties and do also in practice not allow a reliable way to distinguish identical from distinct structures .symmetry function based fingerprints are of similar quality as our scattering fingerprints .however , they are much more costly to calculate .both fingerprints have a cubic scaling with respect to the number of atoms within the cutoff range , but our prefactor of the matrix diagonalization is much smaller then the prefactor for the 3-body terms required for the calculation of the symmetry functions .in contrast to ` true'`false ' schemes such as employed in mint which rely on a threshold and affirm that two structures are either identical or distinct , our fingerprint gives a distance between configurations .the appearance of a gap in the distance distribution indicates that a reliable assignment of identical and distinct structure can be performed .in addition , strong reductions in the fingerprint distances upon lattice vector optimization can detect and eliminate thermal noise on the data set , rendering our fingerprint ideal to scan for duplicates in large structural databases .our scheme can easily be extended to molecular crystals by introducing quantities that are analogous to molecular orbitals .furthermore , the new fingerprint can be used to accurately explore local environments to create atomic and structural attributes for machine learning techniques . 
in summary , we have demonstrated that this approach allows crystalline structures to be characterized by rather short fingerprint vectors and allows identical structures to be distinguished from distinct ones more reliably than with previously proposed methods . we thank vinay hegde and antoine emery for valuable expert discussions . this work was done within the nccr marvel project . ma gratefully acknowledges support from the novartis universität basel excellence scholarship for life sciences and the swiss national science foundation . computer resources were provided at the cscs under project s499 and the national energy research scientific computing center , which is supported by the office of science of the u.s . department of energy under contract no . de - ac02 - 05ch11231 .
measuring similarities / dissimilarities between atomic structures is important for the exploration of potential energy landscapes . however , the cell vectors together with the coordinates of the atoms , which are generally used to describe periodic systems , are quantities not suitable as fingerprints to distinguish structures . based on a characterization of the local environment of all atoms in a cell we introduce crystal fingerprints that can be calculated easily and allow to define configurational distances between crystalline structures that satisfy the mathematical properties of a metric . this distance between two configurations is a measure of their similarity / dissimilarity and it allows in particular to distinguish structures . the new method is an useful tool within various energy landscape exploration schemes , such as minima hopping , random search , swarm intelligence algorithms and high - throughput screenings .
to understand how bits of information from external or locally computed signals can be specifically distributed through a network or to it s downstream components we first consider a generic stochastic dynamical system that evolves in time according to where denotes the variables of the network nodes , describes the intrinsic dynamics of the network , and is a stochastic external input driving instantaneous state variable fluctuations which carry the information to be routed through the network .we consider a deterministic reference state solving in the absence of signals ( ) . to quantify how bits of information surfing on top of such a dynamical state are routed through the network , we use information theoretic measures that quantify the amount of information shared and transferred between nodes , independent of how this information is encoded or decoded .more precisely , we measure information sharing between signal and the time lagged signal of nodes and in the network via _ _ the time - delayed mutual information ( dmi ) _ _ here is the probability distribution of the variable of unit at time and the joint distribution of and the variable lagged by ._ _ as a second measure we use the delayed _ _ _ _ _ _ transfer entropy ( dte ) _ _ ( cf .methods ) that genuinely measures information transfer between pairs of units .asymmetries in the dmi and dte curves and _ _ _ _ then indicate the dominant direction in which information is shared or transferred between nodes . to identify the role of the underlying reference dynamical state for network communication a small noise expansion in the signals turns a out to be ideally suited : while the small noise expansion limits the analysis to the vicinity of a specific reference state which is usually regarded as a weakness , in the context of our study , this property is highly advantageous as it directly conditions the calculations on a particular reference state and enables us to extract it s role for the emergent pattern of information routing within the network . for white noise sources this method yields general expressions for the conditional probabilities that depend on . using this result the expressions forthe delayed mutual information and transfer entropy and become a function of the underlying collective reference dynamical state ( cf .methods and supplementary section 1 ) . the dependency on this reference state then provides a generic mechanism to change communication in networks by manipulation the underlying collective dynamics . in the following we show how this general principle gives rise to a variety of mechanisms to flexibly change information routing in networks .we focus on oscillatory phenomena widely observed in networks with a communication function .oscillatory synchronization and phase locking provide a natural way for the temporal coordination between communicating units .key variables in oscillator systems are the phases at time of the individual units .in fact , a wide range of oscillating systems display similar phase dynamics ( cf .supplementary section 2 ) and phase - based encoding schemes are common , e.g. in the brain , genetic circuits and artificial systems .we first focus on systems in a stationary state with a stationary distribution for which the expressions for the dmi and dte become independent of the starting time and only depend on the lag and reference state . 
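For readers who want to reproduce such curves from time series, a minimal histogram-based estimator of the delayed mutual information is sketched below. The number of bins, the base-2 logarithm and the lag convention (a positive lag pairs the present of the first signal with the future of the second) are implementation choices, not prescriptions from the text.

```python
import numpy as np

def delayed_mutual_info(x, y, lag, nbins=16):
    """Histogram estimate (bits) of I(x_t ; y_{t+lag}); negative lags pair x with the past of y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    pxy, _, _ = np.histogram2d(a, b, bins=nbins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# toy usage: y is a delayed noisy copy of x, so the DMI curve is peaked at positive lags
rng = np.random.default_rng(0)
x = rng.normal(size=5_000)
y = np.roll(x, 5) + 0.5 * rng.normal(size=5_000)
print([round(delayed_mutual_info(x, y, lag), 3) for lag in (-5, 0, 5)])
```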
to assess the dominant direction of the shared information between two nodes we quantify asymmetries in the dmi curve by using the difference between the integrated mutual informations and .if this is positive , information is shared predominantly from unit to while negative values indicate the opposite direction .analogously , we compute the differences in dte as (cf . methods and supplementary section 3 ) .the set of pairs or for all , then capture strength and directionality of information routing in the network akin to a functional connectivity analysis in neuroscience .we refer to them as information routing patterns ( irps ) . * * @ @ plus 1fil minus * * populations ( wilson - cowan type dynamics ) . for the same network two different collective dynamical states accessed by different initial conditions give rise to two different information sharing patterns ( f h top vs. bottom ) . *l , * as in a d but for generic oscillators close to a hopf bifurcation ( stuart - landau oscillators ) connected to a larger network . in i andl connectivity matrices are shown instead of graphs .two different network - wide information routing patterns arise ( top vs. bottom in j l ) by changing a small number of connection weights ( purple entries in i and l ) . a range of networks of oscillatory units , with disparate physical interactions , connection topologies and external input signals support multiple irps .for instance , in a model of a gene - regulatory network with two oscillatory sub - networks ( fig .[ fig : example_networks]a ) dmi analysis reveals irps with different dominant directions ( fig . [ fig : example_networks]b - d , upper vs. lower sub - panels ) .the change is triggered by adding an external factor that degrades the transcribed mrna in one of the oscillators and thereby changes its intrinsic frequency ( see methods ) .more complex changes in irps emerge in larger networks , possibly with modular architecture . in a network of interacting neuronal populations ( fig .[ fig : example_networks]e ) different initial conditions lead to different underlying collective dynamical states .switching between them induces complicated but specific changes in the irps ( fig .[ fig : example_networks]f - h ) .different irps also emerge by changing a small number of connections in larger networks .[ fig : example_networks]i - l illustrates this for a generic system of coupled oscillators each close to a hopf bifurcation . in general , several qualitatively different options for modifying network - wideirps exist , all of which are relevant in natural and artificial systems : ( i ) changing the intrinsic properties of individual units ( fig . [fig : example_networks]a - d , cf . also fig .[ fig : control]a - c below ) , ( ii ) modifying the system connectivity ( fig .[ fig : example_networks]i - l , fig .[ fig : control]d - f ) and ( iii ) selecting distinct dynamical states of structurally the same system ( fig .[ fig : example_networks]e - h , see also fig .[ fig : combinatorics ] below ) .to reveal how different irps arise and how they depend on the network properties and dynamics , we derive analytic expressions for the dmi and dte between all pairs of oscillators in a network .we determine the phase of each oscillator in isolation by extending its phase description to the full basin of attraction of the stable limit cycle . 
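Given a sampled DMI curve, the asymmetry measure described above reduces to comparing the area under the curve at positive lags with the area at negative lags. The sketch below approximates the two integrals by simple sums over unit-spaced lags, an assumption of this sketch; collecting these values for all ordered node pairs yields the information routing pattern.

```python
import numpy as np

def routing_asymmetry(lags, dmi):
    """Integrated DMI at positive lags minus integrated DMI at negative lags (unit lag spacing assumed)."""
    lags, dmi = np.asarray(lags, float), np.asarray(dmi, float)
    return dmi[lags > 0].sum() - dmi[lags < 0].sum()   # > 0: information shared predominantly i -> j

# toy usage: a DMI curve peaked at positive lags gives a positive asymmetry
lags = np.arange(-50, 51)
dmi = np.exp(-0.5 * ((lags - 5) / 10.0) ** 2)
print(routing_asymmetry(lags, dmi))   # > 0, dominant routing direction i -> j
```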
for weak coupling ,the effective phase evolution becomes where is the intrinsic oscillation frequencies of node and the coupling functions depend on the phase differences only .the final sum in models external signals as independent gaussian white noise processes and a covariance matrix .the precise forms of and generally depend on the specific system ( supplementary section 2 ) . as visible from fig .[ fig : example_networks]e - h , the irp strongly depends on the underlying collective dynamical state .we therefore decompose the dynamics into a deterministic reference part and a fluctuating component .we focus on phase - locked configurations for the deterministic dynamics with constant phase offsets .we estimate the stochastic part via a small noise expansion ( methods , supplementary theorem 1 ) yielding a first - order approximation for the joint probabilities . using together with the periodicity of the phase variables ,we obtain the delayed mutual information _ _ between phase signals in coupled oscillatory networks ; here is the modified bessel function of the first kind , and is the inverse variance of a von mises distributions ansatz for .the system s parameter dependencies , including different inputs , local unit dynamics , coupling functions and interaction topologies are contained in . by similar calculations we obtain analytical expressions for ( methods and supplementary theorem 2 ) .our theoretical predictions well match the numerical estimates ( fig .[ fig : example_networks]d , h , l , see also fig .[ fig : mechanisms]c , d below and supplementary figs . 1 , 2 and 7 ) .for independent input signals ( for ) we typically obtain similar irps determined either by the delayed mutual information or the transfer entropy ( supplementary fig . 2 ) .* multi - stable dynamics and flexible anisotropic information routing . * _ _ * a , * two identical and symmetrically coupled neuronal circuits of wilson - cowan type ( dark and light green , modular sub - network in fg .[ fig : example_networks]e ) .* b , * phase difference between the extracted phases of the two neuronal populations fluctuating around a locked value } ] .* c , * delayed mutual information and * d , * transfer entropy between the phase signals in state ( orange ) and ( brown ) for numerical data ( dots ) and theory ( lines ) .the change in peak latencies form }<0 ] in the and the asymmetry of the curves show anisotropic information routing .switching between the two dynamical states reverses the information flow pattern ( graphs , bottom ) .* e , * phase coupling function ( blue ) and its antisymmetric part ( red ) .the two zeros of with negative slope indicate the deterministic equilibrium phase differences } ] in states and , receptively .the directionality in the information routing pattern arises due to the different slopes of ( dashed lines ) at the noiseless phase - locking offsets } ] . ]to better understand how a collective state gives rise to a specific routing pattern with directed information sharing and transfer , consider a network of two symmetrically coupled identical neural population models ( fig .[ fig : mechanisms]a ) . due to permutation symmetry , the coupling functions obtained from the phase - reduction of the original wilson - cowan - type equations ( methods , supplementary section 5 ) , are identical . 
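The closed-form DMI quoted above is stated in terms of the modified Bessel function and the concentration (inverse variance) of a von Mises ansatz. The helper below evaluates the mutual information between a uniformly distributed phase and a conditionally von Mises distributed phase of concentration kappa, which is the standard closed form consistent with that description; treat it as my reconstruction rather than the paper's exact equation.

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind, I_nu(kappa)

def von_mises_dmi(kappa):
    """Mutual information (nats) for a uniform marginal phase and a von Mises conditional of concentration kappa."""
    kappa = np.asarray(kappa, float)
    return kappa * iv(1, kappa) / iv(0, kappa) - np.log(iv(0, kappa))

# kappa -> 0 (uncorrelated phase fluctuations) gives ~0; large kappa gives ~0.5*log(2*pi*kappa) - 0.5
print(von_mises_dmi(np.array([0.01, 1.0, 10.0])))
```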
for biologically plausible parametersthis network in the noiseless - limit has two stable phase - locked reference states ( and ) .the fixed phase differences } ] are determined by the zeros of the anti - symmetric coupling with negative slope ( fig .[ fig : mechanisms]e ) . for a given level of ( sufficiently weak ) noise, the system shows fluctuations around either one of these states ( fig .[ fig : mechanisms]b ) each giving rise to a different irp .sufficiently strong external signals can trigger state switching and thereby effectively invert the dominant communication direction visible from the ( fig .[ fig : mechanisms]c ) and even more pronounced from the ( fig .[ fig : mechanisms]d ) without changing any structural properties of the network .the anisotropy in information transfer in the fully symmetric network is due to symmetry broken dynamical states . for independent noise inputs , , that are moreover small , the evolution of , , near the reference state reduces to }\left(\phi_{i}^{\left(\mathrm{fluct}\right)}-\phi_{j}^{\left(\mathrm{fluct}\right)}\right)+\varsigma_{i}\xi_{i}\label{eq : firstnoisexpk2}\ ] ] with coupling constants }=\gamma'(\delta\phi_{12}^{\left[\alpha\right]}) ] ( methods ) . as }\approx0 ] . at the same time , the strongly negative coupling } ] at which maximal information sharing is observed ( methods , eq . ) .it furthermore becomes clear that the directionality of the information transfer in general need not be related to the order in which the oscillators phase - lock because the phase - advanced oscillator can either effectively pull the lagging one , or , as in this example , the lagging oscillator can push the leading one to restore the equilibrium phase - difference . in summary ,effective interactions local in state space and controlled by the underlying reference state together with the noise characteristics determine the irps of the network .symmetry broken dynamical states then induce anisotropic and switchable routing patterns without the need to change the physical network structure .for networks with modular interaction topology , our theory relating topology , collective dynamics and irps between individual units can be generalized to predict routing between entire modules . assuming that each sub - network in the noise - less limit has a stable phase - locked reference state , a second phase reduction generalized to stochastic dynamics characterizes each module by a single meta - oscillator with collective phase and frequency , driven by effective noise sources with covariances .the collective phase dynamics of a network with modules then satisfies where are the effective inter - community couplings ( supplementary section 4 ) .the structure of equation is formally identical to equation so that the expressions for inter - node information routing ( , ) can be lifted to expressions on the inter - community level ( , by replacing node- with community - related quantities ( i.e. with or with , etc . 
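The mechanism just described, two mirror-image locked states whose effective couplings differ, can be illustrated numerically with an assumed coupling function. The function below, its parameters, and the heuristic that the unit with the weaker effective coupling acts as the information source (the other unit adjusts to it, following the push/pull argument above) are all illustrative assumptions, not the Wilson-Cowan reduction itself.

```python
import numpy as np
from scipy.optimize import brentq

# an illustrative phase coupling function with both symmetric and antisymmetric parts (assumed form)
beta, r = 2.0, 0.4
gamma   = lambda x: np.sin(x + beta) + r * np.sin(2 * x)
gamma_a = lambda x: 0.5 * (gamma(x) - gamma(-x))               # antisymmetric part
dgamma  = lambda x: np.cos(x + beta) + 2 * r * np.cos(2 * x)   # derivative Gamma'(x)

# symmetry-broken locked states: zeros of the antisymmetric part with negative slope
psi_star = brentq(gamma_a, 0.3, 2.0)                           # ~ +1.02 rad; the mirror state is -psi_star
for state, psi in (("alpha", +psi_star), ("beta", -psi_star)):
    g1, g2 = dgamma(psi), dgamma(-psi)                         # effective couplings felt by units 1 and 2
    source = "2 -> 1" if abs(g1) > abs(g2) else "1 -> 2"       # weaker-coupled unit acts as the source
    print(f"state {state}: dphi* = {psi:+.2f}, g1 = {g1:+.2f}, g2 = {g2:+.2f}, dominant routing {source}")
```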
,supplementary corollary 3 and 4 ) .importantly , this process can be further iterated to networks of networks , etc .[ fig : control ] shows examples of information flow patterns resolved at two scales .the information routing direction on the larger scale reflects the majority and relative strengths of irps on the finer scale .@ @ plus 1fil minus the collective quantities in the system are intricate functions of the network properties at the lower scales .intriguingly , the coupling functions not only depend on the non - local interactions between units of module and of cluster but also on purely local properties of the individual clusters .in particular , the form of is a function of the intrinsic local dynamical states and of both clusters as well as the phase response of sub - network ( see methods and supplementary section 4 ) .thus irps on the entire network level depend on local community properties .this establishes several generic mechanisms to globally change information routing in networks via local changes in modular properties , local connectivity , or via switching of local dynamical states . in a network consisting of two sub - networks ( fig .[ fig : control]a ) the local change of the frequency of a single hopf - oscillator in sub - network induces a non - local inversion of the information routing between cluster and ( fig .[ fig : control]b - d ) . in fig .[ fig : control]e - f the direction in which information is routed between two sub - networks * * and of coupled phase oscillators is remotely changed by increasing the strength of a local link in module . the origin in both examples is a non - trivial combination of several factors : the ( small ) manipulations alter the collective cluster frequency , the local dynamical state which in turn change the collective phase response and the effective noise strength of cluster ( supplementary fig . 3 ) .these changes all contribute to changes in the effective couplings as well as in the inter - cluster phase - locking values . taken together this causes the observed inversions in information routing direction .interestingly , the transition in information routing has a switch like dependency on the changed parameter ( fig .[ fig : control]c , d , g ) promoting digital - like changes of communication modes .* * switching between combinatorially many information routing patterns . * _ _ * a , * modular circuit as in fig . [fig : example_networks]e . without inter - module coupling , each of the communities exhibits multi - stability between two phase - locked configurations , denoted as states and ( insets ) .* b , * information routing patterns between the hierarchically reduced sub - networks for different combinations of the local dynamical states ] ) can give rise to more than one globally locked collective state marked with dashes , i.e. ] , .* c , * the local dynamical state configuration ] in fig .[ fig : combinatorics ] ) .others support non - phase locked dynamics that gives rise to time - dependent irps ( cf .[ fig : combinatorics]c and below ) .thus , varying local dynamical states in a hierarchical network flexibly produces a combinatorial number of different irps in the same physical network .general reference states , including periodic or transient dynamics , are not stationary and hence the expressions for the dmi and dte become dependent on time .for example , fig .[ fig : combinatorics]c shows irps that undergo cyclic changes due to an underlying periodic reference state ( cf . 
also supplementary figure 8a - c ) . in systems with a global fixed pointsystematic displacements to different starting positions in state space give rise to different stochastic transients with different and time - dependent irps ( supplementary figure 8d ) .similarly , switching dynamics along heteroclinic orbits constitute another way of generating specific progressions of reference dynamics .thus information surfing on top of non - stationary reference dynamical configurations naturally yield temporally structured sequences of irps , resolvable also by other measures of instantaneous information flow , e.g. .the above results establish a theoretical basis for the emergence of information routing capabilities in complex networks when signals are communicated on top of collective reference states .we show how information sharing ( dmi ) and transfer ( dte ) emerge through the joint action of local unit features , global interaction topology and choice of the collective dynamical state .we find that information routing patterns self - organize according to general principles ( cf .[ fig : mechanisms ] , [ fig : control ] , [ fig : combinatorics ] ) and can thus be systematically manipulated . employing formal identity of our approach at every scale in oscillatory modular networks ( eq . vs. ) we identify local paradigms that are capable of regulating information routing at the non - local level across the whole network ( figs .[ fig : control ] , [ fig : combinatorics ] ) .we derived theoretical results based on information sharing and transfer obtained via delayed mutual information and transfer entropy curves . using these abstract measures our results are independent of any particular implementation of a communication protocol andthus generically demonstrate how collective dynamics can have a functional role in information routing .for example , in the network in fig .[ fig : mechanisms ] externally injected streams of information are automatically encoded in fluctuations of the rotation frequency of the individual oscillators .the injected signals are then transmitted through the network and decodable from the fluctuating phase velocity of a target unit precisely along those pathways predicted by the current state - dependent irp ( supplementary section 7 ) .our theory is based on a small noise approximation that conditions the analysis onto a specific underlying dynamical state . in this way we extracted the precise role of such a reference state for the networks information routing abilities . for larger signal amplitudes or in highly recurrent networks in which higher - order interactions can play an important role the expansion can be carried out systematically to higher orders using diagrammatic approaches or numerically to accounting for better accuracy and non - gaussian correlations ( cf .also supplementary section 3.4 ) . 
in systems with multi - stable states two signal types need to be discriminated : those that encode the information to be routed and those that indicate a switch in the reference dynamics and consequently the irps .if the second type of stimuli is amplified appropriately a switch between multi - stable states can be induced that moves the network into the appropriate irp state for the signals that follow .for example , in the network of fig .[ fig : mechanisms ] a switch from state to can be induced by a strong positive pulse to oscillator ( and vice versa ) .if such pulses are part of the input a switch to the appropriate irp state will automatically be triggered and the network auto - regulates its irp function .more generally a separate part of the network that effectively filters out relevant signatures indicating the need for a different irp could provide such pulses .moreover , using the fact that local interventions are capable to switch irps in the network the outcomes of local computations can be used to trigger changes in the global information routing and thereby enable context - dependent processing in a self - organized way . when information surfs on top of dynamical reference states the control of irps is shifted towards controlling collective network dynamics making methods from control theory of dynamical systems available to the control of information routing .for example , changing the interaction function in coupled oscillators systems or providing control signals to a subset of nodes are capable of manipulating the network dynamics .moreover , switch like changes ( cf .[ fig : control ] ) can be triggered by crossing bifurcation points and the control of information routing patterns then gets linked to bifurcation theory of network dynamical systems .while the mathematical part of our analysis focused on phase signals , including additional amplitude degrees of freedom into the theoretical framework can help to explore neural or cell signaling codes that simultaneously use activity- and phase - based representations to convey information .moreover , separating irp generation , e.g. via phase configurations , from actual information transfer , for instance in amplitude degrees of freedom , might be useful for the design of systems with a flexible communication function . the predicted phenomena , including non - local changes of information routing by local interventions , could be directly experimentally verified using methods available to date , such as electrochemical arrays or synthetic gene regulatory networks ( supplementary section 5.3 ) .in addition our results are applicable to the inverse problem : unknown network characteristics may be inferred by fitting theoretical expected dmi and dte patterns to experimentally observed data . for example , inferring state - dependent coupling strengths could further the analysis of neuronal dynamics during context - dependent processing .modifying inputs , initial conditions or system - intrinsic properties may well be viable in many biological and artificial systems whose function requires particular information routing . 
for instance , on long time scales , evolutionary pressure may select a particular information routing pattern by biasing a particular collective state in gene regulatory and cell signaling networks ; on intermediate time scales , local changes in neuronal responses due to adaptation or varying synaptic coupling strength during learning processes * * can impact information routing paths in entire neuronal circuits ; on fast time scales , defined control inputs to biological networks or engineered communication systems that switch the underlying collective state , can dynamically modulate information routing patterns without any physical change to the network .* methods : * _ transfer entropy . _ the delayed transfer entropy ( dte ) _ _ from a time - series to a time - series is defined as __ with joint probability .this expression is not invariant under permutation of and , implying the directionality of te . for a more direct comparison with dmi in figure 2, we define by for and by for . _ _ dynamic information routing via dynamical states . _ _ for a dynamical system the reference deterministic solution starting at is given by the deterministic flow .the small noise approximation for white noise then yields where denotes the normal distribution with mean and covariance matrix , and . from this and the initial distribution delayed mutual information and transfer entropy and are obtained via and .the result depends on time , lag and the reference state ._ _ oscillator networks . _ _ in fig . 1a, we consider a network of two coupled biochemical goodwin oscillators .oscillations in the expression levels of the molecular products arise due to a nonlinear repressive feedback loop in successive transcription , translation and catalytic reactions .the oscillators are coupled via mutual repression of the translation process .in addition , in one oscillator changes in concentration of an external enzyme regulate the speed of degradation of mrnas , thus affecting the translation reaction , and , ultimately , the oscillation frequency . in fig .[ fig : example_networks]e , [ fig : mechanisms ] , [ fig : combinatorics ] we consider networks of wilson - cowan type neural masses ( population signals ) .each neural mass intrinsically oscillates due to antagonistic interactions between local excitatory and inhibitory populations .different neural masses interact , within and between communities , via excitatory synapses . in the generic networks in fig .1i and fig .3a each unit is modeled by the normal form of a hopf - bifurcation in the oscillatory regime together with linear coupling .finally , the modular networks analyzed in figures 3a and 3b are directly cast as phase - reduced models with freely chosen coupling functions .see the supplementary information for additional details , model equations and parameters and phase estimation ._ _ _ _ analytic derivation of the and curves . _ _ in the small noise expansion , both and curves have an analytic approximation : for stochastic fluctuations around some phase - locked collective state with constant reference phase offsets the phases evolve as in the deterministic limit , where is the collective network frequency and the are the coupling functions from eq . .in presence of noise , the phase dynamics have stochastic components . in first order approximation , independent noise inputs yield coupled ornstein - uhlenbeck processes with linearized , state - dependent couplings given by the laplacian matrix entries and . 
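The delayed transfer entropy defined in the methods above can be estimated from discretized time series with a plain plug-in histogram estimator. The quantile-based binning, the number of bins and the single-sample history (conditioning only on x_t) are simplifying assumptions of this sketch.

```python
import numpy as np

def transfer_entropy(y, x, lag=1, nbins=8):
    """Plug-in estimate (bits) of the delayed transfer entropy T_{Y->X}(lag) with one-sample histories."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def discretize(s):   # map a signal onto nbins roughly equally populated symbols
        edges = np.quantile(s, np.linspace(0, 1, nbins + 1)[1:-1])
        return np.digitize(s, edges)

    xd, yd = discretize(x), discretize(y)
    xf, xp, yp = xd[lag:], xd[:-lag], yd[:-lag]      # x_{t+lag}, x_t, y_t
    p_xyz = np.zeros((nbins, nbins, nbins))
    np.add.at(p_xyz, (xf, xp, yp), 1.0)
    p_xyz /= p_xyz.sum()                             # p(x_{t+lag}, x_t, y_t)
    p_xy = p_xyz.sum(axis=2)                         # p(x_{t+lag}, x_t)
    p_yz = p_xyz.sum(axis=0)                         # p(x_t, y_t)
    p_y  = p_xyz.sum(axis=(0, 2))                    # p(x_t)
    te = 0.0
    for i, j, k in np.ndindex(nbins, nbins, nbins):
        p = p_xyz[i, j, k]
        if p > 0:
            te += p * np.log2(p * p_y[j] / (p_xy[i, j] * p_yz[j, k]))
    return float(te)

# toy usage: x is driven by the past of y, so T_{Y->X} clearly exceeds T_{X->Y}
rng = np.random.default_rng(0)
y = rng.normal(size=20_000)
x = np.concatenate([[0.0], 0.8 * y[:-1]]) + 0.3 * rng.normal(size=20_000)
print(transfer_entropy(y, x), transfer_entropy(x, y))
```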
the analytic solution to the stochastic equations provides an estimate of the probability distributions , , and . via this results in a prediction for , eq ., as a function of the matrix elements specifying the inverse variance of a von mises distribution ansatz for .similarly via an expression for is obtained . for the dependency of , and on network parameters and further details ,see the derivation of the theorems 1 and 2 in the supplementary information ._ _ time scale for information sharing ._ _ for a network of two oscillators as in fig . with linearized coupling strengths } ] and }<g_{2}^{\left[\alpha\right]}$ ] , maximizing ( see supplementary information for full analytic expressions of and in two oscillator networks ) yields _ collective phase reduction ._ suppose that each node belongs to a specific network module out of non - overlapping modules of a network .then equation can be simplified to under the assumption that in the absence of noise every community has a stable internally phase - locked state , where are constant phase offsets of individual nodes .every community can then be regarded as a single meta - oscillator with a collective phase and a collective frequency .the vector components of the collective phase response , the effective couplings and the noise parameters and are obtained through collective phase reduction and depend on the respective quantities ( ) on the single - unit scale ( see supplementary section 4 for a full derivation ) .+ * acknowledgements : * we thank t. geisel for valuable discussions . partially supported by the federal ministry for education and research ( bmbf ) under grants no . 01gq1005b[ ck , db , mt ] and 03sf0472e [ mt ] , by the nvidia corp . , santa clara , usa [ mt ] , a grant by the max planck society [ mt ] , by the fp7 marie curie career development fellowship ief 330792 ( dynvib ) [ db ] and an independent postdoctoral fellowship by the rockefeller university , new york , usa [ ck ] . +* author contributions : * all authors designed research . c.k . derived the theoretical results , developed analysis tools and carried out the numerical experiments .all authors analyzed and interpreted the results and wrote the manuscript .+ * additional information : * the authors declare no competing financial interests .supplementary information accompanies this paper .correspondence and requests should be addressed to ck ( ckirst.edu ) . +
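as a complementary , purely data - driven check of the dmi and dte curves discussed in the methods , both quantities can also be estimated directly from simulated or recorded time series . the sketch below uses crude histogram ( binning ) estimators and a toy pair of signals in which information flows from x to y with a delay of five samples ; bin counts , sample size and the test signals are illustrative assumptions , and such estimators carry the usual finite - sample bias .

```python
import numpy as np

# Crude histogram estimators of the delayed mutual information
# dMI(d) = I(x_t ; y_{t+d}) and of a delayed transfer entropy TE_{X->Y}(d)
# that conditions on a single past value of y.

def _entropy(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def delayed_mutual_information(x, y, d, bins=8):
    xt, yd = x[:-d], y[d:]
    cxy, _, _ = np.histogram2d(xt, yd, bins=bins)
    return _entropy(cxy.sum(axis=1)) + _entropy(cxy.sum(axis=0)) - _entropy(cxy)

def delayed_transfer_entropy(x, y, d, bins=8):
    # TE = H(y_{t+d}, y_t) + H(y_t, x_t) - H(y_t) - H(y_{t+d}, y_t, x_t)
    yd, yt, xt = y[d:], y[:-d], x[:-d]
    c3, _ = np.histogramdd(np.stack([yd, yt, xt], axis=1), bins=bins)
    return (_entropy(c3.sum(axis=2)) + _entropy(c3.sum(axis=0))
            - _entropy(c3.sum(axis=(0, 2))) - _entropy(c3))

# toy test: y is a noisy copy of x delayed by 5 samples, so information flows x -> y
rng = np.random.default_rng(0)
x = rng.normal(size=20000)
y = np.roll(x, 5) + 0.5 * rng.normal(size=x.size)
for d in (2, 5, 8):
    print(f"d={d}:  dMI={delayed_mutual_information(x, y, d):.3f}"
          f"  dTE={delayed_transfer_entropy(x, y, d):.3f}")
```

both estimates peak near the lag at which the information actually arrives , which is the signature the dmi and dte curves are designed to reveal .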
flexible information routing fundamentally underlies the function of many biological and artificial networks . yet , how such systems may specifically communicate and dynamically route information is not well understood . here we identify a generic mechanism to route information on top of collective dynamical reference states in complex networks . switching between collective dynamics induces flexible reorganization of information sharing and routing patterns , as quantified by delayed mutual information and transfer entropy measures between activities of a network s units . we demonstrate the power of this generic mechanism specifically for oscillatory dynamics and analyze how individual unit properties , the network topology and external inputs coact to systematically organize information routing . for multi - scale , modular architectures , we resolve routing patterns at all levels . interestingly , local interventions within one sub - network may remotely determine non - local network - wide communication . these results help understanding and designing information routing patterns across systems where collective dynamics co - occurs with a communication function . attuned function of many biological or technological networks relies on the precise yet dynamic communication between their subsystems . for instance , the behavior of cells depends on the coordinated information transfer within gene regulatory networks and flexible integration of information is conveyed by the activity of several neural populations during brain function . identifying general mechanisms for the routing of information across complex networks thus constitutes a key theoretical challenge with applications across fields , from systems biology to the engineering of smart distributed technology . complex systems with a communication function often show characteristic dynamics , such as oscillatory or synchronous collective dynamics with a stochastic component . information is carried in the presence of these dynamics within and between neural circuits , living cells , ecologic or social groups as well as technical communication systems , such as ad hoc sensor networks . while such dynamics could simply reflect the properties of the interacting unit s , emergent collective dynamical states in biological networks can actually contribute to the system s function . for example , it has been hypothesized that the widely observed oscillatory phenomena in biological networks enable emergent and flexible information routing . here we derive a theory that shows how information if conveyed by fluctuations around collective dynamical reference states ( e.g. a stable oscillatory pattern ) can be flexibly routed across complex network topologies . quantifying information sharing and transfer by time - delayed mutual information and transfer entropy curves between time - series of the network s units , we demonstrate how switching between multi - stable states enables the rerouting of information without any physical changes to the network . in fully symmetric networks , anisotropic information transfer can arise via symmetry breaking of the reference dynamics . for networks of coupled oscillators our approach gives analytic predictions how the physical coupling structure , the oscillators properties and the dynamical state of the network co - act to produce a specific communication pattern . 
resorting to a collective - phase description , our theory further resolves communication patterns at all levels of multi - scale , modular topologies , as ubiquitous , e.g. , in the brain connectome and bio - chemical regulatory networks . we thereby uncover how local interventions within one module may remotely modify information sharing and transfer between other distant sub - networks . a combinatorial number of information routing patterns emerges in networks due to switching between multi - stable dynamical states that are localized on individual sub - sets of network nodes . these results offer a generic mechanism for self - organized and flexible information routing in complex networked systems . for oscillatory dynamics , the link established between multi - scale connectivity , collective network dynamics and flexible information routing has potential applications to the reconstruction and design of gene regulatory circuits , to wireless communication networks , and to the analysis of cognitive functions , among others .
the two key characteristics of wireless communications that most greatly impact system design and performance are 1 ) the randomly - varying channel conditions and 2 ) limited energy resources . in wireless systems ,the power of the received signal fluctuates randomly over time due to mobility , changing environment , and multipath fading .these random changes in the received signal strength lead to variations in the instantaneous data rates that can be supported by the channel .in addition , mobile wireless systems can only be equipped with limited energy resources , and hence energy efficient operation is a crucial requirement in most cases . to measure and compare the energy efficiencies of different systems and transmission schemes , one can choose as a metric the energy required to reliably send one bit of information .information - theoretic studies show that energy - per - bit requirement is generally minimized , and hence the energy efficiency is maximized , if the system operates at low signal - to - noise ratio ( ) levels and hence in the low - power or wideband regimes .recently , verd in has determined the minimum bit energy required for reliable communication over a general class of channels , and studied of the spectral efficiency bit energy tradeoff in the wideband regime while also providing novel tools that are useful for analysis at low . in many wireless communication systems , in addition to energy - efficient operation , satisfying certain quality of service ( qos ) requirements is of paramount importance in providing acceptable performance and quality .for instance , in voice over ip ( voip ) , interactive - video ( e.g , .videoconferencing ) , and streaming - video applications in wireless systems , latency is a key qos metric and should not exceed certain levels . on the other hand , wireless channels , as described above ,are characterized by random changes in the channel , and such volatile conditions present significant challenges in providing qos guarantees . in most cases , statistical , rather than deterministic ,qos assurances can be given .in summary , it is vital for an important class of wireless systems to operate efficiently while also satisfying qos requirements ( e.g. , latency , buffer violation probability ). information theory provides the ultimate performance limits and identifies the most efficient use of resources .however , information - theoretic studies and shannon capacity formulation generally do not address delay and quality of service ( qos ) constraints .recently , wu and negi in defined the effective capacity as the maximum constant arrival rate that a given time - varying service process can support while providing statistical qos guarantees .effective capacity formulation uses the large deviations theory and incorporates the statistical queueing constraints by capturing the rate of decay of the buffer occupancy probability for large queue lengths .the analysis and application of effective capacity in various settings has attracted much interest recently ( see e.g. , and references therein ) . in this paper , we study the energy efficiency in the presence of queueing constraints and channel uncertainty .we assume that the channel is not known by the transmitter and receiver prior to transmission , and is estimated imperfectly by the receiver through training . 
in our model, we incorporate statistical queueing constraints by employing the effective capacity formulation which provides the maximum throughput under limitations on buffer violation probabilities for large buffer sizes .since the transmitter is assumed to not know the channel , fixed - rate transmission is considered .we consider a point - to - point wireless link in which there is one source and one destination .it is assumed that the source generates data sequences which are divided into frames of duration .these data frames are initially stored in the buffer before they are transmitted over the wireless channel .the discrete - time channel input - output relation in the symbol duration is given by = h[i ] x[i ] + n[i ] \quad i = 1,2,\ldots.\end{gathered}\ ] ] where ] denote the complex - valued channel input and output , respectively .we assume that the bandwidth available in the system is and the channel input is subject to the following average energy constraint : |^2\}\le { \bar{p}}/ b ] is a zero - mean , circularly symmetric , complex gaussian random variable with variance |^2\ } = n_0 ] are assumed to form an independent and identically distributed ( i.i.d . ) sequence .finally , ] .since the mmse estimate depends only on the training energy and not on the training duration , it can be easily seen that transmission of a single pilot at every seconds is optimal .note that in every frame duration of seconds , we have symbols and the overall available energy is .we now assume that each frame consists of a pilot symbol and data symbols .the energies of the pilot and data symbols are respectively , where is the fraction of total energy allocated to training .note that the data symbol energy is obtained by uniformly allocating the remaining energy among the data symbols . in the training phase, the receiver obtains the mmse estimate which is a circularly symmetric , complex , gaussian random variable with mean zero and variance , i.e. , .now , the channel fading coefficient can be expressed as where is the estimate error and .consequently , in the data transmission phase , the channel input - output relation becomes = \hat{h}[i ] x[i ] + \tilde{h}[i ] x[i ] + n[i ] \quad i = 1,2,\ldots.\end{gathered}\ ] ] since finding the capacity of the channel in ( [ eq : impmodel ] ) is a difficult task - dependent , discrete distribution with a finite number of mass points . in such cases ,no closed - form expression for the capacity exists , and capacity values need to be obtained through numerical computations . ] , a capacity lower bound is generally obtained by considering the estimate error as another source of gaussian noise and treating x[i ] + n[i] ] where is the variance of the estimate error . under these assumptions , a lower bound on the instantaneous capacityis given by , where effective is and is the variance of estimate .note that the expression in ( [ eq : traincap2 ] ) is obtained by defining where is a standard complex gaussian random variable with zero mean and unit variance , i.e. , .since gaussian is the worst uncorrelated noise , the above - mentioned assumptions lead to a pessimistic model and the rate expression in ( [ eq : traincap2 ] ) is a lower bound to the capacity of the true channel ( [ eq : impmodel ] ) . on the other hand, is a good measure of the rates achieved in communication systems that operate as if the channel estimate were perfect ( i.e. 
, in systems where gaussian codebooks designed for known channels are used , and scaled nearest neighbor decoding is employed at the receiver ) .henceforth , we base our analysis on to understand the impact of the imperfect channel estimate .since the transmitter is unaware of the channel conditions , it is assumed that information is transmitted at a fixed rate of bits / s . when , the channel is considered to be in the on state and reliable communicationis achieved at this rate .if , on the other hand , , we assume that outage occurs . in this case , channel is in the off state and reliable communication at the rate of bits / s can not be attained . hence, effective data rate is zero and information has to be resent .[ fig:00 ] depicts the two - state transmission model together with the transition probabilities . under the blockfading assumption , it can be easily seen that the transition probabilities are given by where and is an exponential random variable with mean , and hence , .in , wu and negi defined the effective capacity as the maximum constant arrival rate that a given service process can support in order to guarantee a statistical qos requirement specified by the qos exponent .if we define as the stationary queue length , then is the decay rate of the tail distribution of the queue length : therefore , for large , we have the following approximation for the buffer violation probability : .hence , while larger corresponds to more strict qos constraints , smaller implies looser qos guarantees .similarly , if denotes the steady - state delay experienced in the buffer , then for large , where is determined by the arrival and service processes .the effective capacity is given by }\}}\end{gathered}\ ] ] where = \sum_{i=1}^{t}r[i] ] denote the discrete - time stationary and ergodic stochastic service process .note that in the model we consider , = rt \text { or } 0 $ ] depending on the channel state being on or off . in , it is shown that for such an on - off model , we have note that in our model .then , for a given qos delay constraint , the effective capacity normalized by the frame duration and bandwidth , or equivalently spectral efficiency in bits / s / hz , becomes note that is obtained by optimizing both the fixed transmission rate and the fraction of power allocated to training , . in the optimization result ( [ eq : trainopti2 ] ) , and are the optimal values of and , respectively .it can easily be seen that hence , as the qos requirements relax , the maximum constant arrival rate approaches the average transmission rate . on the other hand , for , in order to avoid violations of buffer constraints . in this paper , we focus on the energy efficiency of wireless transmissions under the aforementioned statistical queueing constraints .since energy efficient operation generally requires operation at low- levels , our analysis throughout the paper is carried out in the low- regime . in this regime ,the tradeoff between the normalized effective capacity ( i.e , spectral efficiency ) and bit energy is a key tradeoff in understanding the energy efficiency , and is characterized by the bit energy at zero spectral efficiency and wideband slope provided , respectively , by where and are the first and second derivatives with respect to , respectively , of the function at zero . 
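the chain just described ( a single pilot per frame , the mmse estimation error treated as additional gaussian noise , an on - off channel defined by the outage event , and maximization of the normalized effective capacity over the fixed rate ) can be put together in a short numerical sketch . unit - variance rayleigh fading and all parameter values ( frame duration , bandwidth , qos exponent , noise level and average power ) are illustrative assumptions ; the paper 's exact normalizations may differ .

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch of the training / fixed-rate chain described in the text.
# Unit-variance Rayleigh fading and all parameter values are assumptions.

T, B, theta, N0, P = 2e-3, 1e5, 0.01, 1e-4, 10.0
m = T * B                                   # symbols per frame (1 pilot + m-1 data)

def snr_eff(rho):
    E_p = rho * P * T                       # energy of the single pilot symbol
    E_s = (1 - rho) * P * T / (m - 1)       # energy per data symbol
    var_est = E_p / (E_p + N0)              # variance of the mmse channel estimate
    var_err = N0 / (E_p + N0)               # variance of the estimation error
    return var_est * E_s / (N0 + var_err * E_s)

def normalized_effective_capacity(rho):
    s = snr_eff(rho)
    def neg_ec(r):
        p_on = np.exp(-(2.0 ** (r / B) - 1.0) / s)   # P{B*log2(1+s*z) >= r}, z ~ Exp(1)
        return np.log(1 - p_on + p_on * np.exp(-theta * T * r)) / (theta * T * B)
    res = minimize_scalar(neg_ec, bounds=(1.0, 10 * B), method="bounded")
    return -res.fun, res.x                  # bits/s/Hz and the maximizing fixed rate

for rho in (0.05, 0.2, 0.5, 0.8):
    ec, r_opt = normalized_effective_capacity(rho)
    print(f"rho={rho:.2f}  SNR_eff={snr_eff(rho):.3f}  r*={r_opt:9.0f} bit/s  "
          f"R_E/(T*B)={ec:.4f} bit/s/Hz")
```

the printed table makes the training trade - off visible : too small a training fraction leaves the estimate noisy , too large a fraction starves the data symbols , so an interior value of the fraction maximizes the effective snr and hence the throughput .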
and provide a linear approximation of the spectral efficiency curve at low spectral efficiencies .in this section , we investigate the optimization problem in ( [ eq : trainopti ] ) .in particular , we identify the optimal fraction of power that needs to be allocated to training while satisfying statistical buffer constraints .[ theo : optrho ] at a given level , the optimal fraction of power that solves ( [ eq : trainopti ] ) does not depend on the qos exponent and the transmission rate , and is given by where and ._ proof : _ from ( [ eq : trainopti ] ) and the definition of in ( [ eq : trainthresh ] ) , we can easily see that for fixed , the only term in ( [ eq : trainopti ] ) that depends on is .moreover , has this dependency through .therefore , that maximizes the objective function in ( [ eq : trainopti ] ) can be found by minimizing , or equivalently maximizing . substituting the definitions in ( [ eq : trainpower ] ) and the expressions for and into( [ eq : trainsnr ] ) , we have where .evaluating the derivative of with respect to and making it equal to zero leads to the expression in ( [ eq : optrho ] ) . clearly , is independent of and .above , we have implicitly assumed that the maximization is performed with respect to first and then .however , the result will not alter if the order of the maximization is changed .note that the objective function in ( [ eq : trainopti ] ) is a monotonically increasing function of for all .it can be easily verified that maximization does not affect the monotonicity of , and hence is still a monotonically increasing function of .therefore , in the outer maximization with respect to , the choice of that maximizes will also maximize , and the optimal value of is again given by ( [ eq : optrho ] ) . this section , we investigate the spectral efficiency bit energy tradeoff as the average power diminishes .we assume that the bandwidth allocated to the channel is fixed . with the optimal value of given in theorem [ theo : optrho ], we can now express the normalized effective capacity as where and note that vanishes with decreasing .we obtain the following result on the bit energy requirement in the low - power regime as diminishes .[ theo : imperfect ] in the low - power regime , the bit energy increases without bound as the average power and hence vanishes , i.e. , this result shows us that operation at very low power levels is extremely energy inefficient and should be avoided regardless of the value of . note that the power allocated for training , , decreases with decreasing .hence , our ability to estimate the channel is hindered in the low - power regime while , as mentioned before , the system operates as if the channel estimate were perfect .this discrepancy leads to the inefficiency seen as approaches zero .[ fig:6 ] plots the spectral efficiency vs. bit energy for when hz . as predicted by the result of theorem [ theo : imperfect ] , the bit energy increases without bound in all cases as the spectral efficiency .consequently , the minimum bit energy is achieved at a nonzero spectral efficiency below which one should avoid operating as it only increases the energy requirements .another observation is that the minimum bit energy increases as increases and hence as the statistical queueing constraints become more stringent . at higher spectral efficiencies ,we again note the increased energy requirements with increasing . 
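a numerical companion illustrates both regimes at once : sweeping the average power at fixed bandwidth shows the bit energy passing through a minimum and growing again as the power vanishes , while sweeping the bandwidth at fixed power ( anticipating the wideband analysis below ) shows it settling towards a floor instead . the training fraction is re - optimized at every point , which maximizes the effective snr alone and therefore does not depend on the qos exponent or the rate , consistent with the theorem above ; all parameter values are illustrative assumptions of this sketch .

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Bit-energy sweeps in the low-power and wideband regimes, reusing the same
# illustrative training-based model as in the previous sketch.

T, theta, N0 = 2e-3, 0.01, 1e-4

def spectral_efficiency(P, B):
    m = T * B
    def snr_eff(rho):
        E_p, E_s = rho * P * T, (1 - rho) * P * T / (m - 1)
        return (E_p / (E_p + N0)) * E_s / (N0 + (N0 / (E_p + N0)) * E_s)
    s = max(snr_eff(rho) for rho in np.linspace(0.005, 0.995, 199))
    def neg_ec(r):
        p_on = np.exp(-(2.0 ** (r / B) - 1.0) / s)
        return np.log(1 - p_on + p_on * np.exp(-theta * T * r)) / (theta * T * B)
    return -minimize_scalar(neg_ec, bounds=(1.0, 10 * B), method="bounded").fun

def bit_energy_db(P, B):
    snr = P / (N0 * B)
    return 10 * np.log10(snr / spectral_efficiency(P, B))

print("low-power regime, B = 1e5 Hz:")
for P in (100.0, 10.0, 1.0, 0.1, 0.01):
    print(f"  P = {P:>6}   Eb/N0 = {bit_energy_db(P, 1e5):6.2f} dB")

print("wideband regime, P = 10:")
for B in (1e4, 1e5, 1e6, 1e7):
    print(f"  B = {B:.0e}  Eb/N0 = {bit_energy_db(10.0, B):6.2f} dB")
```

the contrast reflects the remark made in the text : lowering the power starves the pilot and degrades the channel estimate , whereas enlarging the bandwidth at fixed power spreads the data energy thin but leaves the training energy , and hence the estimate quality , essentially intact .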
in fig .[ fig : ebsnr ] , we plot as a function of for different bandwidth levels assuming .similarly , we observe that the minimum bit energy is attained at a nonzero value below which requirements start increasing .furthermore , we see that as the bandwidth increases , the minimum bit energy tends to decrease and is achieved at a lower level . in the rayleigh channel with . . ]vs. in the rayleigh channel . , =0.01 . ] in this section , we consider the wideband regime in which the bandwidth is large .we assume that the average power is kept constant .note that as the bandwidth increases , approaches zero and we again operate in the low- regime .we denote .note that as , we have . with this notation, we can express the normalized effective capacity as the following result provides the expressions for the bit energy at zero spectral efficiency ( i.e. , as ) and the wideband slope , and characterize the spectral efficiency - bit energy tradeoff in the wideband regime .[ theo : wideband ] in the wideband regime , the minimum bit energy and wideband slope are given by respectively , where , , and . is defined as and satisfies we note that the minimum bit energy in the wideband regime is achieved as and hence as .this is in stark contrast to the results in the low - power regime in which the bit energy requirements grow without bound as and vanishes . in the model we have considered ,the difference in the performance can be attributed to the fact that increase in the bandwidth , while lowering symbol power , does not necessarily decrease the training power .[ fig : trainwb ] plots the spectral efficiency bit energy curve in the rayleigh channel for different values . in the figure, we assume that .as predicted , the minimum bit energies are obtained as and hence the spectral efficiency approach zero . are computed to be equal to db for , respectively .moreover , the wideband slopes are for the same set of values .as can also be seen in the result of theorem [ theo : wideband ] , the minimum bit energy and wideband slope in general depend on . in fig .[ fig : trainwb ] , we note that the bit energy requirements ( including the minimum bit energy ) increase with increasing , illustrating the energy costs of stringent queueing constraints . in this paper , we have considered fixed - rate / fixed - power transmissions over imperfectly - known channels . in fig .[ fig : compair ] , we compare the performance of this system with those in which the channel is perfectly - known and fixed- or variable - rate transmission is employed .the latter models have been studied in and .this figure demonstrates the energy costs of not knowing the channel and sending the information at fixed - rate .a. ephremides and b. hajek , information theory and communication networks : an unconsummated union , " _ ieee trans .inform . theory _44 , pp .2416 - 2434 , oct .d. wu and r. negi `` effective capacity : a wireless link model for support of quality of service , '' _ ieee trans .wireless commun ._ , vol.2,no . 4 , pp.630 - 643 .july 2003 l. liu , p. parag , j. tang , w .- y . chen and j .- f .chamberland , `` resource allocation and quality of service evaluation for wireless communication systems using fluid models , '' _ ieee trans .inform . theory _1767 - 1777 , may 2007 j. tang and x. zhang , `` cross - layer - model based adaptive resource allocation for statistical qos guarantees in mobile wireless networks , '' _ ieee trans .wireless commun ._ , vol . 7 , pp.2318 - 2328 , june 2008 .d. qiao , m.c .gursoy and s. 
velipasalar , `` energy efficiency of fixed - rate wireless transmissions under qos constraints , '' to appear at _ ieee international conference on communications ( icc ) _ , june 2009 . c .- s . chang , _ performance guarantees in communication networks _ , new york : springer , 1995 .
energy efficiency of fixed - rate transmissions is studied in the presence of queueing constraints and channel uncertainty . it is assumed that neither the transmitter nor the receiver has channel side information prior to transmission . the channel coefficients are estimated at the receiver via minimum mean - square - error ( mmse ) estimation with the aid of training symbols . it is further assumed that the system operates under statistical queueing constraints in the form of limitations on buffer violation probabilities . the optimal fraction of power allocated to training is identified . the spectral efficiency bit energy tradeoff is analyzed in the low - power and wideband regimes by employing the effective capacity formulation . in particular , it is shown that the bit energy increases without bound in the low - power regime as the average power vanishes . on the other hand , it is proven that the bit energy diminishes to its minimum value in the wideband regime as the available bandwidth increases . for this case , expressions for the minimum bit energy and wideband slope are derived . overall , energy costs of channel uncertainty and queueing constraints are identified .
quantum mechanics ( qm ) and general relativity ( gr ) have modified our understanding of the physical world in depth .but they have left us with a general picture of the physical world which is unclear , incomplete , and fragmented . combining what we havelearn about our world from the two theories and finding a new synthesis is a major challenge , perhaps _ the _ major challenge , in today s fundamental physics .the two theories have a opened a major scientific revolution , but this revolution is not completed .most of the physics of this century has been a sequel of triumphant explorations of the new worlds opened by qm and gr .qm lead to nuclear physics , solid state physics , and particle physics .gr to relativistic astrophysics , cosmology and is today leading us towards gravitational astronomy .the urgency of applying the two theories to larger and larger domains , the momentous developments , and the dominant pragmatic attitude of the middle of this century , have obscured the fact that a consistent picture of the physical world , more or less stable for three centuries , has been lost with the advent of qm and gr .this pragmatic attitude can not be satisfactory , or productive , in the long run .the basic cartesian - newtonian notions such as matter , space , time , causality , have been modified in depth .the new notions do not stay together . at the basis of our understanding of the worldreigns a surprising confusion . from qmand gr we know that we live in a spacetime with quantum properties , that is , a _quantum spacetime_. but what is a quantum spacetime ? in the last decade , the attention of the theoretical physicists has been increasingly focusing on this major problem .whatever the outcome of the enterprise , we are witnessing a large scale intellectual effort for accomplishing a major aim : completing the xxth scientific revolution , and finding a new synthesis . in this effort ,physics is once more facing _conceptual _ problems : _ what is matter ? what is causality ? what is the role of the observer in physics ?what is time ?what is the meaning of `` being somewhere '' ?what is the meaning of `` now '' ?what is the meaning of `` moving '' ? is motion to be defined with respect to objects or with respect to space ? _ these foundational questions , or sophisticated versions of these questions , were central in the thinking and in the results of einstein , heisenberg , bohr , dirac and their colleagues .but these are also precisely the same questions that descartes , galileo , huygens , newton and their contemporaries debated with passion the questions that lead them to create modern science . for the physicists of the middle of this century ,these questions were irrelevant : one does not need to worry about first principles in order to apply the schrdinger equation to the helium atom , or to understand how a neutron star stays together .but today , if we want to find a novel picture of the world , if we want to understand what is quantum spacetime , we have to return , once again , to those foundational issues . we have to find a new answer to these questions different from newton s answer which took into account what we have learned about the world with qm and gr .of course , we have little , if any , direct empirical access to the regimes in which we expect genuine quantum gravitational phenomena to appear .anything could happen at those fantastically small distance scales , far removed from our experience . 
nevertheless , we do have information about quantum gravity , and we do have indications on how to search it .in fact , we are precisely in one of the very typical situations in which good fundamental theoretical physics has been working at its best in the past : we have learned two new extremely general `` facts '' about our world , qm and gr , and we have `` just '' to figure out what they imply , when taken together .the most striking advances in theoretical physics happened in situations analogous to this one . here, i present some reflections on these issues . what have we learned about the world from qm and ,especially , gr ?what do we know about space , time and matter ?what can we expect from a quantum theory of spacetime ?to which extent does taking qm and gr into account force us to modify the notion of time ?what can we already say about quantum spacetime ?i present also a few reflections on issues raised by the relation between philosophy of science and research in quantum gravity .i am not a philosopher , and i can touch philosophical issues only at the risk of being naive .i nevertheless take this risk here , encouraged by craig callender and nick huggett extremely stimulating idea of this volume .i present some methodological considerations how shall we search ?how can the present successful theories can lead us towards a theory that does not yet exist? as well as some general consideration .in particular , i discuss the relation between physical theories that supersed each others and the attitude we may have with respect to the truth - content of a physical theory , with respect to the reality of the theoretical objects the theory postulates in particular , and to its factual statements on the world in general .i am convinced of the reciprocal usefulness of a dialog between physics and philosophy ( rovelli 1997a ) .this dialog has played a major role during the other periods in which science faced foundational problems . in my opinion , most physicists underestimate the effect of their own epistemological prejudices on their research . and many philosophers underestimate the influence positive or negative they have on fundamental reserach .on the one hand , a more acute philosphical awarness would greatly help the physicists engaged in fundamental research : newton , heisenberg and einstein could nt have done what they have done if they were nt nurtured by ( good or bad ) philosophy . 
on the other hand ,i wish contemporary philosophers concerned with science would be more interested in the ardent lava of the foundational problems science is facing today .it is here , i believe , that stimulating and vital issues lie .what is the task of a quantum theory of gravity , and how should we search for such a theory ?the task of the search is clear and well defined .it is determined by recalling the three major steps that lead to the present problematic situation .the first step is in the works of faraday , maxwell and einstein .faraday and maxwell have introduced a new fundamental notion in physics , the field .faraday s book includes a fascinating chapter with the discussion of whether the field ( in faraday s terminology , the `` lines of force '' ) is `` real '' .as far as i understand this subtle chapter ( understanding faraday is tricky : it took the genius of maxwell ) , in modern terms what faraday is asking is whether there are independent degrees of freedom in the electric and magnetic fields .a degree of freedom is a quantity that i need to specify ( more precisely : whose value and whose time derivative i need to specify ) in order to be able to predict univocally the future evolution of the system .thus faraday is asking : if we have a system of interacting charges , and we know their positions and velocities , is this knowledge sufficient to predict the future motions of the charges ? or rather , in order to predict the future , we have to specify the instantaneous configuration of the field ( the fields degrees of freedom ) , as well ?the answer is in maxwell equations : the field has independent degrees of freedom .we can not predict the future evolution of the system from its present state unless we know the instantaneous field configuration . learning to use these degrees of freedom lead to radio , tv and cellular phone .to which physical entity do the degrees of freedom of the electromagnetic field refer ?this was one of the most debated issues in physics towards the end of last century .the electromagnetic waves have aspects in common with water waves , or with sound waves , which describe vibrations of some material medium .the natural interpretation of the electromagnetic field was that it too describes the vibrations of some material medium for which the name `` ether '' was chosen .a strong argument supports this idea : the wave equations for water or sound waves fail to be galilean invariant .they do so because they describe propagation over a medium ( water , air ) whose state of motion breaks galilean invariance and defines a preferred reference frame .maxwell equations break galilean invariance as well and it was thus natural to hypothesize a material medium determining the preferred reference frame . buta convincing dynamical theory of the ether compatible with the various experiments ( for instance on the constancy of the speed of light ) could not be found .rather , physics took a different course .einstein _ believed _ maxwell theory as a fundamental theory and _ believed _ the galilean insight that velocity is relative and inertial system are equivalent .merging the two , he found special relativity .a main result of special relativity is that the field can not be regarded as describing vibrations of underlying matter .the idea of the ether is abandoned , and _ the field has to be taken seriously as elementary constituent of reality_. this is a major change from the ontology of cartesian - newtonian physics . 
in the best description we can give of the physical world, there is a new actor : the field .the electromagnetic field can be described by the maxwell potential .the entity described by ( more precisely , by a gauge - equivalent class of s ) is one of the elementary constituents of the physical world , according to the best conceptual scheme physics has find , so far , for grasping our world .the second step ( out of chronological order ) is the replacement of the mechanics of newton , lagrange and hamilton with quantum mechanics ( qm ) . as did classical mechanics ,qm provides a very general framework . by formulating a specific dynamical theory within this framework, one has a number of important physical consequences , substantially different from what is implied by the newtonian scheme .evolution is probabilistically determined only ; some physical quantities can take certain discrete values only ( are `` quantized '' ) ; if a system can be in a state , where a physical quantity has value , as well as in state , where has value , then the system can also be in states ( denoted ) where , has value with probability , or , alternatively , with probability ( superposition principle ) ; conjugate variables can not be assumed to have value at the same time ( uncertainty principle ) ; and what we can say about the properties that the system will have the - day - after - tomorrow is not determined just by what we can say about the system today , but also on what we will be able to say about the system tomorrow .( bohr would had simply said that observations affect the system .formulations such as bohm s or consistent histories force us to use intricate wording for naming the same physical fact . )the formalism of qm exists in a number of more or less equivalent versions : hilbert spaces and self - adjoint observables , feynman s sum over histories , algebraic formulation , and others . often , we are able to translate from one formulation to another . however , often we can not do easily in one formulation , what we can do in another .qm is not the theory of micro - objects .it is our best form of mechanics .if quantum mechanics failed for macro - objects , we would have detected the boundary of its domain of validity in mesoscopic physics .we havent .the classical regime raises some problems ( why effects of macroscopic superposition are difficult to detect ? ) .solving these problems requires good understanding of physical decoherence and perhaps more .but there is no reason to doubt that qm represents a deeper , not a shallower level of understanding of nature than classical mechanics .trying to resolve the difficulties in our grasping of our quantum world by resorting to old classical intuition is just lack of courage .we have learned that the world has quantum properties .this discovery will stay with us , like the discovery that velocity is only relational or like the discovery that the earth is not the center of the universe .the empirical success of qm is immense .its physical obscurity is undeniable .physicists do not yet agree on what qm precisely says about the world ( the difficulty , of course , refers to physical meaning of notions such as `` measurement '' , `` history '' , `` hidden variable '' , ) .it is a bit like the lorentz transformations before einstein : correct , but what do they mean ? 
in my opinion , what qm means is that the contingent ( variable ) properties of any physical system , or the state of the system , are relational notion which only make sense when referred to a second physical system .i have argued for this thesis in ( rovelli 1996 , rovelli 1998 ) .however , i will not enter in this discussion here , because the issue of the interpretation of qm has no direct connection with quantum gravity . quantum gravity and the interpretation of qmare two major but ( virtually ) completely unrelated problems .qm was first developed for systems with a finite number of degrees of freedom .as discussed in the previous section , faraday , maxwell and einstein had introduced the field , which has an infinite number of degrees of freedom .dirac put the two ideas together .he _ believed _ quantum mechanics and he _ believed _maxwell s field theory much beyond their established domain of validity ( respectively : the dynamics of finite dimensional systems , and the classical regime ) and constructed quantum field theory ( qft ) , in its first two incarnations , the quantum theory of the electromagnetic field and the relativistic quantum theory of the electron . in this exercise ,dirac derived the existence of the photon just from maxwell theory and the basics of qm .furthermore , by just _ believing _ special relativity and _ believing _ quantum theory , namely assuming their validity far beyond their empirically explored domain of validity , he predicted the existence of antimatter .the two embryonal qft s of dirac were combined in the fifties by feynman and his colleagues , giving rise to quantum electrodynamics , the first nontrivial interacting qft .a remarkable picture of the world was born : quantum fields over minkowski space .equivalently , la feynman : the world as a quantum superposition of histories of real and virtual interacting particles .qft had ups and downs , then triumphed with the standard model : a consistent qft for all interactions ( except gravity ) , which , in principle , can be used to predict anything we can measure ( except gravitational phenomena ) , and which , in the last fifteen years has received nothing but empirical verifications .descartes , in _le monde _ , gave a fully relational definition of localization ( space ) and motion ( on the relational / substantivalist issue , see earman and norton 1987 , barbour 1989 , earman 1989 , rovelli 1991a , belot 1998 ) . according to descartes, there is no `` empty space '' .there are only objects , and it makes sense to say that an object a is contiguous to an object b. the `` location '' of an object a is the set of the objects to which a is contiguous .`` motion '' is change in location .that is , when we say that a moves we mean that a goes from the contiguity of an object b to the contiguity of an object c. a consequence of this relationalism is that there is no meaning in saying `` a moves '' , except if we specify with respect to which other objects ( b , c , ) it is moving . thus , there is no `` absolute '' motion .this is the same definition of space , location , and motion , that we find in aristotle .relationalism , namely the idea that motion can be defined only in relation to other objects , should not be confused with galilean relativity .galilean relativity is the statement that `` rectilinear uniform motion '' is a priori indistinguishable from stasis .namely that velocity ( but just velocity ! 
) , is relative to other bodies .relationalism holds that _ any _ motion ( however zigzagging ) is a priori indistinguishable from stasis .the very formulation of galilean relativity requires a nonrelational definition of motion ( `` rectilinear and uniform '' with respect to what ? ) .newton took a fully different course .he devotes much energy to criticise descartes relationalism , and to introduce a different view . according to him , _space _ exists .it exists even if there are no bodies in it .location of an object is the part of space that the object occupies .motion is change of location .thus , we can say whether an object moves or not , irrespectively from surrounding objects .newton argues that the notion of absolute motion is necessary for constructing mechanics .his famous discussion of the experiment of the rotating bucket in the _ principia _ is one of the arguments to prove that motion is absolute .this point has often raised confusion because one of the corollaries of newtonian mechanics is that there is no detectable preferred referential frame . thereforethe notion of _ absolute velocity _ is , actually , meaningless , in newtonian mechanics .the important point , however , is that in newtonian mechanics velocity is relative , but any other feature of motion is not relative : it is absolute .in particular , acceleration is absolute .it is acceleration that newton needs to construct his mechanics ; it is acceleration that the bucket experiment is supposed to prove to be absolute , against descartes . in a sense ,newton overdid a bit , introducing the notion of absolute position and velocity ( perhaps even just for explanatory purposes ? ) .many people have later criticised newton for his unnecessary use of absolute position . butthis is irrelevant for the present discussion .the important point here is that newtonian mechanics requires absolute acceleration , against aristotle and against descartes .precisely the same does special relativistic mechanics . similarly , newton introduce absolute time .newtonian space and time or , in modern terms , spacetime , are like a _ stage _ over which the action of physics takes place , the various dynamical entities being the actors .the key feature of this stage , newtonian spacetime , is its metrical structure .curves have length , surfaces have area , regions of spacetime have volume .spacetime points are at fixed _ distance _ the one from the other .revealing , or measuring , this distance , is very simple .it is sufficient to take a rod and put it between two points .any two points which are one rod apart are at the same distance .using modern terminology , physical space is a linear three - dimensional ( 3d ) space , with a preferred metric . on this space thereexist preferred coordinates , in terms of which the metric is just .time is described by a single variable .the metric determines lengths , areas and volumes and defines what we mean by straight lines in space .if a particle deviates with respect to this straight line , it is , according to newton , accelerating .it is not accelerating with respect to this or that dynamical object : it is accelerating in absolute terms .special relativity changes this picture only marginally , loosing up the strict distinction between the `` space '' and the `` time '' components of spacetime . 
in newtonianspacetime , space is given by fixed 3d planes .in special relativistic spacetime , which 3d plane you call space depends on your state of motion .spacetime is now a 4d manifold with a flat lorentzian metric .again , there are preferred coordinates , in terms of which $ ] .this tensor , , enters all physical equations , representing the determinant influence of the stage and of its metrical properties on the motion of anything .absolute acceleration is deviation of the world line of a particle from the straight lines defined by .the only essential novelty with special relativity is that the `` dynamical objects '' , or `` bodies '' moving over spacetime now include the fields as well .example : a violent burst of electromagnetic waves coming from a distant supernova has _ traveled across space _ and has reached our instruments . for the rest , the newtonian construct of a fixed background stage over which physics happen is not altered by special relativity .the profound change comes with general relativity ( gr ) .the central discovery of gr , can be enunciated in three points .one of these is conceptually simple , the other two are tremendous .first , the gravitational force is mediated by a field , very much like the electromagnetic field : the gravitational field .second , newton s _ spacetime _ , the background stage that newton introduced introduced , against most of the earlier european tradition , _ and the gravitational field , are the same thing_. third , the dynamics of the gravitational field , of the other fields such as the electromagnetic field , and any other dynamical object , is fully relational , in the aristotelian - cartesian sense .let me illustrate these three points .first , the gravitational field is represented by a field on spacetime , , just like the electromagnetic field .they are both very concrete entities : a strong electromagnetic wave can hit you and knock you down ; and so can a strong gravitational wave . the gravitational field has independent degrees of freedom , and is governed by dynamical equations , the einstein equations .second , the spacetime metric disappears from all equations of physics ( recall it was ubiquitous ) . at its place weare instructed by gr we must insert the gravitational field .this is a spectacular step : newton s background spacetime was nothing but the gravitational field !the stage is promoted to be one of the actors .thus , in all physical equations one now sees the direct influence of the gravitational field . 
how can the gravitational field determine the metrical properties of things , which are revealed , say , by rods and clockssimply , the inter - atomic separation of the rods atoms , and the frequency of the clock s pendulum are determined by explicit couplings of the rod s and clock s variables with the gravitational field , which enters the equations of motion of these variables .thus , any measurement of length , area or volume is , in reality , a measurement of features of the gravitational field .but what is really formidable in gr , the truly momentous novelty , is the third point : the einstein equations , as well as _ all other equations of physics _ appropriately modified according to gr instructions , are fully relational in the aristotelian - cartesian sense .this point is independent from the previous one .let me give first a conceptual , then a technical account of it .the point is that the only physically meaningful definition of location that makes physical sense within gr is relational .gr describes the world as a set of interacting fields and , possibly , other objects .one of these interacting fields is .motion can be defined only as positioning and displacements of these dynamical objects relative to each other ( for more details on this , see rovelli 1991a and especially 1997a ) . to describe the motion of a dynamical object , newton had to assume that acceleration is absolute , namely it is not relative to this or that other dynamical object .rather , it is relative to a background space .faraday maxwell and einstein extended the notion of `` dynamical object '' : the stuff of the world is fields , not just bodies .finally , gr tells us that the background space is itself one of these fields .thus , the circle is closed , and we are back to relationalism : newton s motion with respect to space is indeed motion with respect to a dynamical object : the gravitational field .all this is coded in the active diffeomorphism invariance ( diff invariance ) of gr . with the equations of motionriemann[g]=0 ) might require a detailed analysis ( for instance , hamiltonian ) of the theory .] because active diff invariance is a gauge , the physical content of gr is expressed only by those quantities , derived from the basic dynamical variables , which are fully independent from the points of the manifold . in introducing the background stage ,newton introduced two structures : a spacetime manifold , and its non - dynamical metric structure .gr gets rid of the non - dynamical metric , by replacing it with the gravitational filed .more importantly , it gets rid of the manifold , by means of active diff invariance . in gr ,the objects of which the world is made do not live over a stage and do not live on spacetime : they live , so to say , over each other s shoulders .of course , nothing prevents us , if we wish to do so , from singling out the gravitational field as `` the more equal among equals '' , and declaring that location is absolute in gr , because it can be defined with respect to it . 
but this can be done within any relationalism : we can always single out a set of objects , and declare them as not - moving by definition .the problem with this attitude is that it fully misses the great einsteinian insight : that newtonian spacetime is just one field among the others .more seriously , this attitude sends us into a nightmare when we have to deal with the motion of the gravitational field itself ( which certainly `` moves '' : we are spending millions for constructing gravity wave detectors to detect its tiny vibrations ) .there is no absolute referent of motion in gr : the dynamical fields `` move '' with respect to each other .notice that the third step was not easy for einstein , and came later than the previous two .having well understood the first two , but still missing the third , einstein actively searched for non - generally covariant equations of motion for the gravitational field between 1912 and 1915 . with his famous `` hole argument ''he had convinced himself that generally covariant equations of motion ( and therefore , in this context , active diffeomorphism invariance ) would imply a truly dramatic revolution with respect to the newtonian notions of space and time ( on the hole argument , see earman and norton 1987 , rovelli 1991a , belot 1998 ) . in 1912he was not able to take this profoundly revolutionary step ( norton 1984 , stachel 1989 ) . in 1915he took this step , and found what landau calls `` the most beautiful of the physical theories '' . at the light of the three steps illustrated above, the task of quantum gravity is clear and well defined .he have learned from gr that spacetime is a dynamical field among the others , obeying dynamical equations , and having independent degrees of freedom .a gravitational wave is extremely similar to an electromagnetic wave .we have learned from qm that every dynamical object has quantum properties , which can be captured by appropriately formulating its dynamical theory within the general scheme of qm ._ therefore _ , spacetime itself must exhibit quantum properties .its properties , including the metrical properties it defines , must be represented in quantum mechanical terms .notice that the strength of this `` therefore '' derives from the confidence we have in the two theories , qm and gr . now , there is nothing in the basics of qm which contradicts the physical ideas of gr .similarly , there is nothing in the basis of gr that contradicts the physical ideas of qm .therefore , there is no a priori impediment in searching for a quantum theory of the gravitational fields , that is , a quantum theory of spacetime .the problem is ( with some qualification ) rather well posed : is there a quantum theory ( say , in one formulation , a hilbert space , and a set of self - adjoint operators ) whose classical limit is gr ? on the other hand , all previous applications of qm to _ field _ theory , namely conventional qft s , rely heavily on the existence of the `` stage '' , the fixed , non - dynamical , background metric structure .the minkowski metric essentially for the construction of a conventional qft ( in enters everywhere ; for instance , in the canonical commutation relations , in the propagator , in the gaussian measure ) .we certainly can not simply replace with a quantum field , because all equations become nonsense . 
therefore , to search for a quantum theory of gravity , we have two possible directions .one possibility is to `` disvalue '' the gr conceptual revolution , reintroduce a background spacetime with a non - dynamical metric , expand the gravitational field as , quantize only the fluctuations , and hope to recover the full of gr somewhere down the road .this is the road followed for instance by perturbative string theory .the second direction is to be faithful to what we have learned about the world so far .namely to the qm and the gr insights .we must then search a qft that , genuinely , does not require a background space to be defined .but the last three decades whave been characterized by the great success of conventional qft , which neglects gr and is based on the existence of a background spacetime .we live in the aftermath of this success .it is not easy to get out from the mental habits and from the habits to the technical tools of conventional qft .still , this is necessary if we want to build a qft which fully incorporates active diff invariance , and in which localization is fully relational . in my opinion, this is the right way to go .spacetime , or the gravitational field , is a dynamical entity ( gr ) .all dynamical entities have quantum properties ( qm ) .therefore spacetime is a quantum object .it must be described ( picking one formulation of qm , but keeping in mind that others may be equivalent , or more effective ) in terms of states in a hilbert space .localization is relational .therefore these states can not represent quantum excitations localized in some space .they must define space themselves .they must be quantum excitations `` of '' space , not `` in '' space .physical quantities in gr , that capture the true degrees of freedom of the theory are invariant under active diff. therefore the self - adjoint operators that correspond to physical ( predictable ) observables in quantum gravity must be associated to diff invariant quantities .examples of diff - invariant geometric quantities are physical lengths , areas , volumes , or time intervals , of regions determined by dynamical physical objects .these must be represented by operators .indeed , a measurement of length , area or volume is a measurement of features of the gravitational field .if the gravitational field is a quantum field , then length , area and volume are quantum observables. if the corresponding operator has discrete spectrum , they will be quantized , namely they can take certain discrete values only . in this sensewe should expect a discrete geometry .this discreteness of the geometry , implied by the conjunction of gr and qm is very different from the naive idea that the world is made by discrete bits of something .it is like the discreteness of the quanta of the excitations of an harmonic oscillator .a generic state of spacetime will be a continuous quantum superposition of states whose geometry has discrete features , not a collection of elementary discrete objects .a concrete attempt to construct such a theory , is loop quantum gravity .i refer the reader to rovelli ( 1997b ) for an introduction to the theory , an overview of its structure and results , and full references . 
here , i present only a few remarks on the theory .loop quantum gravity is a rather straightforward application of quantum mechanics to hamiltonian general relativity .it is a qft in the sense that it is a quantum version of a field theory , or a quantum theory for an infinite number of degrees of freedom , but it is profoundly different from conventional , non - general - relativistic qft . in conventional qft , states are quantum excitations of a field over minkowski ( or over a curved ) spacetime . in loop quantum gravity ,the quantum states turn out to be represented by ( suitable linear combinations of ) spin networks ( rovelli and smolin 1995a , baez 1996 , smolin 1997 ) .a spin network is an abstract graph with links labeled by half - integers .see figure 1 .intuitively , we can view each node of the graph as an elementary `` quantum chunk of space '' .the links represent ( transverse ) surfaces separating the quanta of space .the half - integers associated with the links determine the ( quantized ) area of these surfaces .the spin networks represent relational quantum states : they are not located in a space .localization must be defined in relation to them .for instance , if we have , say , a matter quantum excitation , this will be located on the spin network ; while the spin network itself is not located anywhere .the operators corresponding to area and volume have been constructed in the theory , simply by starting from the classical expression for the area in terms of the metric , then replacing the metric with the gravitational field ( this is the input of gr ) and then replacing the gravitational field with the corresponding quantum field operator ( this is the input of qm ) .the construction of these operators requires appropriate generally covariant regularization techniques , but no renormalization : no infinities appear .the spectrum of these operators has been computed and turns out to be discrete ( rovelli and smolin 1995b , ashtekar and lewandowski 1997a , 1997b ) .thus , loop quantum gravity provides a family of precise quantitative predictions : the quantized values of area and volume .for instance , the main sequence of the spectrum of the area is , up to a numerical factor of order unity , $a \sim \ell_{\mathrm{planck}}^{2} \sum_{i} \sqrt{ j_{i} ( j_{i} + 1 ) }$ , where $j_{1} , \dots , j_{n}$ is any finite sequence of half integers .this formula gives the area of a surface pinched by links of a spin network state .the half integers $j_{i}$ are the ones associated with the links that pinch the surface .this illustrates how the links of the spin network states can be viewed as transversal `` quanta of area '' .the picture of macroscopic physical space that emerges is then that of a tangle of one - dimensional intersecting quantum excitations , called the weave ( ashtekar , rovelli and smolin 1992 ) .continuous space is formed by the weave in the same manner in which the continuous 2d surface of a t - shirt is formed by woven threads .the aspect of gr s relationalism that concerns space was largely anticipated by earlier european thinking .much less so ( as far as i am aware ) was the aspect of this relationalism that concerns time .gr s treatment of time is surprising , difficult to fully appreciate , and hard to digest .the time of our perceptions is very different from the time that theoretical physics finds in the world as soon as one exits the minuscule range of physical regimes we are accustomed to .we seem to have a very special difficulty in being open minded about this particular notion .already special relativity teaches us something about time which many of us have difficulty accepting .
according to special relativity , there is absolutely no meaning in saying `` right now on andromeda '' .there is no physical meaning in the idea of `` the state of the world right now '' , because which set of events we consider as `` now '' is perspectival .the `` now '' on andromeda for me might correspond to `` a century ago '' on andromeda for you .thus , there is no single well defined universal time in which the history of the universe `` happens '' .the modification of the concept of time introduced by gr is much deeper .let me illustrate this modification .consider a simple pendulum described by a variable $q$ . in newtonian mechanics ,the motion of the pendulum is given by the evolution of $q$ in time , namely by $q ( t )$ , which is governed by the equation of motion , say $\ddot{q} = - \omega^{2} q$ , which has ( the two - parameter family of ) solutions $q ( t ) = a \sin ( \omega t + \phi )$ .the state of the pendulum at time $t$ can be characterized by its position $q$ and velocity $\dot{q}$ . from these two , we can compute $a$ and $\phi$ , and therefore $q ( t )$ at any $t$ . from the physical point of view , we are really describing a situation in which there are _ two _ physical objects : a pendulum , whose position is $q$ , and a clock , indicating $t$ .if we want to take data , we have to repeatedly observe $q$ and $t$ .their _ relation _ will be given by the equation above .the relation can be represented ( for given $a$ and $\phi$ ) by a line in the $( q , t )$ plane . in newtonian terms , time flows in its absolute way , the clock is just a device to keep track of it , and the dynamical system is formed by the pendulum alone .but we can view the same physical situation from a different perspective .we can say that we have a physical system formed by the clock and the pendulum together and view the dynamical system as expressing the relative motion of one with respect to the other .this is precisely the perspective of gr : to express the relative motion of the variables , with respect to each other , in a `` democratic '' fashion . to do that , we can introduce an `` arbitrary parameter time '' $\tau$ as a coordinate on the line in the $( q , t )$ plane .( but keep in mind that the physically relevant information is in the line , not in its coordinatization ! ) . then the line is represented by two functions , $q ( \tau )$ and $t ( \tau )$ , but a reparametrization of $\tau$ in the two functions is a gauge , namely it does not modify the physics described .indeed , $\tau$ does not correspond to anything observable , and the equations of motion satisfied by $q ( \tau )$ and $t ( \tau )$ ( easy to write , but i will not write them down here ) will be invariant under arbitrary reparametrizations of $\tau$ . only $\tau$ - independent quantities have physical meaning ( a minimal worked form of this parametrized description is sketched below ) .this is precisely what happens in gr , where the `` arbitrary parameters '' , analogous to the $\tau$ of the example , are the coordinates , namely the spatial coordinates $x$ and the temporal coordinate $t$ .these have no physical meaning whatsoever in gr : the connection between the theory and the measurable physical quantities that the theory predicts is only via quantities independent from $x$ and $t$ .thus , $x$ and $t$ in gr have a very different physical meaning than their homonyms in non - general - relativistic physics .the latter correspond to readings on rods and clocks .the former correspond to nothing at all . recall that einstein described his great intellectual struggle to find gr as `` understanding the meaning of the coordinates '' .
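returning to the pendulum - and - clock example , the following is a minimal worked sketch of its parametrized form ; the explicit small - oscillation , unit - mass lagrangian is an illustrative assumption made here only for concreteness , not part of the argument above . promoting $t$ to a dynamical variable $t ( \tau )$ alongside $q ( \tau )$ gives
\[
s [ q , t ] = \int dt \, \left( \tfrac{1}{2} \dot q^{\,2} - \tfrac{1}{2} \omega^{2} q^{2} \right) = \int d\tau \, \left( \frac{ q'^{\,2} }{ 2\, t' } - \tfrac{1}{2} \omega^{2} q^{2} \, t' \right) , \qquad q' = \frac{dq}{d\tau} , \quad t' = \frac{dt}{d\tau} .
\]
the second form is invariant under arbitrary reparametrizations $\tau \rightarrow \tau' ( \tau )$ , and because the lagrangian is homogeneous of degree one in the velocities the canonical hamiltonian vanishes identically ; all the dynamics is contained in the constraint
\[
c = p_{t} + \tfrac{1}{2} p_{q}^{2} + \tfrac{1}{2} \omega^{2} q^{2} \approx 0 , \qquad p_{q} = \frac{q'}{t'} , \qquad p_{t} = - \tfrac{1}{2} p_{q}^{2} - \tfrac{1}{2} \omega^{2} q^{2} .
\]
this is the ( weakly ) vanishing hamiltonian structure discussed next .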
in the example , the invariance of the equations of motion for $q ( \tau )$ and $t ( \tau )$ under reparametrizations of $\tau$ implies that if we develop the hamiltonian formalism in $\tau$ we obtain a constrained system with a ( weakly ) vanishing hamiltonian .this is because the hamiltonian generates evolution in $\tau$ , evolution in $\tau$ is a gauge , and the generators of gauge transformations are constraints . in canonical gr we have precisely the same situation : the hamiltonian vanishes , the constraints generate evolution in the coordinate time $t$ , which is unobservable : it is gauge .gr does not describe evolution in time : it describes the relative evolution of many variables with respect to each other .all these variables are democratically equal : there is not a preferred one that `` is the true time '' .this is the temporal aspect of gr s relationalism .a large part of the machinery of theoretical physics relies on the notion of time ( on the different meanings of time in different physical theories , see rovelli 1995 ) .a theory of quantum gravity should do without it .fortunately , many essential tools that are usually introduced using the notion of time can equally well be defined without mentioning time at all .this , by the way , shows that time plays a much weaker role in the structure of theoretical physics than what is mostly assumed .two crucial examples are `` phase space '' and `` state '' .the phase space is usually introduced in textbooks as the space of the states of the system `` at a given time '' . in a general relativistic context , this definition is useless .however , it has been known since lagrange that there is an alternative , equivalent , definition of phase space as the space of the solutions of the equations of motion .this definition does not require that we know what we mean by time .thus , in the example above the phase space can be coordinatized by $a$ and $\phi$ , which coordinatize the space of the solutions of the equations of motion .a time independent notion of `` state '' is then provided by a point of this phase space , namely by a particular solution of the equations of motion .for instance , for an oscillator a `` state '' , in this atemporal sense , is characterized by an amplitude $a$ and a phase $\phi$ .notice that given the ( time - independent ) state ( $a$ and $\phi$ ) , we can compute any observable : in particular , the value of $q$ at any desired $t$ .notice also that this state is independent from the time $t$ .this point often raises confusion : one may think that if we restrict to $t$ - independent quantities then we can not describe evolution .this is wrong : the true evolution is the relation between $q$ and $t$ , and this relation can be expressed in terms of $t$ - independent quantities .this relation is expressed in particular by the value ( let us denote it $q_{t_{0}}$ ) of $q$ at a given $t = t_{0}$ . $q_{t_{0}}$ is given , obviously , by $q_{t_{0}} = a \sin ( \omega t_{0} + \phi )$ . this can be seen as a one - parameter ( the parameter is $t_{0}$ ) family of observables on the gauge invariant phase space coordinatized by $a$ and $\phi$ . notice that this is a perfectly $\tau$ - independent ( gauge invariant ) expression .in fact , an explicit computation ( sketched below ) shows that the poisson bracket between $q_{t_{0}}$ and the hamiltonian constraint that generates evolution in $\tau$ vanishes .this time independent notion of state is well known in its quantum mechanical version : it is the heisenberg state ( as opposed to the schrödinger state ) .similarly , the operator corresponding to the observable $q_{t_{0}}$ is the heisenberg operator that gives the value of $q$ at $t = t_{0}$ .the heisenberg and schrödinger pictures are equivalent if there is a normal time evolution in the theory .
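the explicit computation mentioned above can be sketched as follows , using the constraint $c = p_{t} + \tfrac{1}{2} p_{q}^{2} + \tfrac{1}{2} \omega^{2} q^{2}$ of the parametrized example ( the specific harmonic form is , again , an illustrative assumption ) . on the extended phase space $( q , p_{q} , t , p_{t} )$ the observable `` the value of $q$ at $t = t_{0}$ '' can be written
\[
q_{t_{0}} = q \, \cos \big( \omega ( t_{0} - t ) \big) + \frac{ p_{q} }{ \omega } \, \sin \big( \omega ( t_{0} - t ) \big) ,
\]
which on a solution $q ( t ) = a \sin ( \omega t + \phi )$ , $p_{q} = a \omega \cos ( \omega t + \phi )$ reduces to $a \sin ( \omega t_{0} + \phi )$ , that is , to a function of $a$ and $\phi$ alone . a direct evaluation of the poisson bracket gives
\[
\{ q_{t_{0}} , c \} = p_{q} \cos \big( \omega ( t_{0} - t ) \big) - \omega q \sin \big( \omega ( t_{0} - t ) \big) + \frac{ \partial q_{t_{0}} }{ \partial t } = 0 ,
\]
since $\partial q_{t_{0}} / \partial t = \omega q \sin ( \omega ( t_{0} - t ) ) - p_{q} \cos ( \omega ( t_{0} - t ) )$ . the evolving constant $q_{t_{0}}$ therefore commutes with the constraint for every value of the label $t_{0}$ ; its quantum counterpart is the corresponding heisenberg operator .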
in the absence of a normal notion of time evolution , the heisenberg picture remains viable , while the schrödinger picture becomes meaningless .in quantum gravity , only the heisenberg picture makes sense ( rovelli 1991c , 1991d ) . in classical gr ,a point in the physical phase space , or a state , is a solution of the einstein equations , up to active diffeomorphisms .a state represents a `` history '' of spacetime .the quantities that can be univocally predicted are the ones that are independent from the coordinates , namely that are invariant under diffeomorphisms .these quantities have vanishing poisson brackets with all the constraints . given a state ,the value of each of these quantities is determined . in quantum gravity ,a quantum state represents a `` history '' of quantum spacetime .the observables are represented by operators that commute with _ all _ the quantum constraints .if we know the quantum state of spacetime , we can then compute the expectation value of any diffeomorphism invariant quantity , by taking the mean value of the corresponding operator .the observable quantities in quantum gravity are precisely the same as in classical gr .some of these quantities may express the value of certain variables `` when and where '' certain other quantities have certain given values .they are the analog of the reparametrization invariant observable $q_{t_{0}}$ in the example above .these quantities describe evolution in a way which is fully invariant under the unphysical gauge evolution in the parameter time ( rovelli 1991d , 1991e ) .the corresponding quantum operators are heisenberg operators .there is no schrödinger picture , because there is no unitary time evolution .there is no need to expect or to search for unitary time evolution in quantum gravity , because there is no time in which we should have unitary evolution .a die - hard prejudice holds that unitary evolution is required for the consistency of the probabilistic interpretation .this idea is wrong .what i have described is the general form that one may expect a quantum theory of gr to have .i have used the hilbert space version of qm ; but this structure can be translated into other formulations of qm .of course , physics then works with dirty hands : gauge dependent quantities , approximations , expansions , unphysical structures , and so on . a fully satisfactory construction of the above does not yet exist . a concrete attempt to construct the physical states and the physical observables in loop quantum gravity is given by the spin foam models approach , which is the formulation one obtains by starting from loop quantum gravity and constructing a feynman sum over histories ( reisenberger and rovelli 1997 , baez 1998 , barret and crane 1998 ) . see ( baez 1999 ) in this volume for more details on the ideas underlying these developments .in quantum gravity , i see no reason to expect a fundamental notion of time to play any role .but the _ nostalgia for time _ is hard to resist , for technical as well as for emotional reasons .many approaches to quantum gravity go out of their way to reinsert in the theory what gr is teaching us we should abandon : a preferred time .the time `` along which '' things happen is a notion which makes sense only for describing a limited regime of reality .this notion is meaningless already in the ( gauge invariant ) general relativistic classical dynamics of the gravitational field .
at the fundamental level , we should , simply , forget time .i close this section by briefly mentioning two more speculative ideas .one regards the emergence of time , the second the connection between the relationalism in gr and the relationalism in qm .( i ) in the previous section , i have argued that we should search for a quantum theory of gravity in which there is no independent time variable `` along '' which dynamics `` happens '' .a problem left open by this position is to understand the emergence of time in our world , with its features , which are familiar to us .an idea discussed in ( rovelli 1993a , 1993b , connes and rovelli 1994 ) is that the notion of time is not dynamical but rather thermodynamical .we can never give a complete account of the state of a system in a field theory ( we can not access the infinite amount of data needed to completely characterize a state ) .therefore we have at best a statistical description of the state . given a statistical state of a generally covariant system , a notion of a flow ( more precisely a one - parameter group of automorphisms of the algebra of the observables ) follows immediately . in the quantum context , this corresponds to the tomita flow of the state .the relation between this flow and the state is the same as the relation between the time flow generated by the hamiltonian and a gibbs state : the two essentially determine each other . in the absence of a preferred time , however , any statistical state selects its own notion of statistical time .this statistical time has a striking number of properties that allow us to identify it with the time of non - general relativistic physics .in particular , a schrödinger equation with respect to this statistical time holds , in an appropriate sense .in addition , the time flows generated by different states are equivalent up to inner automorphisms of the observable algebra and therefore define a common `` outer '' flow : a one - parameter group of outer automorphisms .this determines a state independent notion of time flow , which shows that a generally covariant qft has an intrinsic `` dynamics '' , even in the absence of a hamiltonian and of a time variable .the suggestion is therefore that the temporal aspects of our world have a statistical and thermodynamical origin , rather than a dynamical one .`` time '' is ignorance : a reflex of our incomplete knowledge of the state of the world .( ii ) what is qm really telling us about our world ? in ( rovelli 1996 , 1998 ) , i have argued that what qm is telling us is that the contingent properties of any system ( or : the state of any system ) must be seen as relative to a second physical system , the `` observing system '' .
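the correspondence invoked here between a statistical state and a time flow can be made explicit in the simplest case , that of a density matrix $\rho$ on a hilbert space ; the formulas below are a standard illustration of the mechanism , given here only as a sketch ( $a$ denotes a generic observable ) . for a gibbs state the hamiltonian flow and the state determine each other ,
\[
\rho = \frac{ e^{ - \beta h } }{ z } \qquad \longleftrightarrow \qquad \alpha_{s} ( a ) = e^{ i h s } \, a \, e^{ - i h s } ,
\]
while , conversely , a generic faithful state $\rho$ defines its own ( tomita ) flow
\[
\alpha^{\rho}_{s} ( a ) = \rho^{ \, i s } \, a \, \rho^{ - i s } .
\]
defining $h_{\rho} = - \ln \rho$ , the state takes the gibbs form $\rho = e^{ - h_{\rho} }$ at unit inverse temperature with respect to the flow generated by $h_{\rho}$ , which coincides ( up to the usual sign conventions ) with the flow written above : this is the sense in which the state and its time flow essentially determine each other . the flows associated with different states differ only by inner automorphisms , and therefore project to the same one - parameter group of outer automorphisms , the state - independent flow mentioned above .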
that is , the quantum state and the values that observables take are relational notions , in the same sense in which velocity is relational in classical mechanics ( it is a relation between two systems , not a property of a single system ) .i find the consonance between this relationalism in qm and the relationalism in gr quite striking .it is tempting to speculate that they are related .any quantum interaction ( or quantum measurement ) involving a system $a$ and a system $b$ requires $a$ and $b$ to be spatiotemporally contiguous .vice versa , spatiotemporal contiguity , which is the grounding of the notions of space and time ( derived and dynamical , not primary , in gr ) can only be verified quantum mechanically ( just because any interaction is quantum mechanical in nature ) .thus , the net of the quantum mechanical elementary interactions and the spacetime fabric are actually the same thing .can we build a consistent picture in which we take this fact into account ? to do that , we must identify two notions : the notion of a spatiotemporal ( or spatial ? ) region , and the notion of quantum system . for intriguing ideas in this direction , see ( crane 1991 ) and , in this volume , ( baez 1999 ) .part of the recent reflection about science has emphasized the `` non cumulative '' aspect in the development of scientific knowledge .according to this view , the evolution of scientific theories is marked by large or small breaking points , in which , to put it very crudely , the empirical facts are just reorganized within new theories .these would be to some extent `` incommensurable '' with respect to their antecedents .these ideas have influenced physicists .the reader will have remarked that the discussion of quantum gravity i have given above assumes a different reading of the evolution of scientific knowledge .i have based the above discussion on quantum gravity on the idea that the central physical ideas of qm and gr represent our best guide for accessing the extreme and unexplored territories of the quantum - gravitational regime . in my opinion , the emphasis on the incommensurability between theories has probably clarified an important aspect of science , but risks obscuring something of the internal logic according to which , historically , physics finds knowledge .there is a subtle , but definite , cumulative aspect in the progress of physics , which goes far beyond the growth of validity and precision of the empirical content of the theories . in moving from a theory to the theory that supersedes it , we do not save just the verified empirical content of the old theory , but more .this `` more '' is a central concern for good physics .it is the source , i think , of the spectacular and undeniable predictive power of theoretical physics .let me illustrate the point i am trying to make with a historical case .there was a problem between maxwell equations and galilei transformations .there were two obvious ways out .to disvalue maxwell theory , degrading it to a phenomenological theory of some yet - to - be - discovered ether s dynamics . or to disvalue galilean invariance , accepting the idea that inertial systems are not equivalent in electromagnetic phenomena .both ways were pursued at the end of the century .both are sound applications of the idea that a scientific revolution may very well change in depth what old theories teach us about the world . which of the two ways did einstein take ?neither of them .
for einstein ,maxwell theory was a source of great awe .einstein rhapsodizes about his admiration for maxwell theory . for him , maxwell had opened a new window over the world .given the astonishing success of maxwell theory , empirical ( electromagnetic waves ) , technological ( radio ) as well as conceptual ( understanding what is light ) , einstein admiration is comprehensible .but einstein had a tremendous respect for galileo s insight as well .young einstein was amazed by a book with huygens derivation of collision theory virtually out of galilean invariance alone .einstein understood that galileo s great intuition that the notion of velocity is only relative _ could not be wrong_. i am convinced that in this faith of einstein in the core of the great galilean discovery there is very much to learn , for the philosophers of science , as well as for the contemporary theoretical physicists .so , einstein _ believed the two theories , maxwell and galileo_. he assumed that they would hold far beyond the regime in which they had been tested .he assumed that galileo had grasped something about the physical world , which was , simply , _correct_. and so had maxwell .of course , details had to be adjusted .the core of galileo s insight was that all inertial systems are equivalent and that velocity is relative , not the details of the galilean transformations .einstein knew the lorentz transformations ( found , of course , by lorentz , not by einstein ) , and was able to see that they do not contradict galileo s insight .if there was contradiction in putting the two together , the problem was ours : we were surreptitiously sneaking some incorrect assumption into our deductions .he found the incorrect assumption , which , of course , was that simultaneity could be well defined .it was einstein s faith in the _ essential physical correctness _ of the old theories that guided him to his spectacular discovery .there are innumerable similar examples in the history of physics , that equally well could illustrate this point .einstein found gr `` out of pure thought '' , having newton theory on the one hand and special relativity the understanding that any interaction is mediated by a field on the other ; dirac found quantum field theory from maxwell equations and quantum mechanics ; newton combined galileo s insight that acceleration governs dynamics with kepler s insight that the source of the force that governs the motion of the planets is the sun the list could be long . in all these cases , confidence in the insight that came with some theory , or `` taking a theory seriously '' ,lead to major advances that largely extended the original theory itself . 
of course , far from me suggesting that there is anything simple , or automatic , in figuring out where the true insights are and in finding the way of making them work together .but what i am saying is that figuring out where the true insights are and finding the way of making them work together is the work of fundamental physics .this work is grounded on the _ confidence _ in the old theories , not on random search of new ones .one of the central concerns of modern philosophy of science is to face the apparent paradox that scientific theories change , but are nevertheless credible .modern philosophy of science is to some extent an after - shock reaction to the fall of newtonian mechanics .a tormented recognition that an extremely successful scientific theory can nevertheless be untrue .but it is a narrow - minded notion of truth the one which is questioned by the event of a successful physical theory being superseded by a more successful one .a physical theory , in my view , is a conceptual structure that we use in order to organize , read and understand the world , and make prediction about it .a successful physical theory is a theory that does so effectively and consistently . at the light of our experience, there is no reason not to expect that a more effective conceptual structure might always exist .therefore an effective theory may always show its limits and be replaced by a better one . on the other hand , however , a novel conceptualization can not but rely on what the previous one has already achieved .when we move to a new city , we are at first confused about its geography. then we find a few reference points , and we make a rough mental map of the city in terms of these points .perhaps we see that there is part of the city on the hills and part on the plane .as time goes on , the map gets better .but there are moments , in which we suddenly realize that we had it wrong .perhaps there were indeed two areas with hills , and we were previously confusing the two .or we had mistaken a big red building for the city hall , when it was only a residential construction .so we adjourn the mental map .sometime later , we have learned names and features of neighbors and streets ; and the hills , as references , fade away .the neighbors structure of knowledge is more effective that the hill / plane one the structure changes , but the knowledge increases . and the big red building ,now we know it , is not the city hall , and we know it forever .there are discoveries that are forever .that the earth is not the center of the universe , that simultaneity is relative .that we do not get rain by dancing . these are steps humanity takes , and does not take back . some of these discoveries amount simply to cleaning our thinking from wrong , encrusted , or provisional credences . but also discovering classical mechanics , or discovering electromagnetism , or quantum mechanics , are discoveries forever . 
not because the details of these theories can not change , but because we have discovered that a large portion of the world admits to be understood in certain terms , and this is a _ fact _ that we will have to keep facing forever .one of the thesis of this essay , is that general relativity is the expression of one of these insights , which will stay with us `` forever '' .the insight is that the physical world does not have a stage , that localization and motion are relational only , that diff - invariance ( or something physically analogous ) is required for any fundamental description of our world .how can a theory be effective even outside the domain for which it was found ?how could maxwell predict radio waves , dirac predict antimatter and gr predict black holes ?how can theoretical thinking be so magically powerful ?of course , we may think that these successes are chance , and historically deformed perspective .there are hundred of theories proposed , most of them die , the ones that survive are the ones remembered .there is alway somebody who wins the lottery , but this is not a sign that humans can magically predict the outcome of the lottery .my opinion is that such an interpretation of the development of science is unjust , and , worse , misleading .it may explain something , but there is more in science .there are tens of thousand of persons playing the lottery , there were only two relativistic theories of gravity , in 1916 , when einstein predicted that the light would be defected by the sun precisely by an angle of 1.75 .familiarity with the history of physics , i feel confident to claim , rules out the lottery picture .i think that the answer is simpler .somebody predicts that the sun will rise tomorrow , and the sun rises .it is not a matter of chance ( there are nt hundreds of people making random predictions on each sort of strange objects appearing at the horizon ) .the prediction that tomorrow the sun will rise , is sound .however , it is not granted either .a neutron star could rush in , close to the speed of light , and sweep the sun away .more philosophically , who grants me the right of induction ?why should i be confident that the sun would rise , just because it has been rising so many time in the past ?i do not know the answer to _ this _ question .but what i know is that the predictive power of a theory beyond its own domain is _ precisely of the same sort ._ simply , we learn something about nature ( whatever this mean ) . and what we learn is effective in guiding us to predict nature s behavior .thus , the spectacular predictive power of theoretical physics is nothing less and nothing more than common induction . 
and it is as comprehensible ( or as incomprehensible ) as my ability to predict that the sun will rise tomorrow .simply , nature around us happens to be full of regularities _ that we understand _ , whether or not we understand why regularities exist at all .these regularities give us strong confidence -although not certainty- that the sun will rise tomorrow , as well as in the fact that the basic facts about the world found with qm and gr will be confirmed , not violated , in the quantum gravitational regimes that we have not empirically probed .this view is not dominant nowadays in theoretical physics .other attitudes dominate .the `` pragmatic '' scientist ignores conceptual questions and physical insights , and only cares about developing a theory .this is an attitude , that has been successful in the sixties in getting to the standard model .the `` pessimistic '' scientist has little faith in the possibilities of theoretical physics , because he worries that all possibilities are open , and anything might happen between here and the planck length .the `` wild '' scientist observes that great scientists had the courage of breaking with old and respected ideas and assumptions , and explore new and strange hypothesis . from this observation, the `` wild '' scientist concludes that to do great science one has to explore strange hypotheses , and _ violate respected ideas_. the wildest the hypothesis , the best .i think wilderness in physics is sterile .the greatest revolutionaries in science were extremely , almost obsessively , conservative .so was certainly the greatest revolutionary , copernicus , and so was planck .copernicus was pushed to the great jump from his pedantic labor on the minute technicalities of the ptolemaic system ( fixing the equant ) .kepler was forced to abandon the circles by his extremely technical work on the details of mars orbit .he was using ellipses as approximations to the epicycle - deferent system , when he begun to realize that the approximation was fitting the data better than the ( supposedly ) exact curve . and extremely conservative were also einstein and dirac .their vertiginous steps ahead were not pulled out of the blue sky .they did not come from violating respected ideas , but , on the contrary , from respect towards physical insights . in physics, novelty has always emerged from new data and from a humble , devoted interrogation of the old theories . from turning these theories around and around , immerging into them , making them clash , merge ,talk , until , through them , the missing gear could be seen . 
in my opinion ,precious research energies are today lost in these attitudes .i worry that a philosophy of science that downplays the component of factual knowledge in physical theories might have part of the responsibility .if a physical theory is a conceptual structure that we use to organize , read and understand the world , then scientific thinking is not much different from common sense thinking .in fact , it is only a better instance of the same activity : thinking about the world .science is the enterprise of continuously exploring the possible ways of thinking about the world , and constantly selecting the ones that work best .if so , there can not be any qualitative difference between the theoretical notions introduced in science and the terms in our everyday language .a fundamental intuition of classical empiricism is that nothing grants us the `` reality '' of the referents of the notions we use to organize our perceptions .some modern philosophy of science has emphasized the application of this intuition to the concepts introduced by science .thus , we are warned to doubt the `` reality '' of the theoretical objects ( electrons , fields , black holes ) .i find these warning incomprehensible .not because they are ill founded , but because they are not applied consistently .the fathers of empiricism consistently applied this intuition to _ any _ physical object . who grants me the reality of a chair ?why should a chair be more than a theoretical concept organizing certain regularities in my perceptions ?i will not venture here in disputing nor in agreeing with this doctrine .what i find incomprehensible is the position of those who grant the solid status of reality to a chair , but not to an electron .the arguments against the reality of the electron apply to the chair as well .the arguments in favor of the reality of the chair apply to the electron as well . a chair , as well asan electron , is a concept that we use to organize , read and understand the world .they are equally real .they are equally volatile and uncertain .perhaps , this curious schizophrenic attitude of being antirealist with electrons and iron realist with chairs is the result of a complex historical evolution .first there was the rebellion against `` metaphysics '' , and , with it , the granting of confidence to science alone .from this point of view , metaphysical questioning on the reality of chairs is sterile true knowledge is in science .thus , it is to scientific knowledge that we apply empiricist rigor .but understanding science in empiricists terms required making sense of the raw empirical data on which science is based . 
with time , the idea of raw empirical data showed more and more its limits .the common sense view of the world was reconsidered as a player in our picture of knowledge .this common sense view should give us a language and a ground from which to start the old anti - metaphysical prejudice still preventing us , however , from applying empiricist rigor to this common sense view of the world as well .but if one is not interested in questioning the reality of chairs , for the very same reason why should one be interested in questioning the `` reality of the electrons '' ?again , i think this point is important for science itself .the factual content of a theory is our best tool .the faith in this factual content does not prevent us from being ready to question the theory itself , if sufficiently compelled to do so by novel empirical evidence or by putting the theory in relation to other things _ we know _ about the world .scientific antirealism , in my opinion , is not only a short sighted application of a deep classical empiricist insight ; it is also a negative influence over the development of science .h. stein ( 1999 ) has recently beautifully illustrated a case in which a great scientist , poincar , was blocked from getting to a major discovery ( special relativity ) by a philosophy that restrained him from `` taking seriously '' his own findings .science teaches us that our naive view of the world is imprecise , inappropriate , biased .it constructs better views of the world .electrons , if anything at all , are `` more real '' that chairs , not `` less real '' , in the sense that they ground a more powerful way of conceptualizing the world . on the other hand ,the process of scientific discovery , and the experience of this century in particular , has made us painfully aware of the provisional character of _ any _ form of knowledge .our mental and mathematical pictures of the world are only mental and mathematical pictures .this is true for abstract scientific theories as well as from the image we have of our dining room . nevertheless , the pictures are powerful and effective and we ca nt do any better than that .so , is there anything we can say with confidence about the `` real world '' ?a large part of the recent reflection on science has taught us that row data do not exist , and that any information about the world is already deeply filtered and interpreted by the theory .further than that , we could even think , as in the dream of berkeley , that there is no `` reality '' outside there .the european reflection ( and part of the american as well ) has emphasized the fact that truth is always internal to the theory , that we can never exit language , we can never exit the circle of discourse within which we are speaking. it might very well be so .but , if the only notion of truth is internal to the theory , then _ this internal truth _ is what we mean by truth .we can not exit from our own conceptual scheme .we can not put ourself outside our discourse .outside our theory . 
there may be no notion of truth outside our own discourse .but it is precisely `` from within the language '' that we can assert the reality of the world .and we certainly do so .indeed , it is more than that : it is structural to our language to be a language _ about _ the world , and to our thinking to be a thinking _ of _ the world .therefore , precisely because there is no notion of truth except the one in our own discourse , precisely for this reason , there is no sense in denying the reality of the world .the world is real , solid , and understandable by science .the best we can say about the physical world , and about what is there in the world , is what good physics says about it . at the same time , our perceiving , understanding , and conceptualizing the world is in continuous evolution , and science is the form of this evolution . at every stage ,the best we can say about the reality of the world is precisely what we are saying . the fact we will understand it better later on does not make our present understanding less valuable , or less credible .a map is not false because there is a better map , even if the better one looks quite different .searching for a fixed point on which to rest our restlessness , is , in my opinion , naive , useless and counterproductive for the development of science .it is only by believing our insights and , at the same time , questioning our mental habits , that we can go ahead .this process of cautious faith and self - confident doubt is the core of scientific thinking . exploring the possible ways of thinking of the world , being ready to subvert ,if required , our ancient prejudices , is among the greatest and the most beautiful of the human adventures . quantum gravity , in my view , in its effort to conceptualize quantum spacetime , and to modify in depth the notion of time , is a step of this adventure .* ashtekar a , rovelli c , smolin l ( 1992 ) . weaving a classical metric with quantum threads , _ physical review letters _ * 69 * , 237 .* ashtekar a , lewandowski j ( 1997a ) . quantum theory of gravity i : area operators , _ class and quantum grav _ * 14 * , a55a81 .* ashtekar a , lewandowski j ( 1997b ) . quantum theory of geometry ii : volume operators , gr - qc/9711031 . * baez j ( 1997 ) .spin networks in nonperturbative quantum gravity , in _ the interface of knots and physics _ , ed l kauffman ( american mathematical society , providence ) . * baez j ( 1998 ) .spin foam models , _ class quantum grav _ , * 15 * , 18271858 . * baez j ( 1999 ) . in `` physics meets philosophy at the planck scale '' ,c callender n hugget eds , cambridge university press , to appear .* barbour j ( 1989 ) . _absolute or relative motion ? _ ( cambridge university press , cambridge ) . * barret j , crane l ( 1998 ) .relativistic spin networks and quantum gravity , _ journ math phys _ * 39 * , 32963302 . * belot g ( 1998 ) . why general relativity does need an interpretation , _ philosophy of science _ , * 63 * , s80s88 . * connes a and rovelli c ( 1994 ) .von neumann algebra automorphisms and time versus thermodynamics relation in general covariant quantum theories , _ classical and quantum gravity _ * 11 * , 2899 .* crane l ( 1991 ) .2d physics and 3d topology , _ comm math phys _ * 135 * , 615640 . * descartes r ( 1983 ) : _ principia philosophiae _ , translated by vr miller and rp miller ( reidel , dordrecht [ 1644 ] ) .* earman j ( 1989 ) . 
_world enough and spacetime : absolute versus relational theories of spacetime _( mit press , cambridge ) .* earman j , norton j ( 1987 ) .what price spacetime substantivalism ?the hole story , _ british journal for the philosophy of science _ , * 38 * , 515525 . * isham c ( 1999 ) . in ``physics meets philosophy at the planck scale '' , c callender n hugget eds , cambridge university press , to appear .* newton i ( 1962 ) : _ de gravitatione et aequipondio fluidorum _ , translation in ar hall and mb hall eds _ unpublished papers of isaac newton _ ( cambridge university press , cambridge ) . *norton j d ( 1984 ) .how einstein found his field equations : 1912 - 1915 , _ historical studies in the physical sciences _ , * 14 * , 253315 . reprinted in _ einstein and the history of general relativity: einstein studies _ , d howard and j stachel eds . , vol.i , 101 - 159 ( birkhuser , boston ) . *penrose r ( 1995 ) . _the emperor s new mind _ ( oxford university press ) * reisenberger m , rovelli c ( 1997 ) . sum over surfaces form of loop quantum gravity , _ physical review _ , * d56 * , 34903508 , gr - qc/9612035 .* rovelli c ( 1991a ) .what is observable in classical and quantum gravity ?, _ classical and quantum gravity _ , * 8 * , 297 .* rovelli c ( 1991b ) .quantum reference systems , _ classical and quantum gravity _ , * 8 * , 317 .* rovelli c ( 1991c ) .quantum mechanics without time : a model , _ physical review _ , * d42 * , 2638 .* rovelli c ( 1991d ) .time in quantum gravity : an hypothesis , _ physical review _, * d43 * , 442 .* rovelli c ( 1991e ) .quantum evolving constants . _ physical review _ * d44 * , 1339 .* rovelli c ( 1993a ) .statistical mechanics of gravity and thermodynamical origin of time , _ classical and quantum gravity _ , * 10 * , 1549 .* rovelli c ( 1993b ) . the statistical state of the universe , _ classical and quantum gravity _ , * 10 * , 1567 .* rovelli c ( 1993c ) . a generally covariant quantum field theory and a prediction on quantum measurements of geometry , _ nuclear physics _ ,* b405 * , 797 .* rovelli c ( 1995 ) .analysis of the different meaning of the concept of time in different physical theories , _ il nuovo cimento _ , * 110b * , 81 .* rovelli c ( 1996 ) .relational quantum mechanics , _ international journal of theoretical physics _ , * 35 * , 1637 .* rovelli c ( 1997a ) .half way through the woods , in _ the cosmos of science _ , j earman and jd norton editors , ( university of pittsburgh press and universitts verlag konstanz ) . * rovelli , c. ( 1997b ) loop quantum gravity , living reviews in relativity ( refereed electronic journal ) , http://www.livingreviews.org/ articles / volume1/1998 - 1rovelli ; gr - qc/9709008 * rovelli c ( 1998 ). incerto tempore , incertisque loci : can we compute the exact time at which the quantum measurement happens ? , _ foundations of physics _, * 28 * , 10311043 , quant - ph/9802020 .* rovelli c ( 1999 ) .strings , loops and the others : a critical survey on the present approaches to quantum gravity , in _ gravitation and relativity : at the turn of the millennium _ , n dadhich j narlikar eds ( poona university press ) , gr - qc/9803024 . *rovelli c and smolin l ( 1988 ) . knot theory and quantum gravity , _ physical review letters _ , * 61 * , 1155 .* rovelli c and smolin l ( 1990 ) . loop space representation for quantum general relativity , _ nuclear physics _ , * b331 * , 80 .* rovelli c , smolin l ( 1995a ) .spin networks and quantum gravity , _ physical review _ , * d 53 * , 5743 . 
*rovelli c and smolin l ( 1995b ) .discreteness of area and volume in quantum gravity , _ nuclear physics _ * b442 * , 593 .erratum : _ nuclear physics _ * b 456 * , 734 . *smolin l ( 1997 ) .the future of spin networks , gr - qc/9702030 . * stachel j ( 1989 ) .einstein search for general covariance 1912 - 1915 , in _ einstein studies _ , d howard and j stachel eds , vol 1 , 63 - 100 ( birkhuser , boston ) . *stein h ( 1999 ) .physics and philosophy meet : the strange case of poincar . unpublished .* zeilinger a , in _ gravitation and relativity : at the turn of the millennium _ , n dadhich j narlikar eds ( poona university press ) .
i discuss the nature and origin of the problem of quantum gravity . i examine the knowledge that may guide us in addressing this problem , and the reliability of such knowledge . in particular , i discuss the subtle modification of the notions of space and time engendered by general relativity , and how these might merge into quantum theory . i also present some reflections on methodological questions , and on some general issues in philosophy of science which are raised by , or are relevant for , the research on quantum gravity . _ to appear in `` physics meets philosophy at the planck scale '' _ _ c callender n hugget eds , cambridge university press _
the european strategy forum on research infrastructures ( esfri ) , a strategic initiative to develop the scientific integration of europe , has identified four facilities whose science cases are so outstanding that they can be considered as the main ( ground - based ) priorities of the european astronomy and astro - particles communities .these are the square kilometre array ( ska ) , the cherenkov telescope array ( cta ) , km3net ( km neutrino telescope ) and the european extremely large telescope ( e - elt ) . to address the common challenges of these infrastructures through synergy , the asterics ( astronomy esfri and research infrastructure cluster ) project was proposed to the european commission ( ec ) and funded .its major objectives are to support and accelerate the implementation of the esfri telescopes , to enhance their performance beyond the current state - of - the - art , and to see them interoperate as an integrated , multi - wavelength and multi - messenger facility .an important focal point is the management , processing and scientific exploitation of the huge datasets the esfri facilities will generate .asterics will seek solutions to these problems outside of the traditional channels by directly engaging and collaborating with industry and specialised small and medium - sized enterprises ( smes ) .the various esfri pathfinders and precursors and other selected world - class projects ( including space - borne facilities ) will present the perfect proving ground for new methodologies and prototype systems .in addition , asterics will enable astronomers from across the member states to have broad access to the reduced data products of the esfri telescopes via a seamless interface to the virtual observatory framework .this is expected to massively increase the scientific impact of the telescopes , and greatly encourage use ( and re - use ) of the data in new and novel ways , typically not foreseen in the original proposals . by demonstrating cross - facility synchronicity , and by harmonising various policy aspects, asterics will realise a distributed and interoperable approach that ushers in a new multi - messenger era for astronomy . through an active dissemination programme , including direct engagement with all relevant stakeholders , and via the development of citizen scientist mass participation experiments , asterics has the ambition to be a flagship for the scientific , industrial and societal impact esfri projects can deliver .the work packages have been built around the astronomy related esfri projects with a sharp focus on advancing and contributing to their design , construction and implementation .futhermore , the work package activities and their deliverables have been selected to be of high impact and broadly relevant to as wide a range of the astronomy related esfri projects as possible , addressing common technical challenges and collective issues such as harmonisation , interoperability , exchange and cross - facility multi - wavelength / multi - messenger integration .* asterics management support team ( amst ) * + this work package will establish the asterics management support team ( amst ) , and will thus guarantee the smooth execution of all financial , administrative and reporting elements of the project. it will also permit the amst to exercise central control and oversight of the scientific and technical progress of the project , as measured by secured milestones and the successful receipt of deliverables . 
a high - level policy forum ( involving the esfri projects and other large astronomy research infrastructures )will also be established in order to coordinate and agree new models for joint time allocation , observing and data access / sharing , in addition to other more general policy matters of common interest .the culmination of asterics will be an integrating event to show - case the results of the project and their relevance to the esfri telescopes and all other relevant stakeholders .* dissemination , engagement and citizen science ( decs ) * + the objective of decs is to promote asterics and the esfri astronomy facilities it aims to serve . in particular, it aims to open - up the esfri facilities to all relevant stakeholders and the widest possible audience , including the public : i ) production of high quality branding and promotional outreach materials ; more ambitiously , decs also embraces the adoption of the principles of science 2.0 ; ii ) create web - based interfaces that will open - up the astronomy related esfri facilities to the general public via a harmonised suite of citizen science mass participation experiments ( mpes ) and online video material ; iii ) attract young people to science by networking the esfri facilities via citizen science initiatives , and through coordinating open educational resources .* observatory e - environments linked by common challenges ( obelics ) * + the aim of obelics is to enable interoperability and software re - use for the data generation , integration and analysis of the esfri and pathfinder facilities through the creation of an open innovation environment for establishing open standards and software libraries for multiwavelength / multi - messenger data .the specific objectives are : i ) train researchers and data scientists in the asterics - related projects to apply state - of - the - art parallel software programming techniques , to adopt big - data software frameworks , to benefit from new processor architectures and e - science infrastructures .this will create a community of experts that can contribute across facilities and domains ; ii ) maximise software re - use and co - development of technology for the robust and flexible handling of the huge data streams generated by the asterics - related facilities ( this involves the definition of open standards and design patterns , and the development of software libraries in an open innovation environment ) ; iii ) adapt and optimise extremely large database systems to fulfil the requirements of the asterics - related projects ( this requires the development of use cases , prototypes and benchmarks to demonstrate scalability and deployment on distributed non - homogeneous resources ) ; cooperation with the esfri pathfinders , computing centres , e - infrastructure providers and industry will be organised and managed to fulfil this objective ; iv ) study and demonstrate data integration across asterics - related projects using data mining tools and statistical analysis techniques on petascale data sets ( this will require adaptable and evolving workflow management systems , to allow deployment on existing and future e - science infrastructures ) .all tasks are built upon the state - of - the - art in ict , in cooperation with major european e - infrastructures and are conceived to minimise fragmentation .communications and links with other communities and e - science service providers are considered in order to contribute to the effectiveness of the proposed objectives . 
* data access , discovery and interoperability ( dadi ) * + the virtual observatory ( vo ) framework is a key element in successfully clustering the esfri projects . with the esfri facilities and their pathfinders included in the vo, astronomers will be able to discover , access , use and compare their data , combining it with data from other ground- and space - based observatories as well as theoretical model collections .the goal of dadi is to make the esfri and pathfinder project data available for discovery and usage in the international vo framework , and accessible with the set of vo - enabled common tools . more specifically : i ) train and support esfri projects staff in the usage and implementation of the vo framework and tools , and make them active participants in the development of the vo framework definition and updates , thus contributing to relevance and sustainability of the framework ; ii ) train and support the wider astronomical community in scientific use of the framework , in particular for pathfinder data , and gather their requirements and feedback ; iii ) adapt the vo framework and tools to the esfri projects needs , in continuous cooperation with the international virtual observatory alliance ( ivoa ) . * connecting locations of esfri observatories and partners in astronomy for timing and real - time alerts ( cleopatra ) * + the partners in asterics share an ambition to use modern communication methods , such as fast broadband connectivity , to improve the scientific capabilities of their research infrastructures .the research activities aim specifically at synergetic observing modes , and fast and reliable access to large data streams .these aspects are covered in cleopatra : i ) develop technology for the enabling of long - haul and many - element time and frequency distribution over fibre connections .this has the potential to increase the efficiency and affordability of all radio astronomy facilities ( ska , lofar , evn ) ; such developments are also highly relevant for astroparticle facilities ( cta , km3net ) and can enable novel realtime multi messenger observations ; ii ) develop methods for relaying alerts , which will signal transient event detections between the facilities and enable joint observing programmes ; the focus is both on interchange formats and on scientific strategies and methods for joint observing ; iii ) further development of existing data streaming software , building on previous e - vlbi projects , and providing tools for robust and efficient data dissemination for all facilities in the user domain , including eso facilities such as alma and the e - elt ; iv ) foster the development of advanced scheduling algorithms , using ai approaches for optimal usage of the esfri facilities , so to achieve a consistent set of enhancements of the facilities based on developments in connectivity and data transport .* interactions among work packages * + obelics and dadi have a strong and interrelated focus on delivering common solutions , standards and analysis to the management and exploitation of large volume and high velocity data streams .key goals include interoperability between facilities such as the ska , cta , euclid , ego , km3net etc .both obelics and dadi also entail significant training opportunities for external stakeholders , in order to ensure that esfri projects staff and users are fully engaged with the astericsprogramme.there is a direct connection between these activities , and a specific task in obelics is dedicated to the 
interface and coordination with dadi .quite naturally , as all work packages need a dissemination activity to be carried out , and need to be managed , all of them will have a direct connection to decs and amst .using an inclusive , engaging and open approach in the preparation of the project work plan , the asterics partners have arrived at a concept that is relevant to the implementation of the astronomy related esfri facilities , and that is supported and endorsed by the main players of the multi - disciplinary astronomy and astroparticle physics communities .asterics is an ambitious project but the goals are clear and attainable .the ultimate measure of its success is how often the project results and products are incorporated into the esfri observatories . on timescales similar to the duration of the project ( 4 years ), success can also be measured via implementation on the esfri pathfinder telescopes the latter provide existing platforms on which asterics technologies can be tested and proven .the incorporation of other related projects ( e.g. euclid , ego ) provides further testing opportunities , and additionally addresses the demand to realise the potential offered by combining astrophysics and cosmology together .asterics is a project funded by the european commission under the horizon2020 programme ( i d 653477 ) .all the partner institutions and individual participants are gratefully acknowledged for their work in the project .
the large infrastructure projects for the next decade will allow a new quantum leap in terms of new possible science . esfri , the european strategy forum on research infrastructures , a strategic initiative to develop the scientific integration of europe , has identified four facilities ( ska , cta , km3net and e - elt ) deserving priority in support . the asterics project aims to address the cross - cutting synergies and common challenges shared by the various astronomy esfri and other world - class facilities . the project ( 22 partners across europe ) is funded by the eu horizon 2020 programme with 15 meuro in 4 years . it brings together for the first time the astronomy , astrophysics and particle astrophysics communities , in addition to other related research infrastructures .
a multi - element radio telescope is a spatially spread array of antennas ( or antenna elements ) whose noise - like responses are required to be time aligned , dynamically calibrated and combined or correlated in real time .the resulting estimates of spatio - temporal and spectral correlations between responses of pairs of elements can be used to recover the desired information on the strength and distribution of radio emission within the common field of view using standard post - processing software .thus the signal of interest is statistical in nature , resulting from a minute level of mutual coherence arising from weak celestial signals buried in noise . because of the large number of elements and the high sampling rates necessary for bandwidths exceeding several tens of mhz in recent arrays , real - time statistical estimation is essential to achieve practical data rates and volumes for recording and post processing .for instance , the recently initiated upgrade for the ooty radio telescope ( ort ) aims at treating the 30 m x 506 m parabolic cylindrical antenna of the ort as 264 independent sets of elements , each of which is required to be sampled at 80 ms / s , leading to a data generation rate of 21 gigasamples per second , exceeding 80 terabytes per hour . till recently, computing requirements of this scale forced a choice of custom hardware to be the most favored platform .however , rapid developments in the fields of digital technology , communication and computing have led to a changing trend towards alternative approaches for upcoming telescopes .such approaches range between a customized and reusable hardware library of components on an fpga platform , e.g. , the casper project , and a software - only approach , e.g. , at the gmrt .the gmrt case is an example of a recent transition from custom hardware to a software - only approach . in this paper , we have taken a middle path , where the real - time processing of a multi - element radio telescope is abstracted as a multi - sensor , data fusion problem and addressed in a new platform called the networked signal processing system ( nsps ) in terms of packetized , heterogeneous , distributed and real - time signal processing .it is a co - operative of two kinds of networks , among which one is a custom peer - to - peer network while the other is a part of a commodity processor network .the custom network includes subsystems related to the digitization and all intermediate routing and preprocessing blocks as the network nodes , in which the emphasis is on traffic shaping , on - the - fly processing and load balancing for effective distributed computing .however , all customized protocols are absorbed while crossing over the last mile to interface to the commodity processor network using a common industry - standard network protocol .the actual estimation of the correlations is carried out by nodes on the commodity network .in contrast to the traditional use of a packetized network as merely a data transport fabric between processing entities , we have the notion of `` a logical packet '' based on an application specific `` transaction unit '' , which itself may be composed of a large number of physical packets whose sizes are network - specific this unit refers to a time stretch long enough to facilitate a dynamical flagging mechanism or / and to relax the constraint on timing , synchronization and scheduling of workloads on the commodity operating systems on which processing is expected to be carried out . 
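to give a feel for the numbers quoted above, the aggregate front-end rate of the upgraded ort can be written out explicitly. the short python sketch below is only a back-of-the-envelope calculation; the 10-bit sample width and the absence of framing overhead are assumptions made for illustration, not parameters taken from the actual design.

# back-of-the-envelope data rate for the upgraded ort front end
# assumed: 264 elements, 80 ms/s each, 10-bit samples, no framing overhead

n_elements = 264            # independent antenna elements
sample_rate = 80e6          # samples per second per element
bits_per_sample = 10        # assumed adc resolution before any re-quantization

samples_per_s = n_elements * sample_rate                      # ~2.1e10 samples/s
bytes_per_hour = samples_per_s * bits_per_sample / 8 * 3600

print(f"{samples_per_s / 1e9:.1f} gigasamples per second")    # ~21.1
print(f"{bytes_per_hour / 1e12:.0f} terabytes per hour")      # ~95, i.e. exceeding 80

rates of this order are what make real-time statistical estimation unavoidable, and they also motivate the two requirements attached to the transaction unit above, namely the dynamical flagging mechanism and the relaxed timing and scheduling constraints.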
both these requirements necessitate lower level nsps nodes to be equipped with large memories , which are also used to route traffic selectively ( traffic shaping ) to higher levels in the nsps .two independent considerations have led us towards stretching the transaction unit to a good fraction of a second .one of these , as explained above , is to provide latency tolerance in order to simplify software on standard computing platforms , while the other arises from a desire to make explicit provision for preprocessing using concepts related to modern information theory . from this point of view and to attract the attention of experts from outside the field of radio astronomy , we have given a somewhat unconventional description of the signal path and analysis of processing requirements in sections [ sec : multielementradiotelescope ] and [ sec : realtimeprocessing ] in an attempt to illustrate the connection of the problem to information theory .another concept in literature which we find useful in the present context is that of `` multi - sensor data fusion '' , defined by , as a system model where `` spatially and temporally indexed data provided by different sources are combined ( fused ) in order to improve the processing and interpretation of these data '' .this model , widely used in applications like military target tracking , weather forecasting etc , has many features relevant for describing the control flow and pre - processing required in a multi - element radio telescope before correlation . in a sense ,the nsps is an adaptation of data fusion architecture to our domain .our analysis of the nature of the real - time problem results in a natural partitioning into two broad categories as elaborated in section [ sec : realtimeprocessing ] .this is our primary motivation for defining the architecture , described as a fusion tree in section [ sec : fusiontree ] .an illustration of the feasibility of its implementation is presented in section [ sec : an - implementation - example : ] by considering the case of the ort upgrade .we present here an abstract picture of a multi - element radio telescope , in which the primary beams of individual elements are viewed as virtual communication channels carrying different combinations of radiation from a set of independent celestial `` radio emitters '' .these are located in different directions within the common celestial region intercepted by the element primary beams .each individual radio emitter from this set is a source of stationary ( within the observing time ) random process .since signals received in any finite bandwidth can be spectrally decomposed , each primary beam can be considered equivalent to a set of independent communication channels of identical bandwidths corresponding to the spectral resolution .each such channel is characterized by a correlation timescale equal to reciprocal of its bandwidth .thus , spectral decomposition can be used to enable the propagation time differences for noise from different radio emitters to be within their correlation timescale .this implies that the corresponding channels of different primary beams are carriers of noise arising from different superpositions of the same set of random processes , but with different weights .information about the strength and distribution of radio emitters in the field of view can be considered to be coded in the correlation between corresponding communication channels of different elements .hence , we consider the real - time spectral correlation to be a 
fusion process for compressing the information conveyed by the responses of different elements without affecting decoding to be carried out in post - processing , say in the form of an image of a celestial region. however , in practice , the propagation medium introduces a variety of correlated and uncorrelated noise into these virtual channels which can erase or distort the information related to celestial emitters .this is represented schematically in fig .[ fig : antenna - beams - viewed ] . often , some of these distortions are characterized by a combination of dispersive and non - dispersive components which are _ localized _ in time and/or frequency , unlike the celestial signal which is more like a random noise . such a localization can be exploited by a pre - processing algorithm to enable suitable recognition and characterization of deviant ( non - random ) data and to tag them suitably .such data can then be segregated from those passed on to an irreversible fusion operation , e.g. , to minimize the biases in the correlations . on the other hand , in situations like observing fast transients with low duty cycle , orwhen one wants to find the direction in which such a non - random signal is present , the segregated data can be passed to an independent processing or recording stage for later use .we refer the interested reader for an analogous situation in a re - visit of maxwell s demon by to connect algorithmic randomness to physical entropy . for the present purpose, it suffices to note that such a segregation results effectively from an algorithmic feedback , requiring multiple passes through the data before fusion .the output of this feedback via multiple passes on stored data is also analogous to the concept of a relay network with `` side '' or meta information , described in , where this side information is used by other entities for selective processing of the data reaching them . in our abstraction , a provision for such an algorithmic feedback should be an essential part of the signal processing architecture , and should be present in the path between the digitizer and the fusion operator .this is only possible if the real - time system is equipped with adequate memory and preprocessing for characterization of data before they are sent to a correlator or beam forming system . _ _ by excluding such a provision , digital receivers in existing large radio telescopes are a potential source of irreversible biases in the recorded correlations , and suffer from inefficiency for requirements like detection of short term transients .antenna beams viewed as noisy virtual communication channels connecting a sky region and an antenna element .without getting into specific details of the real - time processing required for a large multi - element radio telescope , we abstract them into a combination of three broad categories : * _ embarrassingly parallel processing _ , e.g. , spectral decomposition of the incoming time series ( say via fft or polyphase filter banks ) and the recognition and management of path - induced distortions / interference on timescales significantly smaller than the integration time chosen for the correlations . * _ pipelined processing _ , e.g. , multi - beam formation ( say _ k _ beams ) by phasing n elements requires pipelined operations for each spectral band . 
*_ data fusion _ operations , in which the data originating from different antenna elements are hierarchically fused ( combined via routing and application - dependent processing ) along a chosen set of dimensions which include time , frequency and spatial spread .the most important fusion operation for an antenna array is the real - time correlation of signals from every possible pair of antenna elements in different frequency sub - bands . apart from being an process ( for an n element array ) from a computational point of view, this brings in the additional complication of routing large volumes of distributed data to appropriate data processing elements to provide a complete - graph connectivity between the sources of data and processing elements .significantly , cross - correlation between all possible pairs of signals is also essential for using self - calibration techniques to enable dynamic calibration of instrumental and atmospheric contributions to the data corresponding to different elements , before they are subjected to an irreversible fusing operation in a phased array .this makes a spectral correlator an implicit requirement , even for a phased array , for minimizing the irreversible loss of information resulting from distortions induced in the the path or the local environment . in our approach ,we bifurcate the requirements of a real - time system into _ commodity _ and _ custom _ segments . in the current state of technology ,the commodity segment can be fulfilled by subsystems available in the market while the custom segment can generally be realized on the basis of customized hardware and/or firmware layers based on cots technology .such a bifurcation is explained below for different functional categories of the nsps : * _ cots segment _ : computationally complex and/or latency tolerant processing , typically realized on a programmable platform ranging from workstations to a high performance cluster . * _ custom segment _ : latency critical , logic intensive and repetitive pattern of deterministic processing , well suited for a configurable platform , typically fpga - based . for efficient computation, we pay special attention to reducing coupling between data in order to target explicit parallelism at all levels of processing .multiple parallel circuits are implemented in the fpga - based custom segment , while the current trend of multicore processors with access to shared memory is exploited in high level software .further , the desired high signal bandwidth and large number of antenna elements make the processing complex and compute intensive .this aspect , and the advantage of quickly implementing exploratory algorithms , make a commodity compute cluster an attractive choice for the central computing .this is recognized by explicitly including the cluster in the cots segment mentioned above . 
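the pairwise spectral correlation described above can be made concrete with a minimal "fx-style" sketch: each element's time series is first channelized with an fft, after which the cross-spectra of all element pairs are accumulated. the fragment below is a schematic illustration in python/numpy and not the actual ort pipeline; the block length, channel count and the absence of calibration, flagging or fringe rotation are simplifying assumptions.

import numpy as np

def correlate_block(x, n_chan=64):
    """cross-correlate one time block from all elements.

    x : array of shape (n_elem, n_samples), one row per element.
    returns accumulated cross-spectra of shape (n_elem, n_elem, n_chan);
    only n_elem*(n_elem+1)/2 of these spectra are independent, but the
    computation itself scales as the square of the element count.
    """
    n_elem, n_samp = x.shape
    n_spectra = n_samp // n_chan
    # "f" stage: channelize each element into n_chan frequency bins
    segments = x[:, :n_spectra * n_chan].reshape(n_elem, n_spectra, n_chan)
    spectra = np.fft.fft(segments, axis=2)
    # "x" stage: multiply-accumulate every pair (i, j) over the block
    return np.einsum('itc,jtc->ijc', spectra, np.conj(spectra)) / n_spectra

rng = np.random.default_rng(0)
block = rng.standard_normal((8, 4096))       # toy block: 8 elements of white noise
print(correlate_block(block).shape)          # (8, 8, 64)

even in this toy form, the quadratic growth of the "x" stage with the number of elements, and the need to make every element's spectra visible to the correlating node, are apparent; these are precisely the routing and load-balancing pressures that the nsps architecture is designed to relieve.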
* _ cots segment _ :commercial switches with all - to - all connectivity are used for data routing to commodity processors and broadcasting in the last mile , as well as for load balancing .the routing is controlled by manipulating the destination addresses on data packets .connection - less protocols like user datagram protocol ( udp ) are adequate for high - speed , streaming applications where a small fraction of lost packets does not affect performance adversely .packet collisions are minimized in full duplex , point - to - point connections between network partners , and also because data flow is extremely asymmetric .further , the criteria for load partitioning discussed in section [ sub : load - partitioning - and ] very often result in under - utilization of link speeds to match them to sustainable processing bandwidths . * _ custom segment _ : customized switches with static routes for traffic shaping are relevant when only a subset of the network data needs to flow to a subset of the nodes based on certain conditions . they are generally implemented in configurable logic . since many fpgas support gigabit ethernet mac as a hard ( or publicly accessible soft ) ip , this feature is useful while introducing a bridge between the peer - to - peer network and the commodity network in the last stage of the custom segment .in addition , some or all the major subsystems may have management support from an embedded or explicit on - board processor . * _ cots segment _ : a commodity network compatible with a typical high performance compute cluster , which includes gigabit ethernet as a _ de facto _ standard for interfacing with external systems . * _ custom segment _ : a peer - to - peer network which may include significant on - the - fly application specific operations , suitable for implementing on a standard fpga platform . a data pooler node of the nsps shown carrying out data fusion and traffic shaping ._ ] an implementation of the actual processing on a dedicated set of identical hardware circuits or parallel processors can take advantage of an intelligent network capable of elementary on - the - fly operations to achieve a balance on the dataflow and computing requirements .this is depicted in fig .[ fig : a - data - pooler , ] , where a `` data pooler '' node , as defined in section [ sec : fusiontree ] , is shown carrying out real - time fusion of the streaming data by using the side information made available by external sources . 
at the same time , the pooler is seen generating side information out of the fused data set by way of multiple processing passes on the stored data .the pooler can then segregate the data , and route the segregated components to different data sinks using the routing information available with the peer - to - peer link nodes .we use `` traffic shaping '' here in a more general sense than in internet traffic shaping ( which delays lower priority packets in favour of better network performance of higher priority packets ) to refer to both segregation of the incoming stream , as well as the specific routing of segregated data to different sinks .further , the efficiency of hierarchical computation can be significantly improved by accommodating some degree of pre - processing and/or partitioning of data in each level of the custom segment to facilitate the next level .* _ cots segment : _ the overall task supervision , command and monitor , user interface and the dynamic system monitoring are tasks whose complexity is best left to the commodity segment to handle , where a variety of tools ranging from mpi , compiler resources and advanced operating systems like linux or vxworks are available . * _ custom segment _ : event driven scheduling with periodic or quasi - periodic events generated conveniently in a low latency logic implementation suitable for an fpga .the interval between the events is stretched to handle an application - specific transaction unit to the extent permissible within the available resources .in this section , we present the nsps architecture as a _ multi - sensor data fusion tree , _ in which both conventional and `` virtual '' sensors play a role . while entities like antenna elements , round - trip phase / delay monitors , noise calibration etc can be treated as `` conventional '' sensors , `` virtual '' sensors result from processing blocks at various levels .for instance , pre - processing can result in a flagging mechanism to improve the reliability of fusion systems like correlators , in which the original data are erased while compressing their information into a statistical estimate to be passed to the next level .the signal processing system proposed in this paper is a set of spatially separated nodes of varying communication and processing capabilities , which are interconnected by a customized high speed tree - like packet switched network interfaced to master commodity nodes .this is equivalent to a _ data fusion tree _, with the nodes of the tree performing operations like traffic shaping , packet routing , or pre - processing before data fusion .accordingly , we have described the overall architecture of nsps in the form of a _ fusion tree _ ,schematically represented in fig .[ fig : a - conceptual - layout ] .however , each level provides a different mixture of functional capabilities .this has resulted from our recognition of the following features of the functional requirements : * distributed computing across many nodes of different processing capabilities , with local parameters guided by a processor capable of seeing a subset of all data . * data routing nodes with configurable routes for routing preprocessed data subsets . * nodes with large memory buffers to enable multiple passes on data , also for enabling memory based data transposition . 
* a high speed network interconnecting all nodes , with ability of cots nodes to tap into the network .* interface to a master commodity node through a standard interface like gigabit ethernet ( gige ) .thus , from a functional point of view , we classify the nodes in nsps into the following three categories : 1 .data poolers / fusers : nodes with sufficient on - board memory to allow packetizing and multiple processing passes on incoming data .these break the need for many - to - many connectivity in the correlation process by transposing data via memory based switches .the transposition , based on different parameters , ultimately serves a packet of data suitable for processing by a single element of a distributed system in an embarrassingly parallel manner .data routers : these elements are endowed with high speed links to either peers or more powerful processors to whom incoming packets are routed based on statically configured routes .these form an integral part of our architecture , helping in the load balancing by directing appropriate subsets of preprocessed data to different processing elements .we use commodity switches for routing data to multiple external sinks by forming many - to - many connections between data sources and data processors .data processors : these elements have high compute density and can be used for preprocessing , as well as for data rate reduction .we classify processors into two groups as mentioned earlier : 1 .those catering to computationally complex and/or latency tolerant processing , an example of which is the estimation of system calibration parameters based on a long ( few minutes ) history of data , and its dynamic updation .this processing is generally carried out by sending a subset of the data to a central processor .2 . latency critical , logic intensive and repetitive pattern of deterministic processing .an example of this class is the real - time block - level data encoding process requiring the estimation of block level statistics .this can be carried out by multiple passes on small segments of data .thus , we visualize the nsps as a _ restricted _ distributed system , depending primarily on stripped down lightweight networking protocols and the static routes set up during system configuration .data routers , both customized and commodity , play an important role in reorganizing data to be computationally palatable to processing nodes in this scheme .a master node is in charge of command , configuration and control , and is almost always a commodity node like a pc .it may be noted that custom processing is spread across the nsps tree by explicitly advocating local intelligence in every node . as an illustration of the inherent facilitation of distributed processing in the nsps ,some important aspects of the interconnection mechanism are elaborated in the following subsections .interconnects in the data fusion tree consist of the following four essential graphs , which can either be logical or explicitly physical manifestations . 1 . 
the _ data _ network is a simplex , high bandwidth net connecting the leaf nodes of the fusion tree to a central processor , possibly passing through several collation levels of the signal processing tree .the _ control and monitor _ network is a full duplex , low bandwidth network and interconnects all nodes hierarchically through management processors to a central monitoring station .the _ calib _ network is a full duplex , low bandwidth network .this allows calibration information to reach the data fusing nodes before the sequence of irreversible fusion operations take place .the _ clock _ network is a full duplex network , providing the distribution of clocking signals to the various nodes , as well as allowing a round trip clock phase measurement . in actual implementations , it may be simpler to realize these in terms of a set of simplex networks , among which clock , control , monitor and calib are directed towards the leaf nodes while the data and status ( including response to monitor queries ) belong to simplex networks which flow from different levels of the fusion tree into the master node .our network implementation can be bifurcated into the following sets : 1 .customized high speed serial peer - to - peer links terminating into peer - to - peer switches which implement a subset of the complete graph connectivity .commodity high speed serial links terminating into commodity networking equipment , with ability to interface to standard processing nodes . in typical implementations, we expect custom links to be on a passive optical network ( pon ) based communication stack .gigabit ethernet is the preferred choice for the backbone of commodity segment to connect to the custom network .we exploit the high speed serializing ability of modern fpgas to dispatch data on high bandwidth copper links and use pon components to meet the spatial spread required to reach remote nodes over fibre links . as long as the bandwidth requirements are met, no specific preference is implied for a choice among different networking technologies .thus , some implementations may utilize the embedded multi - gigabit serializers in fpgas for peer - to - peer links while others may refer the cross dispersion of data to a compute cluster s high bandwidth infiniband network , or use commodity gigabit ethernet .this leads to the need for a flexible bridging mechanism which can be exploited by implementations .for instance , data can be conveniently transmitted over a peer - to - peer or a commodity link due to the maintained commonality of their interfaces . the high speed network internal to the nsps tree has features which are restricted and stripped down versions of those found in commodity networks .this is an optimization due to the highly controlled network which exists within the telescope receiver environment .our network differs from regular networks in several aspects : * a controller node is assigned for every sub - tree at a given level . this entity forwards command and status information between controller ( level-0 ) and upstream levels .thus , it is not a typical peer - to - peer network which does not have such a hierarchical control structure .broadcast and multicast domains available in commodity networks are used to implement control and monitor mechanisms , while explicit `` pull '' mechanisms are implemented on the custom network . 
here , the `` pull '' refers to the explicit request for data made by a downstream node to an upstream node .the advantage of a `` pull '' mechanism is that data is made available to a downstream node only when it is ready to handle the data , as inferred from the downstream node s request .also , if a downstream node is busy , then the upstream node loses data in integral packets , thus maintaining timing information .* the network configuration and routes are fixed statically in an application dependent manner at configure time .there is no node discovery , and data routing does not have an explicit mechanism for handling node failure .since all data flows towards a logical sink , there is also no destination address , although source addresses can be preserved .this simplifies network management to a large extent at the cost of non - redundancy of nsps entities . *communication protocols : all elements in our network , including master nodes , generate similar kinds of packets which contain an 8 byte header with fixed fields .these are typically indicative of the nature of accompanying data , as well as its timestamp , source and other meta information .system state can also be propagated through these packets , or by forming special status packets .the restricted meta information processing makes it simpler to realize packet formation in hardware with simple state machines . * high speedserial interconnects : all entities in the internal network communicate via high speed serial interconnects with a clock recoverable from the encoded data .this approach allows us to transmit data long - haul over fibre , or short - haul over copper without any changes .in particular , we discourage bus - based interconnection between physically separated nodes .once preprocessed data is ready within the nsps , it needs to be transferred onto a commodity network for reaching commodity nodes for post processing or archiving .local intelligence in the peer network can be used to partition the data such that the interfaces to commodity nodes use link speeds commensurate with their processing ability . for simplicity ,we have used gigabit ethernet as a typical standard external interface .this is a popular high speed serial interconnect with a vast amount of infrastructure available in the commodity market .it also allows transmission over copper ( utp ) to interface directly with commodity servers , or fibre ( via conversion to 1000basex ) for long - haul transmission .commodity servers of moderate ability can then be used as data sinks with minimal customization .this is also motivated by the fact that many modern fpgas have embedded high speed serial interconnects on chip , with complete gigabit ethernet support in the form of on - chip gigabit macs or as publicly available libraries . 
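the "pull" flow control described above, in which a busy downstream node loses data only in whole packets, can be mimicked with a small bounded-buffer model. the following fragment is a schematic of the protocol logic only; the buffer depth, packet sizes and drop policy are illustrative assumptions and do not describe any actual nsps firmware.

from collections import deque

class UpstreamNode:
    """bounded packet buffer; a slow consumer loses whole packets, never fragments."""

    def __init__(self, depth=4):
        self.buffer = deque()
        self.depth = depth
        self.dropped = 0

    def push(self, packet):
        if len(self.buffer) == self.depth:
            self.buffer.popleft()     # discard the oldest packet in one piece
            self.dropped += 1
        self.buffer.append(packet)

    def pull(self):
        """serve one packet only when the downstream node explicitly asks for it."""
        return self.buffer.popleft() if self.buffer else None

node = UpstreamNode(depth=4)
for t in range(12):                                   # packets arrive every tick
    node.push({"timestamp": t, "payload": bytes(8192)})
    if t % 3 == 0:                                    # downstream node only ready sometimes
        node.pull()
print("packets dropped:", node.dropped)               # surviving packets keep their timestamps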
for the last mile connectivity ,udp can be used since it is a simple connectionless protocol with minimal overheads on top of ip .it is also possible to fill the relevant udp fields during system configuration , and hold them static for the duration of an observation .each of the internal network types carrying data ( _ data , calib _ and _ control _ ) can then be easily made available over a different udp port as part of the design .this allows an application program to associate independent threads to service these streams .the distributed nature of our architecture requires status monitoring of all nodes and links , which can be handled by the individual sub - tree roots and communicated to the master controller .this is implemented by a status `` pull '' scheme by which controlling entities periodically query the status of all nodes in the nsps tree rooted with them by way of a special aya ( `` areyoualive '' ) packet .the nodes respond with an iaa ( `` iamalive '' ) packet containing selected status information .similarly , control packets contain command and configuration data .each command packet typically results in a status reply from the targeted entity , which confirms the receipt of the command , and regenerates a control packet for the entities controlled by it .the master node can use this in an appropriately scheduled housekeeping operation to discover failure of nodes .this network is meant to carry data from the nsps tree which is relevant to forming calibration solutions for the array .the calibration mechanism is to be applied differently for the two main modes of observation with the nsps : * in the interferometric mode , the correlator can work independently of the actual gains and phases of the sensor elements , since the observations include calibration scans at reasonable time intervals .off - line processing can infer intermediate variations by supplementing interpolation between calibration scans with dynamic calibration schemes like self - calibration based on the partial , low bandwidth dataset available over the calibration network .* on the other hand , real - time beam formation includes an irreversible fusing operation which requires dynamic calibration to be part of data fusion .fortunately , it is often possible to use a relatively small subset of the data ( non - contiguous timeslices or a chosen frequency sub - band ) for this purpose to enable short term predictions of gain variations . these can be fed to the fusing nodes well in time before irreversible fusing operations are performed .since the complexity of the actual algorithm used for calibration makes it better suited for a general purpose computer , the _ calibration network _ can be used to route the relevant subset of data to a commodity switch and deliver the calibration parameters to the appropriate nsps tree level . 
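as a concrete illustration of this last-mile convention, the fragment below opens one udp socket per internal stream type and services each with its own thread, as suggested above. the port numbers are placeholders invented for the example, and the parsing of the custom header is deliberately left as a stub; neither is taken from the real ort receiver.

import socket
import threading

# placeholder port assignments, one per internal stream type (not the real values)
PORTS = {"data": 50000, "calib": 50001, "control": 50002}

def serve(stream, port):
    """receive udp packets for one stream and hand them to its processing chain."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, _addr = sock.recvfrom(9000)   # jumbo-frame sized receive buffer
        # parse the custom 8-byte-aligned header and dispatch the payload here;
        # packets lost upstream are simply never seen, which is acceptable for streaming data
        _ = (stream, packet)

threads = [threading.Thread(target=serve, args=item, daemon=True)
           for item in PORTS.items()]
for t in threads:
    t.start()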
in the spatially distributed , direct rf sampling nsps architecture, clocks passed to samplers have very stringent signal quality constraints in terms of net jitter and stability .the alignment of the multiple data sources before fusion requires high relative stability of the sampling clocks with random jitters much smaller than the reciprocal of the highest frequency in the sampled signal .clock distribution should also include a mechanism for ensuring the traceability of timekeeping at all digitizing blocks to a centrally maintained time standard to a very high accuracy .the implementation can benefit from commercial clock distributors which have embedded phase - lock loop clock synthesizer with on - chip voltage controlled oscillator ( vco ) and a per port delay tuning for the distributed clocks .all data flowing in the nsps is packetized with a custom , low overhead header .all subsystems accept and generate data in a packetized fashion .this reflects the inherent asynchronous nature of our system .packets traversing our platform are atomic and capable of independent existence .the packetizing of data means that data loss due to network congestion or buffer over - runs is never arbitrary , but always in units of packets . at any instant ,our network can have different kinds of packets traversing it , corresponding to different stages in the processing .the basic unit of packet size is maintained as 8 bytes , which is a natural unit or sub - unit for different memory and processing hierarchies .adequate padding is used if necessary to maintain this condition .the header is mandated to have a few fixed fields which are common in size and layout across packet types , allowing processing entities to examine packets which can be processed by them , while discarding the others . in a broadcast network, this approach can waste bandwidth when packets are discarded , but the wastage can be minimised by setting up static routes between partner nodes .this is possible in both the custom and commodity peer - to - peer link nodes .the command network , on the other hand , is a broadcast network , with nodes passing on commands not addressed to them to all other nodes downstream of themselves .data sources can include packet specific extensions to the packet headers generated by them .the following fields are suggested as a mandatory part of the packet header : * source identifier : at every level of the tree , nodes are endowed with a unique identifier which supplants the existing upstream source i d , if any processing is carried out on the packet . *datatype : this field allows processing entities to recognize which packets are palatable to them and to reject others .* data pixel descriptor : this field lays out the size and description of the smallest unit of data transfer to be one of an allowed set , which is implementation dependent .* streams : this field records the number of independent signal sources present in each packet .* packet size : the size of a packet is expressed in units of words as specified by the datatype field . *timestamp : this field is populated as early as possible in the data generation path and maintained across data processing .this field is generally populated by a timestamp counter running on either a reference clock or on the sampling clock itself and is traceable to the centrally maintained time standard .this specification is efficient for real - time streaming data description with minimum overhead . 
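one possible encoding of the mandatory header fields listed above is sketched below. only the 8-byte granularity and the field names come from the specification; the individual field widths chosen here are assumptions made purely for illustration.

import struct
import time

# one possible layout of the mandatory header fields, padded to a whole number
# of 8-byte words; the field widths below are illustrative assumptions, only
# the 8-byte granularity is taken from the text
HEADER_FMT = "!HBBHHQ"     # source id, datatype, pixel descriptor, streams,
                           # packet size (in words), 64-bit timestamp
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 16 bytes = two 8-byte words

def make_header(src_id, datatype, pixel_desc, streams, size_words, timestamp):
    return struct.pack(HEADER_FMT, src_id, datatype, pixel_desc,
                       streams, size_words, timestamp)

hdr = make_header(src_id=0x0102, datatype=3, pixel_desc=1, streams=12,
                  size_words=1024, timestamp=int(time.time() * 1e6))
assert len(hdr) == HEADER_LEN and HEADER_LEN % 8 == 0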
for archival of processed data , a standard format which allows multiple binary streams to maintain their identity , like the fits or vlbi data interchange format ( as proposed by the vdif task force ( 2009 ) )can be used .no subsystem in our scheme is source synchronous , be it at the hardware or the software level .all subsystems have enough memory for a store - and - forward of several packets .this allows the processing to happen at the packet level , on a faster clock than the sampling clock of front end adcs .it also eases the timing requirements of designs implemented in fpgas and makes them more tolerant of clocking errors .sequencers play an important role in our architecture , generating events on which the processing progresses .the sequencers generate necessary globally aligned events to which any action taken on the basis of commands from commodity network will get aligned .a simple example is an implementation where all processes in the peer network operate at a block level , with a periodic event signifying the need for scheduling a new process as a result of the arrival of a new block of data .it is important to match the communication bandwidth to processing abilities at every level in the signal processing tree .more specifically , event markers generated by the sequencers should facilitate a partitioning of processing in each level into abstract transactions , where each transaction deals with the entire data collected over a convenient timeslice * and * the relevant data are locally available _ on demand_. in particular , it is desirable that processing at the central commodity segment is facilitated at a cadence suited for general purpose operating system scheduling to achieve latency tolerance .for instance , to be commensurate with a housekeeping tick of 10 milliseconds in typical linux configurations , the transaction timeslices should be several times longer . among the available axes in processing space along which the data can be partitioned and distributed to parallel processors ,the time axis is often the most convenient for slicing , as individual timeslices can be considered independent . for a large network ,we realize this from a hierarchical set of data poolers _ _ populating different levels of the processing tree , which can collate data from different sources and partition them along the time dimension at each level .in this section , we provide an example of implementation of nsps by giving an outline of the system being planned for modernizing the ort .the ort is a 506 m x 30 m equatorially mounted cylindrical telescope with an equispaced linear array of 1056 dipoles along the focal line .each dipole has a tuned low noise amplifier with about 40 mhz bandwidth centered at 327mhz , although the existing analog phasing network restricts the bandwidth to about 10 mhz .a collaborative program for upgrading the ort has been undertaken jointly by the raman research institute ( rri ) and the national centre for radio astrophysics ( ncra ) , which operates the ort . in this program , the feed array is logically divided into 264 identical segments , where each segment represents an independent antenna element of size 1.93 m x 30 m . 
the aim of this upgrade is to reconfigure the ort into a programmable 264-element array .when completed , the reconfigured ort will have an instantaneous field of view of , bandwidth of and will be equipped with an nsps - based digital receiver .currently , prototypes have been tested for a large fraction of the custom segment of the nsps and the analog signal conditioning subsystems . the final production and integrationis expected to be completed in 2011 .the digitizers are organized in 22 digitization blocks located below the reflector at a spacing of about 23 m , where each digitization block includes a 12-channel digitizer capable of operating at 100 ms / s .all the 22 digitization blocks are connected in a star topology with a central system using a peer - to - peer network on optical fibres with multiple links operating at speeds of 2.5 gigabits / sec from each block .proposed nsps implementation for configuring ort as a 264 element programmable telescope ._ ] the proposed system consists of four hierarchical levels as illustrated in fig .[ fig : proposed - nsps - implementation ] , the high speed peer network uses the light weight aurora protocol simplex links for data uplink , with last mile via gige .currently , bandwidths of upto 100mbps per gige link into level-0 memory have been sustained .* _ level 3 : _ this level is composed of the distributed digitization infrastructure and is installed at the antenna base .the prime components of this level are the digitizers , the sampling clock derivation and conditioning circuitry , the first level data organizer and peer - to - peer link handler .all these components required for handling 12 sensor elements from a 23 m section of the ort are implemented as a single board .* * the adcs ( dual channel ad9600 ) are capable of a 100ms / s sampling for signals with frequencies upto .the dynamic range at 10 bit resolution allows us to implement an agc in software .more importantly , the implementation can benefit from the on - chip sampling clock conditioner , divider and duty cycle stabilizer .for instance it is possible to provide a sine - wave with frequency 2 - 8 times the sampling clock , and use the on - chip features to convert to square - wave , enable duty - cycle stabilizer and divide by suitable integer to get the sampling clock , thus reducing the overall sampling clock jitter and hence the phase noise in the sampled data .this feature is useful in direct ( harmonic ) sampling of the incoming rf since the nyquist sampling interval in a band - limited rf is decided only by the bandwidth while jitter tolerance depends on the highest frequency content . ** this level has an embedded clock synthesizer and distributor for the on - board 12-channel adc based on a reference distributed on fibre by the central high stability clock distributor .it is received using a digital fibre optic receiver .the embedded clock distributor is based on a clock buffer and distributor ( lmk03020 ) which has an on - chip vco and per - port delay tuning . * * the first data processing block is implemented in a xilinx spartan6 ( lx45 t ) fpga to perform a conversion of the adc 10 bit resolution data to via configurable , table - driven logic , where the look - up - table ( lut ) is dynamically updated to accommodate innovative schemes of compression and segregation . 
for instance , let us assume that a choice among a pre - determined set of luts is best suited for coding / compressing the data in a set of physical packets associated with a transaction unit . here, each table implements a different encoding of the input word to an output with lesser number of bits per word .further , we assume that every word of each physical packet is encoded by a choice between two luts out of the set , to represent normal and segregated(flagged ) data . for decompression by downstream nodes ,the tags of these two luts can be accommodated in the packet header , while a one - bit selection between the two can be associated with each data word - thus achieving the dual purpose of flagging and scaling at the word level .such a scheme can accommodate a wide range of scale factors and hence a large dynamic range within a logical packet .suitable thresholds for packet - level choice of luts can be generated on the basis of integrated power over a reasonable time stretch as part of the pre - processing . * * the data router node buffers data from all 12 sensors in internal memory and reorganizes them to form packets containing identical time - stretches from all antenna elements .the peer - to - peer link out of each 23 m section which connects level-3 to level-2 is implemented using the 4 available rocketio multi - gigabit onboard serializers on the spartan6 .our choice of sfp for the current implementation can sustain link speeds of upto 2.6gbps on single mode fibre , while we use the light - weight aurora protocol at a wire speed of 2.5gbps for communication between level 3 and level 2 .* _ level 2 : _ this level is implemented using an fpga ( virtex5 lx50 t ) board whose on - board resources include 2 gb of memory and 8 multi - gigabit transceivers and expansion connectors . this board can sustain the following level-2 functionalities : * * fft block : here , the data processor block first decodes data from a pair of sensors , packs them into the real and imaginary parts of a 32-bit complex integer word , and implements a pipeline stage ( e.g. , radix-64 ) of a split radix fft for all pairs of incoming channels .the processing resources are enough to handle upto 24 channels ( 2 level-3 entities ) at the maximum sampling rate . * * the data pooler block operates using the large local ddr2 ram and the interconnection with other level-2 cards to pool subsets of both local and remote level-2 data into transaction oriented packets by partitioning data along the time axis . here, each transaction refers to the processing of a specific timeslice for the entire array . * * the data router block collects data from the local ram in units appropriate for transfer to each outgoing port , packetizes them and sends out selected time slices to level-1 entities .provision for computation offloading is provided in the form of spare peer - to - peer links which can add on more level-2 cards . 
* _level 1 : _ at this level , a memory - based , nsps to commodity network bridge is implemented .large bursts of continuous time slices are first buffered in ram , and then sent out over gige as properly timestamped packets .the level implements load partitioning as configured by the root level by manipulating ethernet destination addresses of streams going into the data gige switch .there is possibility of implementing a data processor block for the remaining 16 point fft operation pending from the split radix fft .* _ level 0 : _ at the root of the nsps tree , a medium level cluster is proposed to handle both the communication and processing requirements for forming correlations between all sensors .it is important to note that the cluster inter - node traffic is significantly reduced due to the data routing and transposition carried out using the data pooling nodes at the various levels .the formation of actual correlations and the calibration parameter estimation is carried out by this level .the above mentioned partitioning of the load into the 3 levels can be used to bring a subset of data from all 264 elements into one node via a quad - gige card .the control network is a simplex , one - way channel from a master with unique i d ( in the commodity segment ) to the peer network through the bridge node .thus , while data can be routed to arbitrary nodes in the cluster depending on the udp destination addresses set during configuration , commands are accepted by the bridge node only through a privileged link from the master .this simplifies assigning privileges to operations related to starting , stopping or resetting the acquisition state machines , configuring the network routes on customized hardware switches , changing ethernet destination addresses , or changing the contents of the luts used in earlier nodes like the digitizers .while asynchronous , packetized processing over standard networks is a relatively new concept in radio telescopes , it is being embraced enthusiastically due to the many benefits it offers to the system designer .even among this class of telescope data processors , contemporary architectures usually have a direct link from the samplers to the central processor . some operations like a digital filter bank or fftare carried out remotely , while others like cross - correlation is done centrally .the memory - rich architectures of modern fpgas help in distributing computing to remote nodes and enables buffering to allow multiple passes on the streaming data .as the data volume grows , e.g. , in the central pooling stations , the processing can be supported by large off - chip memory using commercial memory modules , routinely supported by modern fpgas .this provides substantial enhancement to buffering for transaction level operations and data partitioning . a peak data rate of 100 ms / s x 4bits for 264 elements forthe ort would correspond to about 13.2gb / s for which the level-1 buffering of 22 gb in 11 virtex-5 cards shown in fig .[ fig : proposed - nsps - implementation ] is comfortable to sustain transactions of upto a fraction of a second duration .the use of standard software stacks also allows us to leverage the various high performance modes being worked upon by system optimizers , e.g. 
, the zero copy mechanism on linux .another challenging problem with large arrays is the so - called _ corner turning _ problem _ _ , _ _ which refers to the transposition of the input signal matrix needed to achieve the all - to - all communication necessary for correlation .earlier approaches have looked at either commercial switches or entirely customized switches for routing data .we break this problem down into levels , and apply a hybrid of commercial as well as custom routing .the data pooler element is utilized to implement a memory based switch , while the cots ( gige ) network controller manages another level of redirection by manipulating udp destination addresses .we have presented a packetized , heterogeneous and distributed signal processing architecture for radio interferometric signal processing which elevates the network to a core system component .the architecture addresses some of the core issues pertaining to interferometric signal processing .we visualize this problem as that of an appropriate workload creation and scheduled dispatch to matched processors over a data flow tree . here , the leaf nodes are sources of data , with data processors handling a managed slice of the processing at the intermediate nodes of this tree .we emphasize the use of cots components , both hardware and software , for rapid deployment , ease of maintenance , and lowering the cost of the architecture implementation .the goal of realizing a programmable telescope with nsps is facilitated by defining rigid interfaces between both hardware and software components .this can allow exchange of a variety of data with varying communication and computing requirements between levels in the network .most of the individual nodes in the nsps can change the nature of their processing within the limits specified by their designed personality and available resources at the node .this allows offloading of computing requirements in a hierarchical manner up the nsps tree , trading off implementation time with hardware capability of an application mode . due to the rigidity of interfacing protocols as well as the standardized networks making up the system, we can comfortably add nodes which can tap into the nsps in order to carry out a different processing chain .data duplication , if required , can be carried out by cots components ( e.g. 
, by switches operating in broadcast mode ) thus reducing development load .we have presented an outline of the nsps implementation being planned for configuring the ort as a programmable 264 element telescope .our architecture is optimally tuned to service the needs of medium sized arrays .we advocate full software processing for smaller arrays , with an increasing factor of hardware offload as the array size grows .this approach has being taken by us in building a 44 element demonstrator as a precursor to the receiver for the full 264 element ort array .this receiver exploits all nsps aspects we have dwelt on , and is in an advanced stage of completion .the 44-element demonstrator for the ort is being built as a collaborative effort between radio astronomy laboratory at raman research institute and the observatory staff at the ooty radio telescope .many of the ideas presented here evolved during the trials of demonstrator subsystems for which we specially thank colleagues both at rri and ort .crs would like to thank madan rao and dwarakanath whose comments have helped improving the manuscript .we thank the anonymous referee whose comments have been very helpful in improving the clarity of presentation in the manuscript
a new architecture is presented for a networked signal processing system (nsps) suitable for handling the real-time signal processing of multi-element radio telescopes. in this system, a multi-element radio telescope is viewed as an instance of a _multi-sensor data fusion_ problem which can be decomposed into a general set of computing and network components for which a practical and scalable architecture is enabled by current technology. the need for such a system arose in the context of an ongoing program for reconfiguring the ooty radio telescope (ort) as a programmable 264-element array, which will enable several new observing capabilities for large-scale surveys on this mature telescope. for this application, it is necessary to manage, route and combine large volumes of data whose real-time collation requires large i/o bandwidths to be sustained. since these are general requirements of many multi-sensor fusion applications, we first describe the basic architecture of the nsps in terms of a _fusion tree_ before elaborating on its application to the ort. the paper addresses issues relating to high-speed distributed data acquisition, field programmable gate array (fpga) based peer-to-peer networks supporting significant on-the-fly processing while routing, and the provision of a last-mile interface to a typical commodity network such as gigabit ethernet. the system is fundamentally a pair of co-operative networks, of which one is part of a commodity high-performance computer cluster while the other is based on commercial-off-the-shelf (cots) technology with support from software/firmware components in the public domain.
optoelectronic oscillators (oeos) combine a nonlinear modulation of laser light with optical storage to generate ultra-pure microwaves for lightwave telecommunication and radar applications. their principal specificity is their extremely low phase noise, which can be as low as dbrad/hz at khz from a ghz carrier. despite some interesting preliminary investigations, the theoretical determination of phase noise in oeos is still a partially unsolved problem. the qualitative features of this phase noise spectrum can be recovered using some heuristic guidelines or rough approximations; however, a rigorous theoretical background is still lacking. there are several reasons which can explain this absence of a theoretical background. a first reason is that before refs. , there was no time-domain model to describe such systems, so that stochastic analysis could not be used to perform the phase noise study. moreover, unlike most oscillators, the oeo is a delay-line oscillator, and very little had been done to study the effect of phase noise on time-delay induced limit-cycles. finally, the oeo is subjected to multiple noise sources, which are sometimes non-white, like the flicker (also referred to as 1/f) noise which is predominant around the microwave carrier. the objective of this work is to propose a theoretical study where all these features are taken into account. the plan of the article is the following. in section [phasediffusion], we present the phase diffusion approach in autonomous systems. it is a brief review where the fundamental concepts of phase diffusion are recalled, and where some important earlier contributions are highlighted. then, we derive in section [phasesdde] a stochastic delay-differential equation for the phase noise study. we show that for our purpose, the global interaction of noise with the system can be decomposed into two contributions, namely an additive and a multiplicative noise contribution. section [belowth] is devoted to the study of the noise spectrum below threshold. it will appear that the spectrum below threshold is not only important to validate the stochastic model, but also that it enables an accurate calibration of the additive noise. in section [aboveth], we address the problem of phase noise when there is a microwave output, using fourier analysis, and we show that it is possible to obtain an accurate image of the phase noise spectrum in all frequency ranges. the last section concludes the article. for an ideal (noise-free) oscillator, the fourier spectrum is a collection of dirac peaks, standing for the fundamental frequency and its harmonics. the effect of amplitude white noise is to add a flat background, while the peaks keep their zero linewidth; it is the effect of phase noise to widen the linewidth of these peaks. some pioneering papers on the topic of phase noise in autonomous oscillators using stochastic calculus had been published forty years ago.
in particular , it was demonstrated that a general framework to study the problem of phase noise in a self - sustained oscillator could be built using some minimalist assumptions .the first point is that a strong nonlinearity is an essential necessity in oscillators , in the sense that nonlinearity can not be regarded as small because it controls the operating level of the oscillator .the second important point is that the phase is only _ neutrally _ stable , so that quasilinear methods which assume that fluctuations from some operating point are small ( linearization techniques ) can not be applied _ directly_. the phase is neutrally stable as a consequence of the phase - invariance of autonomous oscillators . in other words , limit - cycles are stable against amplitude perturbations , while there is no mechanism able to stabilize the phase to a given value : hence , phase perturbations are undamped , but they do not diverge exponentially , though . in a noise free oscillator ,the stroboscopic " state point on the limit - cycle is immobile , but in the presence of noise , it moves randomly along the limit - cycle : in other words , the phase of the oscillator undergoes a _ diffusion _ process , in all points similar to a one - dimensional brownian motion . in the most simple case ,the random fluctuations of the phase are referred to as a _ wiener process _ , obeying an equation of the kind , where is a gaussian white noise with autocorrelation , while is a parameter referred to as the _ diffusion constant_. it can be demonstrated that the phase variance diverges linearly as , and the single - side band phase noise spectrum ( in dbc / hz ) explicitly reads ] is the slowly varying amplitude of the microwave .we can significantly simplify the right - hand side term of eq .( [ oeo_original ] ) because the cosine of a sinusoidal function of frequency can be fourier - expanded in harmonics of . in other words , since is nearly sinusoidal around , then the fourier spectrum of ] and the jacobi - anger expansion where is the -th order bessel function of the first kind .hence , since the filter of the feedback loop is narrowly resonant around , it can be demonstrated that discarding all the spectral components of the signal except the fundamental is an excellent approximation , so that eq .( [ oeo_original ] ) can be rewritten as \ , \cos [ \omega_0 ( t - t)+ \psi(t - t ) ] \ , .\label{oeo_j1}\end{aligned}\ ] ] in order to include noise effects in this equation , we will consider two main noise contributions in this system .the first contribution is an _ additive noise _ , corresponding to random environmental and internal fluctuations which are uncorrelated from the eventual existence of a microwave signal .the effect of this noise can be accounted for by addition as a langevin forcing term , to be added in the right - hand side of eq .( [ oeo_j1 ] ) .this additive noise can be assumed to be spectrally white , and since we are interested by its intensity around the carrier frequency , it can be explicitly written as where is a complex gaussian white noise , whose correlation is , so that the corresponding power density spectrum is .the second contribution is a _ multiplicative noise _ due to a noisy loop gain .effectively , the normalized gain parameter explicitly reads \ , .\label{gamma_explicit}\end{aligned}\ ] ] if all the parameters of the system are noisy ( i.e. , we replace by , by , etc . 
) , then the gain may be replaced in eq .( [ oeo_j1 ] ) by , where the is the overall gain fluctuation .we therefore introduce the dimensionless multiplicative noise which is in fact the relative gain fluctuation . in the oeo configuration, we have .this noise is in general spectrally complex , as it is the sum of noise contributions which are very different ( noise from the photodetector , from the amplifier , etc . ) . in agreement with the usual noise spectrum of amplifiers and photodetectors, we will here consider that this multiplicative noise is flicker ( i.e. , varies as ) near the carrier , and white above a certain knee - value .we therefore assume the following empirical noise power density where is the low corner frequency of the flicker noise , while is the high corner frequency .more precisely , we consider that the noise is white below and above , while it remains flicker in between .typically , we may consider hz and khz , so that the flicker noise is extended over a frequency span of more than 4 orders of magnitude . + to avoid the integral term of eq .( [ oeo_j1 ] ) which is complicated to manage analytically , it is mathematically convenient to use the intermediate integral variable which is also nearly sinusoidal with a zero mean value . using eqs .( [ oeo_j1 ] ) , ( [ decomp_noises ] ) and ( [ multiplicative_noise ] ) , it can be shown that the slowly - varying amplitude obeys the stochastic equation \ , \ ,\left [ \frac{1}{2}e^{i \omega_0 ( t - t)}e^{i\psi_t } + { \rm c.c . } \right ] \nonumber \\ & & \times { \rm j_1}[2 |\dot{{\cal b}}_t+i \omega_0 { \cal b}_t|]+ 2 \delta \omega \left [ \frac{1}{2}\zeta_a(t)e^{i \omega_0 t } + { \rm c.c . } \right ] \ , , \nonumber\\ \label{eq_calb_stoch_1}\end{aligned}\ ] ] where c.c .stands for the complex conjugate of the preceding term .we can assume and ; the relationship therefore gives , so that we can finally derive from eq .( [ eq_calb_stoch_1 ] ) the following stochastic equation for the slowly varying envelope \ , { \rm jc_1}[2 |{\cal a}_t| ] { \cal a}_t \nonumber \\ & & + \mu e^{i\vartheta } \zeta_a(t)\ , , \label{eqt_a_stoch}\end{aligned}\ ] ] where is the first order _ bessel cardinal _ function of the first kind .the phase condition has been set to , so that the dynamics of interest is restricted to the case .the key parameters of this equation are \ , , \label{def_param_sdde}\end{aligned}\ ] ] where is the quality factor of the rf filter . since , we may simply consider that and .the complex term is a kind of `` filter operator '' , which can be simply equated to the half - bandwidth when the -factor of the filter is sufficiently high , as it was done in ref .it is also noteworthy that in the complex amplitude equation ( [ eqt_a_stoch ] ) , the initial multiplicative noise remains a _variable , while the additive noise becomes _complex_. we had recently shown , in agreement with the experiment , that the oeo has three fundamental regimes . 
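The empirical multiplicative-noise spectrum described above (white below a low corner, flicker in between, white again above a high corner) can be synthesised numerically by shaping a white Gaussian sequence in the Fourier domain, as in the sketch below. The corner frequencies `f_l`, `f_h` and the overall level are illustrative placeholders, not the measured values of the experiment.

```python
# Sketch of one way to synthesise a noise whose power density is white below f_l,
# flicker (1/f) between f_l and f_h, and white again above f_h.  Corner frequencies
# and the overall level are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
fs, n = 2.0e5, 2**20                   # sampling rate [Hz], number of samples
f_l, f_h = 1.0, 3.0e4                  # hypothetical corner frequencies [Hz]
level = 1e-6                           # hypothetical white level of the spectrum

f = np.fft.rfftfreq(n, 1.0/fs)
shape = np.ones_like(f)
band = (f > f_l) & (f < f_h)
shape[band] = np.sqrt(f_l / f[band])   # amplitude ∝ 1/sqrt(f)  ->  power density ∝ 1/f
shape[f >= f_h] = np.sqrt(f_l / f_h)   # white again above f_h, continuous at f_h

# Shape a white Gaussian sequence in the Fourier domain and transform back.
white = rng.standard_normal(n)
eta = np.fft.irfft(np.fft.rfft(white) * shape) * np.sqrt(level)
print("rms of the synthetic gain fluctuation:", eta.std())
```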
for , the system does not oscillate and the trivial fixed point is stable ; for , the system sustains a pure microwave oscillation , with a constant amplitude and frequency ; and at last , for , the system enters into a regime where the amplitude of the microwave is unstable , and turns to be nonlinearly modulated .we can consider that this phenomenology is still correct as long as .with the aid of the stochastic delay - differential equation ruling the dynamics of , we may now derive analytically the power spectrum density of the oscillator , below and above threshold .however , it should be stressed that in _ all _ cases , stochastic variables should be manipulated with respect to the rules of stochastic calculus when an integral / differential transformation is applied to them .in general , no interest is paid to the study of the noise power density spectrum below threshold in oeos .this lack of interest can be explained by the fact that there is no oscillation in this regime , and the system randomly fluctuates around the trivial equilibrium .however , as we will further see , this regime is particularly interesting because it enables to understand how the noise interacts with the system . from the stability theory of delay - differential equations with complex coefficients , the deterministic solution of eq .( [ eqt_a_stoch ] ) below threshold is the trivial fixed point .after linearization around this solution , eq .( [ eqt_a_stoch ] ) can simply be rewritten as where we have used .this equation indicates that the multiplicative noise has no significative influence below threshold , because the product is a second - order term .therefore , the noise power below threshold is _ essentially _ determined by additive noise .equation ( [ a_bel_th ] ) is linear with constant coefficients : hence , the power density spectrum can directly be obtained as one can determine the total output power below threshold due to the white noise fluctuations in the system through the formula where is the output impedance ( in our case , ) .the dimensionless power can not be calculated analytically for : it can nevertheless be determined either by numerical simulation of eq .( [ a_bel_th ] ) , or through a numerical computation of the integral , where is given by eq .( [ a_bel_th_four_2 ] ) .however , in the open - loop configuration ( ) , the noisy output power can be analytically determined as through the use the fourier integral , or using fundamental results from stochastic calculus since eq .( [ a_bel_th ] ) degenerates to the well - known orstein - uhlenbeck equation .therefore , knowing the bandwidth of the rf filter and the half - wave voltage of the mz interferometer , an open - loop measurement of the output power can directly give an experimental a value for the white noise power density through eq .( [ noise_pow_gam0 ] ) . in our system, we have experimentally measured nw ( or dbm ) , which corresponds to rad/hz . this value for the powercan also be obtained by other means [ see appendix a ] . the curve displaying the power variation as a function of the normalized gain under threshold is shown in fig .[ noise_power_profile ] , and there is an excellent agreement between the experimental data and our analytical formula of eq .( [ noise_pow_true ] ) .it may be interesting to note that the noise power apparently diverges at .in fact , one should not forget that this result is obtained using eq .( [ a_bel_th ] ) , which is only valid for . 
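The qualitative content of the below-threshold analysis, namely that the noise power is set by the additive noise and grows as the gain approaches threshold, can be checked with a toy Euler-Maruyama integration of a linear stochastic delay equation. The equation used below, dA/dt = -mu [A(t) - gamma A(t-T)] + mu zeta(t), is a simplified stand-in for the linearised equation of the text (whose exact prefactors are not reproduced here), and all parameter values are hypothetical.

```python
# Toy illustration: stationary noise power of a linear stochastic delay equation grows
# as the feedback gain gamma approaches the oscillation threshold gamma = 1.
# The equation and all parameter values are simplified stand-ins, not the paper's.
import numpy as np

rng = np.random.default_rng(2)
mu = 2*np.pi*1.0e3          # half-bandwidth of the rf filter [rad/s] (hypothetical)
T = 1.0e-4                  # loop delay [s] (hypothetical)
dt = 1.0e-6                 # integration step [s]
nd = int(round(T/dt))       # delay expressed in steps

def stationary_power(gamma, nsteps=200_000):
    """Euler-Maruyama integration of dA/dt = -mu*(A - gamma*A_delayed) + mu*zeta."""
    noise = (rng.standard_normal(nsteps + nd) + 1j*rng.standard_normal(nsteps + nd)) / np.sqrt(2*dt)
    a = np.zeros(nsteps + nd, dtype=complex)
    for k in range(nd, nsteps + nd - 1):
        a[k+1] = a[k] + dt*(-mu*(a[k] - gamma*a[k-nd]) + mu*noise[k])
    return np.mean(np.abs(a[(nsteps + nd)//2:])**2)

for gamma in (0.0, 0.5, 0.9, 0.97):
    print(f"gamma = {gamma:4.2f}   <|A|^2> ~ {stationary_power(gamma):.3e}")
```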
when , the amplitude of increases and the higher order terms of the bessel cardinal function are not negligible anymore , so that the eqs .( [ a_bel_th ] ) and ( [ a_bel_th_four_2 ] ) are no more valid .hence , divergence of the noise power is prevented by the nonlinear terms of eq .( [ eqt_a_stoch ] ) which become predominant in a very narrow range just below the threshold .a noteworthy study on this topic of noisy oscillators near threshold is ref . .it is also noteworthy that for , the noise spectrum follows the spectral shape of the rf filter .however , when is increased ( still below threshold ) , a first qualitative difference emerges , since the spectrum still follows the spectral shape of the filter , but its fine structure is composed by a collection of peaks which are the signature of microwave ring - cavity modes , as it can be seen in figs .[ spec_bt_num ] and [ spec_bt_exp ] .variation of the rf noise output power as a function of the normalized gain , under threshold .the solid line is the theoretical prediction of eq .( [ noise_pow_true ] ) with rad/hz , and the symbols represent the experimentally measured data .the gain was varied through attenuation in the electric branch of the loop.,width=302 ]above threshold , the amplitude of the microwave obeys the nonlinear algebraic equation =1/(2 \gamma) ] progressively becomes negligible as is increasing , so that the ring - cavity peaks excited by white noise become strongly damped ( for being outside the rf bandwidth ) . in this case, the phase noise decays as however , the phase noise does not decrease monotonically as up to infinity : in fact , for , there is a second phase noise floor induced by the coupling between phase fluctuations and amplitude fluctuations ( second - order effect , see ref .this article has presented a theoretical study of phase noise in oeos .our approach has consisted in a langevin formalism , that is , in adding noise sources to a core deterministic model for the microwave dynamics .we have found a excellent agreement between the main predictions of the model and the experimental results .there is also an good agreement between this theory and the results that are known from the literature , or from our earlier works .the main advantage of this approach is that it enables within the same framework to understand the behavior of the system under and above threshold , as the same model continuously accounts for all the observed features independently of the value of the gain .however , we have not taken into account in this first model the noise generated by the filter ( noisy and ) , and the delay time ( noisy ) .fluctuations associated to these parameters may induce interesting stochastic features , that will be adressed in future work .another line of investigation is to achieve a better spectral and statistical fitting of the multiplicative noise , which is an essential variable for the determination of phase noise spectra .future work will also emphasize on phase noise reduction methods , such as optical filtering , multiple - loop architectures , or quadratic crossed nonlinearities .in the open - loop configuration , the total output power can also be obtained using some quantum electronics formulas .effectively , the output power can be explicitly expressed as \,g \delta f \ , , \label{app_1}\end{aligned}\ ] ] where is the total gain of our two cascaded amplifiers ( and db at ghz ) , db is the noise figure of the first amplifier , k is the room temperature , is the boltzmann constant , is the 
electron charge , ma is the photodiode current , is the equivalent load impedance for the photodiode , and mhz is the bandwidth of the rf filter .the formula gives nw , while we have measured nw . the combination of eqs .( [ noise_pow_gam0 ] ) and ( [ app_1 ] ) also gives a method to determine directly from the specifications of the various optoelectronic components used in the oscillation loop .we use it chain rules to derive the stochastic differential equation for the phase .we first rewrite eq .( [ a_above_th ] ) under the differential form { \cal a}_t dt + \mu e^{i\vartheta } \ , d{\cal w}_a \ , , \label{app_9}\end{aligned}\ ] ] where is a differential wiener process .note that and .the fact that explains why the differential terms of second order should be taken into account in stochastic calculus , so that usual differentiation and chain rules do not generally apply . when considering the second order one may consider , and discard higher order terms since \ll dt ] so that finally , this result is also the one we may have recovered through the usual rules of differential calculus ( however , note that it is not so for the equation ruling the power variable ) . also note that this equation is valid only as long as the approximation of neglecting in eq .( [ eq_calb_stoch_1 ] ) is valid .it is possible to gain a different physical insight into the phase noise problem in oeos , using an alternative methodology related to the conventional theory of feedback oscillators .we hereafter briefly sketch the main lines of this heuristical approach .the oscillator consists of an amplifier of gain ( constant ) and of a feedback path of transfer function in closed loop .the function selects the oscillation frequency , while the gain compensates for the feedback loss .this general model is independent of the nature of the amplifier and of the frequency selector .we assume that the barkhausen condition for stationary oscillation is verified at the carrier frequency by through a gain - control mechanism . under this hypothesis, the phase noise is modeled by the scheme shown in fig .[ scheme_barkhausen ] , in which all signals are _ the phases of the oscillator loop _ .the main reason for describing the oscillator in this way is that we get rid of the non - linearity , pushing it in the loop - gain stabilization .the ideal amplifier repeats " the phase of the input , for it has a gain of ( exact ) in the phase - noise model .the real amplifier introduces the random phase in the loop . in this representation ,the phase noise is always additive noise , regardless of the physical mechanism involved .this eliminates the mathematical difficulty inherent in the parametric nature of flicker noise and of the noise originated from the environment fluctuations .the feedback path is described by the transfer function of the phase perturbation . in the case of the delay - line oscillator ,the feedback path is a delay line of delay followed by a selector filter .the latter is necessary , otherwise the oscillator would oscillate at any frequency multiple of , with no preference . implementing the selector as a bandpass filter ( a resonator ) of group delay ,the phase - perturbation response of the feedback path is we assume that all the phase perturbations in the loop are collected in the random function , regardless of the physical origin ( amplifier , photodetector , optical fiber , etc . ) . denoting with the oscillator output phase ,the oscillator is described by the phase - perturbation transfer function . 
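Consistent with the basic feedback relation in which the output phase equals the injected phase perturbation plus the perturbation fed back through the delay line and the selector, one can evaluate the closed-loop magnification |1/(1 - B(jf))|^2 of the phase noise numerically. The single-pole parametrisation of the feedback response used below (a delay T in cascade with a cut-off set by the group delay tau_g) is an assumed form for illustration, not the expression of the text, and the values of T and tau_g are hypothetical.

```python
# Sketch of the heuristic feedback picture: closed-loop shaping |1/(1 - B(jf))|^2 of the
# phase perturbations, with an *assumed* single-pole form for the feedback response B.
import numpy as np

T = 20.0e-6          # loop delay [s] (hypothetical, of the order of a few km of fibre)
tau_g = 50.0e-9      # group delay of the rf selector filter [s] (hypothetical)

f = np.logspace(2, 7, 2000)                            # offset frequency from the carrier [Hz]
B = np.exp(-2j*np.pi*f*T) / (1.0 + 2j*np.pi*f*tau_g)   # assumed open-loop phase response
H2 = np.abs(1.0/(1.0 - B))**2                          # closed-loop magnification of phase noise

# Low offsets: |1 - B| ~ 2*pi*f*(T + tau_g), i.e. the loop magnifies phase noise as 1/f^2;
# near multiples of 1/T the ring-cavity peaks appear.
for fx in (1.0e3, 1.0e4, 1.0/T, 2.0/T):
    print(f"f = {fx:9.1f} Hz   |H|^2 = {np.interp(fx, f, H2):.3e}")
```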
by inspection of fig . [ scheme_barkhausen ] , and using the basic equations of feedback , the oscillator transfer function reads and the oscillator phase noise spectrum would be given by . x. s. yao and l. maleki , `` optoelectronic oscillator for photonic systems '' , _ ieee j. quantum electron . _ * 32 * , pp . 1141 - 1149 ( 1996 ) . y. kouomou chembo , l. larger , h. tavernier , r. bendoula , e. rubiola and p. colet , `` dynamic instabilities of microwaves generated with optoelectronic oscillators '' , _ opt . lett . _ * 32 * , pp . 2571 - 2573 ( 2007 ) . a. demir , a. mehrotra , and j. roychowdhury , `` phase noise in oscillators : a unifying theory and numerical methods for characterization '' , _ ieee trans . circuits syst . i , fund . theory and appl . _ * 47 * , pp . 655 - 674 ( 2000 ) . g. j. coram , `` a simple 2-d oscillator to determine the correct decomposition of perturbations into amplitude and phase noise '' , _ ieee trans . circuits syst . i , fund . theory and appl . _ * 48 * , pp . 896 - 898 ( 2001 ) . e. rubiola , _ phase noise and frequency stability in oscillators _ , cambridge university press , november 2008 ( in press ) , isbn 978 - 0521 - 88677 - 2 .
we introduce a stochastic model for the determination of phase noise in optoelectronic oscillators . after a short overview of the main results of the phase diffusion approach in autonomous oscillators , an extension is proposed for the case of optoelectronic oscillators , where the microwave is a limit cycle originating from a bifurcation induced by nonlinearity and time delay . this langevin approach , based on stochastic calculus , is also successfully confronted with experimental measurements . keywords : optoelectronic oscillators , phase noise , microwaves , semiconductor lasers , stochastic analysis .
we consider space - time block codes ( stbcs ) for an transmit antenna , receive antenna , quasi - static mimo channel ( mimo system ) with rayleigh flat fading .the system can be modeled as where is the codeword matrix transmitted over channel uses , is the received matrix , is the channel matrix and the matrix is the additive noise at the receiver .the entries of and are i.i.d .zero mean , unit variance , circularly symmetric complex gaussian random variables .( stbc ) an stbc encoding real independent information symbols , denoted by , , is a set of complex matrices given by ^t \in \mathcal{a } \right\},\ ] ] where the complex matrices , which are called _ linear dispersion _ or _ weight matrices _ , are linearly independent over the real field , , and the finite set is called the _ signal set_. ( code rate ) the rate of an stbc is the average number of information symbols transmitted in each channel use . for the stbc given by ,the code rate is real symbols per channel use , or complex symbols per channel use ( cspcu ) .the linear independence of the weight matrices in the definition of an stbc implies that . _ throughout this paper , unless otherwise specified , the code rate is taken to be in terms of complex symbols per channel use . _ generally , the signal set is chosen in such a way that the stbc has full - diversity and large coding gain . in most cases is chosen to be a subset of , where is a full - ranked matrix .one such instance is when the symbols are partitioned into multiple encoding groups , and each group of symbols is encoded independently of other groups using a lattice constellation , such as in clifford unitary weight designs ( in which case is an orthogonal matrix ) , quasi - orthogonal stbcs and coordinate interleaved orthogonal designs .there are also instances where the real symbols are encoded independently using regular pam constellations of possibly different minimum distances , in which case is a diagonal matrix with positive entries . for a complex matrix ,let its real and imaginary components be denoted by and , respectively .let denote the complex vector obtained by stacking the columns of one below the other and ^t.\ ] ] now , the system model given by can be expressed as where , and the _ equivalent channel matrix _ is given by .\ ] ] consider the vector of transformed information symbols which takes values from , where denotes the ring of integers .the components of take finite integer values , i.e. , , with for some finite .hence one can use a sphere decoder to decode and then obtain the ml estimate of the information vector .the ml decoder output is given by where denotes the frobenius norm of a matrix .it is claimed in without proof that is a sufficient condition for the system of equations defined by to be _ not underdetermined _ ,i.e. , for in section [ sec2 ] , we show that the claim made in is true only for .this observation is the gateway to the new results presented from section [ sec2 ] onwards . for a systemwhere , the sphere decoder complexity , averaged over noise and channel realizations , is _ independent of the constellation size _ and is roughly polynomial in the dimension of the sphere decoding search , , . 
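The construction of the realified equivalent channel used by the sphere decoder can be made concrete in a few lines of numpy: the i-th column is the real/imaginary stacking of vec(H A_i), which is the standard construction consistent with the description above. The Alamouti weight matrices serve only as a convenient worked example, and the helper names `realify` and `equivalent_channel` are ours.

```python
# Sketch: build the real equivalent channel of a linear dispersion code X = sum_i x_i A_i
# and check its rank.  The Alamouti code is used only to exercise the construction.
import numpy as np

def realify(v):
    """Stack real and imaginary parts of a complex vector."""
    return np.concatenate([v.real, v.imag])

def equivalent_channel(H, weights):
    """Columns are realify(vec(H @ A_i)); shape (2*n_r*T, K)."""
    cols = [realify((H @ A).flatten(order="F")) for A in weights]
    return np.column_stack(cols)

# Alamouti code (n_t = T = 2, K = 4 real symbols) as a worked example.
A = [np.array([[1, 0], [0, 1]]),  np.array([[1j, 0], [0, -1j]]),
     np.array([[0, -1], [1, 0]]), np.array([[0, 1j], [1j, 0]])]

rng = np.random.default_rng(3)
for n_r in (1, 2):
    H = (rng.standard_normal((n_r, 2)) + 1j*rng.standard_normal((n_r, 2))) / np.sqrt(2)
    Heq = equivalent_channel(H, A)
    print(f"n_r = {n_r}:  equivalent channel {Heq.shape}, rank = {np.linalg.matrix_rank(Heq)}")
```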
however , if the rank of is less than , the average sphere decoding complexity is no more independent of the constellation size .when , the conventional sphere decoder needs to be modified as follows : the matrix resulting from the -decomposition of has the form , where is a upper triangular full - rank matrix .there is a corresponding partition of as ^t ] , and ] which is non - zero with probability .hence , with probability 1 . from proposition[ pr : rank_independent_c_mult ] and theorem [ th : main_theorem ] , with probability .so , 1 . for , .since and can be decoded independently of each other , from proposition [ singular ] , the sphere decoding complexity of the first group is independent of while that of the second group is .consequently , the sphere decoding complexity of the code in is for 3 receive antennas .2 . for , and the stbc in is non - singular for 4 or more receive antennas .hence , its sphere decoding complexity is independent of .a family of -group ml decodable stbcs was constructed in for , antennas with rate cspcu .this family includes the rate code of for as a special case .the number of symbols in the stbc is . in the rest of this subsectionwe show that the sphere decoding complexity is which is polynomial in , and so is large for all , where is the smallest integer greater than or equal to .note that the sphere decoding complexity is a decreasing function of the number of receive antennas .the stbcs constructed in have a block diagonal structure .the weight matrices for antennas are of the form , where are unitary and .noting that the stbc is 2-group decodable , denote the set of weight matrices belonging to the first and the second group by and , respectively .for all the matrices in , is constant , say , and for all the matrices in , is constant , say .each group contains real symbols .we will now derive the rank of the submatrix of that corresponds to .let .consider the set where .since is unitary , from proposition [ pr : rank_independent_c_mult ] , the rank of equals the rank of , the equivalent channel matrix corresponding to .since the multiplication of all the weight matrices of an stbc by a unitary matrix does not affect its multigroup ml decodability , for any matrix belonging to and any , we have where and are block diagonal and are of the form for some unitary matrices and . since and satisfy and are block diagonal , from lemma 1 of , .thus , for , is a unitary , hermitian matrix .next we use proposition [ pr : rank_independent_basis ] to find the rank of .note that is same as the span of { \bf 0 } & { \bf 0 } \\{ \bf 0 } & \textbf{d}_{n^2 + 1 } \end{bmatrix}.\ ] ] since are linearly independent , are also linearly independent .further , the first matrix in is linearly independent of the remaining matrices and hence the dimension of the span of the remaining matrices in is . 
without loss of generality ,let us assume that are linearly independent , thus is the space of all hermitian matrices .then , equals the space spanned by { \bf 0 } & { \bf 0 } \\ { \bf 0 } & \textbf{d}_{n^2 } \end{bmatrix}.\ ] ] from proposition [ pr : rank_independent_basis ] ,it is enough if we concentrate on the stbc whose weight matrices are given by .let the channel matrix be partitioned as , where .we need to compute the dimension of the space spanned by the weight matrices multiplied on the right by which is {\bf 0 } \\\textbf{d}_{n^2}\textbf{h}_2 \end{bmatrix } \right\rangle.\ ] ] with probability , is non - zero and hence the first matrix is linearly independent of the remaining matrices . from theorem[ th : main_theorem ] , the dimension of the span of the remaining matrices is with probability .thus equals with probability .a similar result can also proved for the second ml decoding group , i.e. , for . from proposition[ pr : sum_of_ranks_multigroup ] , which equals comparing this with , we see that the stbc given in is non - singular only if .hence the code is singular for all .now , the two groups of symbols can be ml decoded independently of each other , and the number of symbols in each group is , with .thus , the sphere decoding complexity of the stbc is instead of .hence , _ multigroup ml decodability reduces the sphere decoding complexity even if the stbc is singular_. in , -group ml decodable codes for all and all even were constructed with rate .the number of symbols per group is . in this subsection , we show that the codes of this family are singular for all receive antennas and that their sphere decoding complexity is . using proposition [ pr : m_less_than_n ] ,it is clear that these codes are non - singular if and only if .the structure and derivation of the sphere decoding complexity of these codes is similar to that of the codes from , which was discussed in section [ sec3b ] .the weight matrices of the stbcs in are of the form , where .for all the matrices in the first group , for some constant matrix , and for all the matrices in the second group , for some constant matrix .the stbcs constructed in are such that for each , the submatrices of any two weight matrices belonging to different groups are hurwitz - radon orthogonal .we consider the case where , are semi - unitary i.e. , .we derive the sphere decoding complexity only for the first group . using a similar argumentthe complexity for the second group can be derived , and it is same as that of the first group .since is semi - unitary , there exists a unitary matrix such that .consider the new stbc obtained by multiplying all the weight matrices of the original stbc on the left by .then , the lower submatrix of all the weight matrices of the second group are of the form .since the lower submatrix of every matrix in the first group is hurwitz - radon orthogonal to , the weight matrices in the first group of have the following structure : \textbf{v}_1 \\\textbf{b } \\\textbf{e } \end{bmatrix} ] , where . the all zero submatrix at the upper right corner of is due to the -group ml decodability property of the code .the two sphere decoders corresponding to the two ml decoding groups use the matrices and respectively .clearly the rank of and is , and hence the sphere decoding complexity is . 
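The group-wise rank condition used in the discussion above can be tested numerically for any candidate set of weight matrices, assuming the usual realified equivalent channel. The sketch below builds the equivalent-channel columns of each ML decoding group and reports their ranks; the block-diagonal two-group toy code is our own illustration (and happens to be non-singular here), not one of the constructions of the cited papers.

```python
# Sketch: per-group rank of the equivalent-channel columns of a multigroup ML decodable
# code.  The toy block-diagonal two-group code below is only an illustration.
import numpy as np

def realify(v):
    return np.concatenate([v.real, v.imag])

def group_ranks(H, weights, groups):
    """Rank of the equivalent-channel columns belonging to each group of weight matrices."""
    ranks = []
    for g in groups:
        cols = np.column_stack([realify((H @ weights[i]).flatten(order="F")) for i in g])
        ranks.append(np.linalg.matrix_rank(cols))
    return ranks

rng = np.random.default_rng(4)
n_t = 4
# Toy 2-group code: group 1 acts on the upper-left 2x2 block, group 2 on the lower-right.
E = lambda M: np.block([[M, np.zeros((2, 2))], [np.zeros((2, 2)), np.zeros((2, 2))]])
F = lambda M: np.block([[np.zeros((2, 2)), np.zeros((2, 2))], [np.zeros((2, 2)), M]])
basis = [np.array([[1, 0], [0, 1]]), np.array([[1j, 0], [0, -1j]]),
         np.array([[0, -1], [1, 0]]), np.array([[0, 1j], [1j, 0]])]
weights = [E(B) for B in basis] + [F(B) for B in basis]
groups = [list(range(4)), list(range(4, 8))]

for n_r in (1, 2):
    H = (rng.standard_normal((n_r, n_t)) + 1j*rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    print(f"n_r = {n_r}: group ranks = {group_ranks(H, weights, groups)} (4 real symbols per group)")
```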
' '' '' the non delay - optimal code for has rate and number of symbols per group .the weight matrices of the first group are of the form , where each .the first block is one of the matrices of the form , where is hermitian , and the remaining matrices , are of the form for some set of unitary matrices . using an argument similar to the one used with delay - optimal codes, it can be shown that the sphere decoding complexity of the non delay optimal codes is of the order of and that the codes are non - singular only for . for and equalvalues of and , the non - delay optimal codes of and the codes of have the same rate and sphere decoding complexity .in this paper we have introduced the notion of singularity of stbcs and showed that all known families of high rate multigroup ml decodable codes are singular for certain number of receive antennas .the following facts which were not known before have been shown .* though the , code of and the , code of have identical rate of cspcu , the sphere decoding complexity of the code from is less than that of the code from .* for , and equal values of , the delay - optimal codes of and the codes of have equal rate and the same order of sphere decoding complexity . * for equal values of and , the codes in and the non - delay optimal codes of have identical rate and sphere decoding complexities .the results and ideas presented in this paper have brought to light the following important open problems . * is there an algebraic criterion that ensures that a code is non singular ?for example , is every code with non vanishing determinant also non - singular for all ?* do there exist high rate multigroup ml decodable codes that are non - singular for arbitrary values of ?* do there exist singular high rate multigroup ml decodable stbcs with lower sphere decoding complexity than that of the known codes ?this work was supported partly by the drdo - iisc program on advanced research in mathematical engineering through a research grant , and partly by the inae chair professorship grant to b. s. rajanproof of theorem [ th : main_theorem ] from proposition [ pr : m_less_than_n ] it is clear that the theorem is true for .thus , we will only consider the case . before giving the proof of theorem [ th : main_theorem ] we present two results which are used in the proof .let be the columns of the matrix .we first prove the result for .let the channel realization , where and .now , consider the matrix .we have , which is non - zero w.p.1 .thus , w.p.1 , the columns of are linearly independent over , and since is the first column of , this means that w.p.1 , does not belong to the column space of the matrix .now consider any .let be an permutation matrix such that .since is full - ranked , belongs to the column space of if and only if belongs to the column space of .since is unitary , the distribution of and are one and the same , and hence the probability that belongs to the column space of is .thus , with probability , does not belong to the column space of . for a given channel realization ,since is of rank w.p.1 , the dimension of over is equal to w.p.1 .let and for , let be the vector space homomorphism that sends the vector to the real number .we are interested in the dimension of the subspace of which is composed of vectors whose component is purely real , i.e. , in the dimension of . 
for any given , andhence is either or .suppose , , then , there is no vector in such that is non - zero because if such a exists , the vector belongs to and the imaginary part of its component is , and thus , which is a contradiction . since , for all the vectors in , the component is , we have =\textbf{0}\}.\ ] ] thus , the dimension of the column space of and $ ] are the same .this means that belongs to the column space of . from proposition[ pr : appendix_first ] , belongs to the column space of w.p. and hence w.p. .thus , w.p.1 . from rank - nullity theorem , w.p.1 ._ proof of theorem [ th : main_theorem ] : _ let the weight matrices of the stbc be and let the space of hermitian matrices over be given by. for a given channel realization , let be the -vector space homomorphism that sends the matrix to . clearly , is equal to the dimension of the subspace over .since is isomorphic to as vector spaces , we have thus , it is enough to show that w.p.1 . let and let denote the rows of .then , satisfies , and since the is hermitian , the first component of is purely real . from proposition [ pr :appendix_second ] , whose dimension is w.p.1 . given a choice of , since is hermitian , the first component of equals the conjugate of the second component of , and the second component of is purely real . as a result of these restrictions and since , i.e. , , belongs to a coset of the subspace of whose dimension is .similarly , given a choice for , where , the first components of are fixed and the imaginary part of the component is zero . hence , belongs to a coset of a subspace of with dimension .thus , the dimension of equals with probability .this completes the proof .+ k. p. srinath and b. s. rajan , `` low ml - decoding complexity , large coding gain , full - rate , full - diversity stbcs for 2x2 and 4x2 mimo systems , '' _ ieee j. sel .topics signal process . _ ,3 , no . 6 , pp .916 - 927 , dec . 2009
in the landmark paper by hassibi and hochwald , it is claimed without proof that the upper triangular matrix encountered during the sphere decoding of any linear dispersion code is full - ranked whenever the rate of the code is less than the minimum of the number of transmit and receive antennas . in this paper , we show that this claim is true only when the number of receive antennas is at least as large as the number of transmit antennas . we also show that all known families of high rate ( rate greater than complex symbol per channel use ) multigroup ml decodable codes have a rank - deficient matrix even when the criterion on rate is satisfied , and that this rank - deficiency problem arises only in asymmetric mimo systems with fewer receive antennas than transmit antennas . unlike the codes with a full - rank matrix , the average sphere decoding complexity of the stbcs whose matrix is rank - deficient is polynomial in the constellation size . we derive the sphere decoding complexity of most of the known high rate multigroup ml decodable codes and show that , for each code , the complexity is a decreasing function of the number of receive antennas .
with massive new data sets shrinking the statistical error bars on cosmological quantities , it is becoming increasingly important to avoid inaccuracies in their modeling and analysis .for example , one of the less interesting aspects of modern studies of large scale structure is having to deal with complex angular masks .this humble but essential task is rendered more time - consuming by the fact that angular masks may require updating as a survey progresses .the purpose of this paper is to describe a scheme intended to remove much of the drudgery and scope for inadvertent error or unnecessary approximation involved in defining and using angular masks .the scheme is implemented in a publically available software package mangle , which can be obtained , along with complete documentation , from http://casa.colorado.edu/~ajsh / mangle/. the present paper is not a software manual : for that , visit the aforesaid website .rather , the purpose of this paper is to set forward the philosophy and to detail the algorithms upon which the software is built .angular masks of galaxy survey have grown progressively more complicated through the years .the first complete redshift surveys of galaxies , the first center for astrophysics redshift survey ( cfa1 ; ) , and the first southern sky redshift survey ( ssrs1 ; ) , had rather simple angular boundaries , defined by simple cuts in declination and in galactic latitude .the level of complexity increased with the _ iras _ redshift surveys .the first of these , the _ iras _ 2 jy redshift survey , in addition to a cut in galactic latitude , excluded 1465 lunes of high cirrus or contamination by local group galaxies , each lune being an approximately square with boundaries of constant ecliptic latitude and longitude .the angular masks of subsequent _ iras _ surveys followed a similar theme , leading up to the pscz survey , whose high - latitude mask ( the one most commonly used in large scale structure studies ) consisted of the whole sky less 11,477 ecliptic lunes .the automatic plate measuring ( apm ) survey ( , b , 1996 ) and associated surveys such as the apm - stromlo survey , consisted of a union of a couple of hundred photographic plates , cut to , each drilled with a smrgsbord of holes to avoid bright stars , satellite trails , plate defects , and the like .the edges of the excluded holes were straight lines on photographic plates , but unlike the _ iras _ surveys , the holes were not necessarily rectangles , their boundaries were not necessarily lines of constant latitude and longitude , and different holes could overlap .= 6.5 in the anglo - australian telescope 2 degree field survey ( 2df ; ; ) is a redshift survey of galaxies from the apm survey , and thus inherits the holey angular mask of the apm survey .superimposed on the apm backdrop , the 2df angular mask consists of several hundred overlapping diameter circular fields .the various overlaps of the circular fields have , at least in early releases of the data , various degrees of completeness . the sloan digital sky survey ( sdss ; ) has an angular mask comparable in complexity to that of the 2df survey .it consists of several stripes from the parent photometric survey , peppered with holes masked out for a variety of reasons .superimposed on the stripes are circular fields from the redshift survey .recently , the sdss team used the mangle scheme described in the present paper as part of the business of computing the 3d galaxy power spectrum . 
with both the 2df and sdss data going public , it has seemed sensible to publish the scheme so that others can use it too .the scheme described in the present paper began life in the delightful atmosphere of an aspen center for physics workshop in 1985 .the mathematics of harmonization ( [ harmonize ] ) and other aspects of the computation of angular integrals are written up in an appendix to .the methods described therein were first applied by , and have been used regularly by him since that time .the idea of adapting the methods to deal with angular masks in a rather general way , and in particular the concept of balkanization , is new to the present paper .the mangle software has been applied to the 2df 100k survey by , and to the sdss by .the figures in this paper were prepared from files generated by the mangle software .figure [ twoqzfig ] shows a zoom of a small piece of the northern angular mask of the 2df qso redshift survey ( 2qz ) 10k release .the angular mask of this survey is defined by files ( downloadable from http://www.2dfquasar.org/spec_cat/masks.html ) giving the boundaries of : ( 1 ) ukst plates , ( 2 ) holes in ukst plates , and ( 3 ) fields .these boundaries are illustrated in the top left panel of figure [ twoqzfig ] .the 2qz team provide the completeness of the angular mask in the pixelized form illustrated in the top right panel of figure [ twoqzfig ] .the 2qz mask is typical of the way that angular masks are defined in modern galaxy surveys .[ cols= " < , < " , ] motivated by common practice , an angular mask is defined in the present paper to be an arbitrary union of arbitrarily weighted angular regions bounded by arbitrary numbers of edges .the restrictions on the mask are 1 . that each edge must be part of some circle on the sphere ( but not necessarily a great circle ) , and 2 . that the weight within each subregion of the mask must be constantthis definition of an angular mask by no means covers all theoretical possibilities , but it does reflect the actual practices of makers of galaxy surveys .the broad utility of spherical polygons to delineate angular regions is widely appreciated ; for instance , they play an integral part in the sdss database .the definition implies that an angular mask is a union of arbitrarily weighted non - overlapping polygons .a polygon is defined to be the intersection of an arbitrary number of caps , where a cap is defined to be a spherical disk , a region on the unit sphere above some line of constant latitude with respect to some arbitrary polar axis . for reference ,table [ deftable ] collects definitions of mask , polygon , cap , and certain other terms used in this paper .the bottom left panel of figure [ twoqzfig ] shows the 2qz mask ` balkanized ' ( see [ balkanize ] below ) into non - overlapping polygons .the bottom right panel of figure [ twoqzfig ] shows the mask reconstructed from spherical harmonics up to ( see [ harmonize ] below ) .the information specifying a mask ( its angular boundaries and completeness ) is collected in files which we refer to as ` polygon files ' .typically a command in the mangle suite of software will : 1 . 
read in one or more polygon files , possibly in different formats ; 2 .do something to or with the polygons ; 3 .write an output file , possibly a polygon file , or files .the strategy adopted in the mangle software is to permit the most flexible possible input format for polygon files , the idea being to be able to read the files provided by the makers of a galaxy survey as far as possible in their original form , or perhaps mildly edited .mangle reads and writes several different formats of polygon files : * circle ; * vertices ; * edges ; * rectangle ; * polygon . for convenience , there are five additional formats that provide useful information about polygons , but that can only be written , not read , because the information they provide is too limited , or ambiguous , to specify polygons completely .the five output only formats are : area ; graphics ; i d ; midpoint ; weight .an abbreviated description of each format appears below ; see http://casa.colorado.edu/~ajsh/mangle/ for full details .the * circle * format is able to describe polygons in all generality .a circle is defined by the azimuth and elevation of its north polar axis , and by the angular radius , the polar angle , of the circle .each circle defines a cap .a polygon is an intersection of caps , and a line of the form containing angles defines a polygon with caps .the * vertices * format specifies polygons by a sequence of vertices , assumed to be joined by great circles .the general form of a line specifying a polygon in vertices format is which defines a polygon with caps . in vertices format ,a line with angles defines a polygon with n caps .the * edges * format is a souped - up version of the vertices format .whereas the vertices format joins each pair of vertices with a great circle , the edges format uses an additional point ( or additional points ) between each pair of vertices to define the shape of the circle joining the vertices .although the edges format retains more information about a polygon than the vertices format , in general it does not retain all information about a polygon .a rectangle is a special kind of 4-cap polygon bounded by lines of constant azimuth and elevation .the * rectangle * format is offered not only because some masks are defined this way ( for example , the _ iras _ masks ) , but also because the symmetry of rectangles permits accelerated computation of their spherical harmonics ( [ harmonize ] ) .a line in rectangle format looks like with precisely 4 angles .the * polygon * format is the default output format for polygon files . besides the circle format , it is the only other format that is able to describe polygons in all generality without loss of information .it stores each cap , not as three angles as in the circle format , but rather as a unit vector along the north pole of the cap , together with a quantity , which is equal both to the area of the cap divided by , and to half the square of the 3-dimensional distance between the north pole and the cap boundary .it seems doubtful that one would want to create an original mask file in polygon format , since it is a bit peculiar , but it is the format used internally by the mangle software , and it specifies polygons in the manner expected for many years past by the fortran backend .the advantage of the format is that some computational operations are simpler and faster if the cap axis is stored as a unit vector rather than as an azimuth and elevation . 
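The polygon format suggests a very compact point-in-mask test, sketched below: a cap is kept as its pole (a unit vector) together with cm = 1 - cos(theta), which is half the squared chord between pole and boundary, a point is inside the cap when 1 - pole.dot(point) <= cm, and inside a polygon when it is inside every one of its caps. Complemented caps and the sign conventions of actual mangle polygon files are not handled in this sketch, and the example polygon is hypothetical.

```python
# Minimal sketch of the point-in-polygon test implied by the polygon format: each cap is
# (pole unit vector, cm = 1 - cos(theta)); a polygon is the intersection of its caps.
import numpy as np

def radec_to_unit(ra_deg, dec_deg):
    """Unit vector for an (azimuth, elevation) position in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([np.cos(dec)*np.cos(ra), np.cos(dec)*np.sin(ra), np.sin(dec)])

def in_cap(point, axis, cm):
    """Point lies inside the spherical disk of 'size' cm = 1 - cos(theta) about axis."""
    return 1.0 - np.dot(axis, point) <= cm

def in_polygon(point, caps):
    """A polygon is the intersection of its caps."""
    return all(in_cap(point, axis, cm) for axis, cm in caps)

# Hypothetical 2-cap polygon: the lens shared by two caps of 40 degree radius.
caps = [(radec_to_unit(0.0, 60.0), 1 - np.cos(np.radians(40.0))),
        (radec_to_unit(30.0, 60.0), 1 - np.cos(np.radians(40.0)))]

for ra, dec in [(15.0, 60.0), (15.0, 10.0)]:
    print((ra, dec), "inside" if in_polygon(radec_to_unit(ra, dec), caps) else "outside")
```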
of the purely output formats ,one of the most useful is the * graphics * format , which is useful for making plots of polygons .the mangle software does not incorporate any plotting software : it is assumed that you have your own favourite plotting package .the graphics format is similar to edges format , but is generally more economical .whereas in edges format there is a specified number of points per edge , in graphics format there is a specified number of points per of azimuthal angle along each edge .thus in graphics format curvier edges get more points than straighter edges .the graphics format is implemented only as output , not as input , because of ambiguity in the interpretation of the format .another useful output format is the * midpoint * format , which returns a list containing the angular position of a point inside each polygon of a mask .this can be helpful in assigning weights to the polygons of a mask , if you have your own software that returns a weight given an angular position .[ midpoint ] for more about how midpoints of polygons are computed .the * area * , * i d * , and * weight * output formats give lists of , respectively , the areas , identity numbers , and weights ( completenesses ) of the polygons of a mask .one of the basic tasks that the mangle software does is resolve a mask into a set of non - overlapping polygons .this takes place in a sequence of four steps , elaborated in the subsections following . 1 .snap ; 2 .balkanize ; 3 . weight ; 4 .unify . resolving a mask into non - overlapping polygonsgreatly simplifies the logic of dealing with a mask , since it allows subsequent processing ( generation of random catalogues , computation of spherical harmonics , plotting , etc . ) to proceed without recourse to the intricate hierarchy of overlapping geometric entities and the associated complicated series of inclusion and exclusion rules that tend to characterize a survey mask .each individual polygon is by definition an intersection of caps .geometrically , this implies that a polygon is convex : the interior angles at the vertices of a polygon are all less than .the requirement that a polygon be an intersection of caps greatly simplifies the logic , since it means that a point lies inside a polygon if and only if it lies inside each of the caps of the polygon .the first thing that must be done on all the original polygon files of a mask is to ` snap ' them .this process identifies almost coincident cap boundaries and snaps them together .the problem is that the positions of the intersections of two almost but not exactly coincident circles ( cap boundaries ) on the unit sphere may be subject to significant numerical uncertainty . to avoid numerical problems ,such circles must be made exactly coincident .you might think that that near - but - not - exactly - coincident circles would hardly ever happen , but in practice they occur often , because a mask designer tries to make two polygons abut , but imprecision or numerical roundoff defeats an exact abutment .the snap process adjusts the edges of each polygon , but it leaves the number and order of polygons the same as in the input file(s ) .edges that appear later in the input file(s ) are snapped to earlier edges .the snap process offers four tunable tolerances : * axis tolerance .are the axes of two caps within this angular tolerance of each other , either parallel or anti - parallel ?if so , change the axis of the second cap to equal that of the first cap . * latitude tolerance . 
if the axes of two caps coincide , are their latitude boundaries within this tolerance of each other ?if so , change the latitude of the second cap to equal that of the first cap .the two caps may lie either on the same or on opposite sides of the latitude boundary . * edge tolerance , and edge to length tolerance .are the two endpoints and midpoint of an edge closer to a cap boundary than the lesser of ( a ) the edge tolerance , and ( b ) the edge to length tolerance times the length of the edge ?in addition , does at least one of the two endpoints or midpoint of the edge lie inside all other caps of the polygon that owns the cap boundary ?if so , change the edge to align with the cap boundary .the purpose of the first two of these tolerances , the axis tolerance and the latitude tolerance , is obvious .the remaining two tolerances , the edge tolerance and the edge to length tolerance , are necessary because it is possible for two edges , if they are short enough , to almost coincide even though the axes and latitudes of their corresponding caps differ significantly . by default , the three angular tolerances ( axis , latitude , and edge ) are all two arcseconds , which is probably sufficient for typical large scale structure masks . the tolerances can be tightened considerably before numerical problems begin to occur , so it is fine to tighten the tolerance for a mask whose edges are more precisely defined . the default edge to length tolerance , should be fine in virtually all cases .the snap process accomplishes its work in two stages : * snap axes and latitudes of pairs of caps together , passing repeatedly through all pairs of caps until no more caps are snapped . *snap edges of polygons to other edges , again passing repeatedly through all pairs of caps until no more caps are snapped . as a finishing touch ,snap prunes each of the snapped polygons in order to eliminate superfluous caps , those whose removal leaves the area of the polygon unchanged . the process of resolving a mask into disjoint polygons we dub ` balkanization ' , since it fragments an input set of possibly overlapping polygons into many non - overlapping connected polygons .the process involves two successive stages : 1 .fragment the polygons into non - overlapping polygons , some of which may be disconnected .identify disconnected polygons and subdivide them into connected parts .= 1.8 in the algorithm for the first stage of balkanization is simple and pretty : * is the intersection of two polygons neither empty nor equal to the first polygon ?if so , find a circle , a cap boundary , of the second polygon that divides the first polygon , and split the first polygon into two along that circle .* iterate .notice that only one of the two parts of the split polygon overlaps the second polygon , and that only the overlapping part needs iterating . for any pair of polygons ,iteration ceases when the overlapping part lies entirely inside the second polygon .the final overlapping part is equal to the intersection of the original first polygon with the second polygon .all other fragments of the first polygon lie outside the second polygon .figure [ balkanizefig ] illustrates an example of the first stage of balkanization for two overlapping polygons a and b. first , a is split against b , which takes two iterations of the above cycle .then , b is split against a. 
again , this takes two iterations of the above cycle .the final system consists of 5 non - overlapping polygons .note that splitting the system shown in panel ( a ) of figure [ balkanizefig ] into its three connected parts ( the part of a that does not intersect b , the part of b that does not intersect a , and the intersection ab of a and b ) would not constitute a successful balkanization , since two of these regions are not convex and hence not polygons .one might ask , why not stop at panel ( c ) in figure [ balkanizefig ] ?do not the three polygons there already form a satisfactory set of non - overlapping polygons ?the answer is that the intersection polygon ab may well have a weight different from those of the non - overlapping parts of the parent a and b polygons ( this is typically true for example in the 2df and sdss surveys ) . to deal with this eventuality, balkanization must continue to completion , as illustrated in panel ( e ) .the question of whether two polygons overlap is determined by computing the area of the intersection of the polygons .the area is proportional to the monopole harmonic , computed as described in [ harmonize ] .the intersection of two polygons is itself a polygon , consisting of the intersection of the two sets of caps defining the polygons .stage 1 of the balkanization procedure yields polygons that can contain two or more connected parts , as illustrated in figure [ discpolfig ] .stage 2 attempts to subdivide such disconnected polygons into connected parts by computing the connected boundaries of the polygon , and lassoing ( see [ lasso ] ) each connected boundary with an extra circle .figure [ eyefig ] illustrates a polygon that has two distinct connected boundaries by virtue of being not simply - connected rather than not connected .a region is said to be * simply - connected * if , according to the usual mathematical definition , it is connected and any closed curve within it can be continuously shrunk to a single point . loosely speaking ,this means that a simply - connected polygon has no holes .because it is connected , the polygon of figure [ eyefig ] need not be split .the strategy to deal with non - simply - connected polygons is based on the following theorem , proven in the appendix : * a connected part of a polygon is simply - connected if and only if all the boundaries of the connected part belong to a single group*. a group is defined here as follows : two circles are friends , belonging to the same group , if they intersect ( anywhere , not necessarily inside the polygon ) , and friends of friends are friends . according to this definition of group , the circles on a single connected boundary necessarily all belong to the same group . however , the circles on two distinct connected boundaries may or may not belong to the same group .this theorem implies that it is necessary to lasso only those boundaries of a polygon that belong to the same group . in figure [ discpolfig ] , for example , the two boundaries of the polygon belong to the same group of three intersecting circles , so these two boundaries must be lassoed , partitioning the polygon into two parts . in figure [ eyefig ] , on the other hand , the two boundaries belong to two separate groups , and need not be lassoed . 
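The friends-of-friends grouping of cap boundaries just defined can be implemented as a standard union-find pass over all pairs of circles, as in the sketch below. The pairwise intersection test used here (|theta_1 - theta_2| <= psi <= theta_1 + theta_2, with psi the angle between the two poles) ignores the wrap-around cases of nearly hemispherical caps, which a production implementation would need to treat.

```python
# Sketch: friends-of-friends grouping of circles (cap boundaries) on the sphere.
# Two circles are friends if they intersect anywhere; friendship is transitive.
import numpy as np

def circles_intersect(axis1, t1, axis2, t2):
    psi = np.arccos(np.clip(np.dot(axis1, axis2), -1.0, 1.0))
    return abs(t1 - t2) <= psi <= t1 + t2

def groups(circles):
    """circles: list of (unit axis, polar angle theta); returns a list of index lists."""
    parent = list(range(len(circles)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            if circles_intersect(*circles[i], *circles[j]):
                parent[find(i)] = find(j)
    out = {}
    for i in range(len(circles)):
        out.setdefault(find(i), []).append(i)
    return list(out.values())

z = np.array([0.0, 0.0, 1.0]); x = np.array([1.0, 0.0, 0.0])
test = [(z, np.radians(30)),                                               # cap about the pole
        (np.array([np.sin(np.radians(20)), 0, np.cos(np.radians(20))]), np.radians(15)),
        (x, np.radians(5))]                                                # isolated small cap
print(groups(test))    # expect circles 0 and 1 in one group, circle 2 on its own
```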
figure [ discpolsfig ] illustrates a more complicated polygon , similar to the polygon of figure [ discpolfig ] but pierced with two circular holes .the polygon contains four boundaries belonging to three groups .the two original boundaries inherited from figure [ discpolfig ] belong to the same intersecting group of circles , but the additional two holes form two separate groups . hereonly the two boundaries belonging to the same group need lassoing .= 2.5 in figure [ fractalfig ] illustrates a yet more complicated multiply - connected polygon .the mangle software balkanizes this polygon correctly into seven polygons , a stringent test of the algorithms .a corollary of the theorem proven in the appendix is that the polygon formed by the intersection of the caps bounded by the circles of a single group must be a union of simply - connected parts . for example, the two parts of the polygon in figure [ discpolfig ] must be simply - connected which evidently they are because the circles of the polygon all belong to the same group .in the course of the proof in the appendix , it is shown that if the boundary of a polygon falls into two ( or more ) groups , then the circles of a second group must lie entirely inside exactly one of the simply - connected parts of the polygon bounded by the first group .for example , each of the two circles bounding the two holes in the polygon of figure [ discpolsfig ] must lie entirely inside exactly one of the two simply - connected parts of the original polygon from figure [ discpolfig ] , which again is evidently true .it follows from the statement of the previous paragraph that in lassoing the connected boundaries of a group , it is necessary to consider only the boundaries belong to the same group : any boundary belonging to another group can be ignored , because it must lie entirely inside one of the simply - connected parts bounded by the first group .thus each lassoing circle is required to enclose fully its connected boundary , while excluding fully all other connected boundaries belonging to the same group ; there is no constraint on the lasso from boundaries belonging to other groups .if a group of circles of a polygon defines a single boundary , then that boundary needs no lassoing , but stage 2 balkanization nevertheless attempts to lasso the boundary if the number of caps of the group exceeds the number of vertices .for example , in its original configuration the polygon shown in figure [ eggboxfig ] has a large number of caps .none of the caps can be discarded , since each excludes a small piece of the sky .here it is advantageous to lasso the polygon with an extra circle , allowing most of the original caps to be discarded as superfluous .a lasso that lassos the lone boundary of a group is discarded if the lasso completely encloses all the circles of the group to which the boundary belongs . forthen either the lasso completely encloses the polygon , in which case it is superfluous , or else the lasso lies completely inside the simply - connected region bounded by the lone boundary , in which case the lasso , if kept , would divide the simply - connected region in two , which would be incorrect . 
if on the other hand the lasso of a lone boundary of a group intersects at least one of the circles of the group , then the lasso must completely enclose the simply - connected region bounded by the lone boundary , as in figure [ eggboxfig ] ; the lasso can not lie inside the simply - connected region because it is being assumed that the lasso intersects a circle of the group , whereas no such circle can exist within the simply - connected region .stage 2 balkanization may need more than one pass to succeed .figure [ chainfig ] shows an example of a polygon , bounded by one large cap punctuated by fifteen small caps , that contains four parts bounded by four boundaries all belonging to the same single group .the top and bottom boundaries can be lassoed successfully with single circles , but the middle two boundaries can not : any circle that encloses either of the middle boundaries necessarily intersects another boundary somewhere . herestage 2 balkanization succeeds by submitting the polygon to two passes . in the first pass ,the polygon is split into three polygons , consisting of the top and bottom connected parts , plus a third polygon containing the two middle parts . in the secondpass , the third polygon is split into two , completing the partitioning of the original polygon into its four parts . in certain convoluted cases , such as the polygon shown in figure [ vfig ], it can be impossible to lasso any of the connected boundaries of the polygon with a circle that wholly encloses a connected boundary while wholly excluding all other connected boundaries in the same group .stage 2 balkanization gives up attempting to lasso a boundary after a certain maximum number of attempts , but it keeps a record of the best - attempt lasso , the one that encloses as much as possible of a boundary while wholly excluding all other boundaries in the same group .stage 2 balkanization proceeds to split the polygon into two parts with the best - attempt lasso , and then submits the two parts to a further pass .the polygon of figure [ vfig ] , for example , contains two parts bounded by two connected boundaries neither of which can be lassoed with a circle that completely encloses the boundary while completely excluding the other boundary .finding no satisfactory lasso , stage 2 balkanization splits the polygon into two polygons with a best - attempt lasso , shown as a dashed line in figure [ vfig ] .the two polygons are then submitted to further passes of stage 2 balkanization , which in this case succeeds with one pass .the upshot is that the original polygon is balkanized into four polygons .= 2.5 in it is conceivable that the algorithm of the above paragraph could continue for ever , continually splitting a polygon into two and continually failing to lasso successfully all the boundaries of the split polygons .however , polygons that defy lassoing have to be filamentary in character ( long , thin , and windy ) , such as that shown in figure [ wfig ] , and splitting such a polygon in two generally makes it less filamentary , like putting spaghetti in a blender . in the case of the polygon of figure [ wfig ] ,stage 2 balkanization forcibly splits the polygon 8 times , eventually balkanizing the 13-part polygon into 34 disjoint parts .suffice to say that we know of no polygon that fails the algorithm , and it is possible that no such polygon exists . if the reader finds one , please tell us about it . 
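The keep-or-discard rule for the lasso of a lone boundary can be phrased as a simple geometric test: discard the lasso if it completely encloses every circle of the group, keep it if it fails to enclose (hence intersects) at least one of them, as in the egg-box example. The sketch below encodes our reading of that rule for circles given as (pole, polar angle); near-hemispherical caps and wrap-around cases are again left aside.

```python
# Sketch of the keep-or-discard rule for a lasso around the lone boundary of a group.
import numpy as np

def angle_between(a, b):
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def encloses(lasso, circle):
    """Does the lasso circle completely enclose the given circle?"""
    (la, lt), (ca, ct) = lasso, circle
    return angle_between(la, ca) + ct <= lt

def keep_lone_boundary_lasso(lasso, group):
    """Keep the lasso only if it fails to enclose at least one circle of the group."""
    return not all(encloses(lasso, c) for c in group)

z = np.array([0.0, 0.0, 1.0])
tilted = np.array([np.sin(np.radians(25)), 0.0, np.cos(np.radians(25))])
group = [(z, np.radians(20)), (tilted, np.radians(20))]

print(keep_lone_boundary_lasso((z, np.radians(60)), group))   # encloses both -> discard (False)
print(keep_lone_boundary_lasso((z, np.radians(30)), group))   # cuts the tilted circle -> keep (True)
```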
in practice , the mangle software bails out if a polygon has to be split forcibly in two more than a certain maximum number ( 100 ) of times . even in this last - gasp case , the set of polygons output by balkanization still constitutes a valid set of non - overlapping polygons that completely tile the mask . the only problem is that the ` failed ' polygons , those which could not be partitioned completely , may contain two or more disjoint parts with different weights . to finish , balkanization prunes each of the balkanized polygons in order to eliminate superfluous caps , those whose removal leaves the area of the polygon unchanged . for example , pruning discards the many superfluous caps of the polygon of figure [ eggboxfig ] . the caps are tested in order , with any new lassoing cap being tested last , so that the many superfluous caps are discarded , and the lassoing cap is kept . the algorithm to lasso a connected boundary of a polygon is to pick a point , initially taken to be the barycentre of the centres of the edges of the connected boundary ( or , if the connected boundary consists of a single circle , the centre of that circle ) , and find the circle centred on that point which most tightly encloses the boundary . the lassoing circle is enlarged slightly if possible , as a precaution against numerical problems that might potentially occur if the lasso just touched an edge or vertex of the boundary . a lasso that lassos , i.e. that encloses completely , one connected boundary of a polygon , is required to exclude completely all other connected boundaries belonging to the same group . a lasso attempt can sometimes fail if a polygon has two or more connected boundaries . a lasso attempt fails if the angular distance from the centre of the lasso to the farthest point on the to - be - lassoed connected boundary is greater than the angular distance from the centre of the lasso to the nearest point on all other connected boundaries belonging to the same group . if a lasso attempt fails , then the centre point of the lasso is shifted over the unit sphere along the direction from the offending nearest point on the other boundaries towards the farthest point on the to - be - lassoed boundary , by an amount that puts the centre point just slightly closer to the latter than to the former . the lasso is then reattempted . the process of shifting the centre point and retrying a lasso is repeated until either the lasso succeeds , or until a certain maximum number of attempts has been made . multiple intersections occur where 3 or more circles ( cap boundaries ) intersect at a single point . multiple intersections pose a potential source of numerical problems , because the topology around multiple intersections may vary depending on numerics , as illustrated in figure [ multfig ] . circles kiss if they just touch .
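before turning to multiple intersections and kissing circles in more detail , the lasso - with - retries procedure described above can be sketched as follows . this is a simplified illustration under stated assumptions : the boundary to be lassoed and the boundaries to be excluded are represented by sampled unit vectors , the seed centre is the normalised barycentre of the target samples rather than of the edge centres , and the centre shift follows the verbal rule above ; the real mangle code works with the exact edge geometry .

import numpy as np

def ang(a, b):
    """angular distance ( radians ) between unit 3-vectors a and b."""
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

def try_lasso(target, others, max_attempts=100):
    """target : array of unit vectors sampling the boundary to be lassoed
    others : list of sampled boundaries of the same group to be excluded
    returns ( centre , radius , success ) ; on failure the best attempt so far"""
    centre = target.mean(axis=0)
    centre /= np.linalg.norm(centre)
    for _ in range(max_attempts):
        # tightest circle about the current centre that encloses the target
        dists = [ang(centre, p) for p in target]
        i_far = int(np.argmax(dists))
        r_far = dists[i_far]
        # nearest point of any other boundary of the same group
        near_pt, r_near = None, np.pi
        for boundary in others:
            for p in boundary:
                d = ang(centre, p)
                if d < r_near:
                    near_pt, r_near = p, d
        if r_far < r_near or near_pt is None:
            # success : pick a radius between the enclosing and excluding
            # constraints ( mangle instead enlarges the tightest circle slightly )
            return centre, 0.5 * (r_far + r_near), True
        # failure : shift the centre along the direction from the offending
        # nearest point towards the farthest target point , then retry
        centre = centre + 0.5 * (target[i_far] - near_pt)
        centre /= np.linalg.norm(centre)
    return centre, r_far, False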
again , kissing circles pose a potential source of numerical problems , because whether two circles kiss may vary depending on numerics , as illustrated in figure [ kissfig ] . mangle is equipped to deal with both multiply - intersecting and kissing circles , and should cope in almost all cases , although it is possible to fool mangle with a sufficiently complicated polygon , for example a polygon whose vertices have a fractal distribution of separations . the strategy is as follows . circles are considered to be multiply intersecting , crossing at a single vertex , if the intersections are closer than a certain tolerance angle . similarly , circles are considered to kiss , touching at a single vertex , if their kissing distance is closer than the tolerance angle . the algorithm is friends - of - friends : two vertices closer than the tolerance are friends , and a friend of a friend is a friend . the position of each vertex of a polygon , where one edge intersects ( or kisses ) another , is computed two different ways , first as the intersection of the first edge with the second , then as the intersection of the second edge with the first . for each of the two ways of computing it , the intersection is tested against all other circles , to determine whether the intersection is or is not a vertex of the polygon , that is , whether the intersection lies on the edge of the polygon , or outside the polygon . for consistency , the test should give the same result in both computations : the intersection should be a vertex in both cases , or it should not be a vertex in both cases . if an inconsistency is detected , then the tolerance angle is doubled ( or set to a tiny number , if the tolerance is zero ) , and the computation is repeated for the inconsistent polygon , until consistency is achieved . by default , the initial tolerance angle for multiple intersections and kissings is arcseconds . in figure [ multfig ] , the intersection of the two diagonals is a vertex of the polygon in the left panel , but is not a vertex in the right panel , because it lies outside the polygon . in the case of an exact multiple intersection , as in the middle panel of figure [ multfig ] , the intersection of two circles is considered to be a vertex of the polygon only if both circles are edges of the polygon . thus the intersection of the two diagonals in the middle panel of figure [ multfig ] is a vertex , because both diagonals are edges , but the intersection of the horizontal with either diagonal is not a vertex , because the horizontal is not an edge of the polygon . if a circle is an edge , and it intersects multiply with a bunch of other circles , then the adjacent edge is formed by the circle which ` bends most tightly ' around the polygon , that is , the circle whose interior angle at the vertex is the smallest , or , if two circles subtend the same interior angle ( within the tolerance angle ) , then the circle whose polar angle is the smallest . in figure [ kissfig ] , the two circles intersect at no vertices in the left panel , and at two vertices in the right panel .
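a minimal sketch of the friends - of - friends grouping described above is given below , in python . it assumes vertices are represented by unit 3-vectors and uses a simple union - find ; it illustrates the stated rule ( vertices closer than the tolerance are friends , and a friend of a friend is a friend ) rather than the mangle code itself .

import numpy as np

def friends_of_friends(vertices, tol):
    """group vertices ( unit 3-vectors ) lying within angular distance tol
    ( radians ) of one another , directly or through chains of friends ;
    returns a group label for each vertex ."""
    n = len(vertices)
    parent = list(range(n))

    def find(i):                      # union - find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            cosang = np.clip(np.dot(vertices[i], vertices[j]), -1.0, 1.0)
            if np.arccos(cosang) < tol:   # closer than the tolerance : friends
                union(i, j)
    return [find(i) for i in range(n)]

each resulting group of friends is then treated as a single multiple - intersection ( or kissing ) vertex .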
in the case of an exact kiss , as in the middle panel of figure [ kissfig ] , the kissing point is considered to be a vertex only if both kissing circles are edges of the polygon . thus the kissing point is _ not _ a vertex of the upper and lower polygons , the two disks , but it is a vertex of the middle polygon , the pointy one . two polygons that just touch or kiss at a single isolated point ( or at a set of isolated points ) are considered to be disconnected from each other . thus for example the top and bottom polygons in the middle panel of figure [ kissfig ] are considered to be disconnected from each other ; and similarly the left and right polygons in the middle panel of figure [ kissfig ] are considered to be disconnected from each other . in practice , consistency of the topology of the distribution of vertices around a polygon is checked by means of a 64-bit check number . if the intersection of one edge with another ( two edges can intersect at two separate points , so an intersection is labelled by an ordered pair of edges , going from the first edge to the second right - handedly around the boundary of the polygon ) is determined to be a vertex of the polygon , then a 64-bit pseudo - random integer is added to the check number , and if the same intersection computed with the two edges in the opposite order is determined to be a vertex of the polygon , then the same 64-bit pseudo - random integer is subtracted from the check number . for consistency , the check number should be zero for the entire polygon . it is conceivable , with probability 1 in $2^{64}$ , or less than 1 in 10 billion billion , that the check number could evaluate to zero accidentally , but this seems small enough not to worry about , especially since inconsistency should be a rare occurrence in the first place . figure [ flowerfig ] illustrates a mask designed to be as ` difficult ' as possible : it contains many multiply intersecting and nearly multiply - intersecting circles , and many kissing and nearly kissing circles , including several simultaneously multiply - intersecting and multiply - kissing circles . the mangle software copes with this , a non - trivial accomplishment . the sum of the areas of the 332 polygons of the balkanized mask of figure [ flowerfig ] differs from the area of the overall bounding rectangle by only a tiny fraction of that area , which is definitely satisfactory . given the algorithms , one could expect the numerical uncertainty in the area of a single polygon to be no better than machine precision times some problem - dependent factor ; machine precision on the machine used for this computation was of order $10^{-16}$ . when balkanizing , does the order of the polygons in the input polygon files matter ? the answer is yes , if the input polygons overlap , and if the overlapping polygons carry different weights . as described in the following subsection [ weight ] , if two polygons overlap , then the weight of the polygon that appears later in the input file(s ) overrides the weight of the earlier polygon . if all polygons have the same weight ( say 1 ) , then the order of the input polygon files does not really matter . however , it may lead to a slightly smaller eventual polygon file ( after unifying , see [ unify ] ) if large , coarse polygons are put first , and small , finely detailed polygons are put last . each connected polygon of a mask may have a different weight .
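returning for a moment to the consistency test described above , the check - number idea can be illustrated with a short python sketch . the deterministic 64-bit hash of the unordered pair of edge indices below stands in for whatever pseudo - random integers mangle actually uses ; the point is only that the same integer is added for one ordering of the pair and subtracted for the other , so the total vanishes exactly when the two computations agree .

import hashlib

def pair_hash(i, j):
    """deterministic 64-bit integer for the unordered edge pair { i , j }."""
    key = f"{min(i, j)}:{max(i, j)}".encode()
    return int.from_bytes(hashlib.blake2b(key, digest_size=8).digest(), "big")

def topology_check(vertices_ij, vertices_ji):
    """vertices_ij : ordered edge pairs ( i , j ) found to be vertices when
    the intersection is computed as edge i crossed with edge j
    vertices_ji : the same set recomputed with the edges in the opposite
    order , recorded as pairs ( j , i )
    returns True if the two computations are topologically consistent ."""
    check = 0
    for i, j in vertices_ij:
        check = (check + pair_hash(i, j)) % (1 << 64)
    for j, i in vertices_ji:
        check = (check - pair_hash(i, j)) % (1 << 64)
    return check == 0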
in galaxy surveys , the ` weight ' attached to a polygon is the completeness of the survey in that polygon . these weights must be supplied by the user . if no weights are supplied , then the weight defaults to 1 . if the input polygons of a mask overlap , then the policy adopted by the mangle software is to allow the weights of later polygons in polygon files to override the weights of earlier polygons . thus for example , to drill empty holes in a region , one would put the polygons of the parent region first ( with weight 1 , perhaps ) , and follow them with polygons specifying the holes ( with weight 0 ) . there are three ways to apply weights to polygons . the first way is simply to edit the polygon file or files specifying the mask . attached to each polygon in a polygon file is a line that includes a number for the weight ; one simply edits that number . the second way is to specify weights in a file . the mangle software contains a facility to read in these weights , and to apply them successively to the polygons of a polygon file . suppose that you have your own software that returns a weight given an angular position in a mask . the mangle software includes a utility ( [ midpoint ] ) to create a file giving the angular position of a point inside each polygon of a mask . this file of angular positions becomes input to your own software , which should create a file of weights , which in turn can be fed back to mangle . the third way to apply weights to polygons is to write a subroutine ( in either fortran or c ) that returns a weight given an angular position , and compile it into mangle . the mangle software includes some template examples of how to do this . in our experience , method two is the method of choice , except in cases that are simple enough that method one suffices . the set of non - overlapping polygons that emerges from balkanizing and weighting may be more complicated than necessary . the mangle software includes a facility for simplifying the polygons of a mask , which we call unification . unification is not strictly necessary , but it tidies things up , and it can save subsequent operations , such as harmonization ( see [ harmonize ] ) , a lot of computer time . unification eliminates polygons with zero weight , and does its best to merge polygons with the same weight . the algorithm is to pass repeatedly through a set of polygons , merging a pair of polygons wherever the pair can be merged into a single polygon by eliminating a single abutting edge . figure [ unifyfig ] illustrates an example of the unification procedure . unification does not necessarily accomplish the most efficient unification , nor , as illustrated in figure [ nounifyfig ] , is unification necessarily exhaustive . the mangle software contains several utilities that do various things with a mask . one of the most important of these is a utility to take the spherical harmonic transform of a mask , a process we call harmonization .
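before describing harmonization , the later - overrides - earlier weighting policy described above can be illustrated with a small python sketch . the cap representation used here ( a cap given by a unit axis vector and the cosine of its angular radius , a polygon given by the intersection of its caps ) is a simplification assumed for illustration ; mangle 's caps carry an extra sign allowing a cap to be complemented , and overlaps are in practice resolved once and for all during balkanization rather than at every point .

import numpy as np

def in_cap(point, axis, costheta):
    """True if unit vector point lies in the spherical cap of angular
    radius arccos(costheta) about unit vector axis ."""
    return np.dot(point, axis) >= costheta

def in_polygon(point, caps):
    """a polygon is the intersection of caps : inside means inside every cap ."""
    return all(in_cap(point, axis, costheta) for axis, costheta in caps)

def weight_at(point, weighted_polygons):
    """weighted_polygons : sequence of ( caps , weight ) pairs in input - file order ;
    returns the weight at point , namely the weight of the last polygon in the
    input that contains the point , or 0 outside all polygons ."""
    weight = 0.0
    for caps, w in weighted_polygons:
        if in_polygon(point, caps):    # later polygons override earlier ones
            weight = w
    return weight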
in particular , the area of a mask is proportional to the zeroth harmonic . computation of the area of a polygon is basic to several of the mangle algorithms . for example , whether two polygons intersect is determined by whether the area of their intersection is non - zero . the method for computing the spherical harmonics of a mask consisting of a union of polygons is described in the appendix of . the algorithm is recursive and stable , able to compute harmonics to machine precision to arbitrarily high order , limited only by computer power and patience . the recursion , as implemented in the mangle software , recovers correctly from underflow , which can occur at large harmonic number . while the recursive algorithm by itself is fast , there is a numerical penalty to be paid for allowing the polygons of a mask to have arbitrary shape : the computation time for harmonics up to a maximum harmonic number $\ell_{\max}$ increases as $\ell_{\max}^3$ , a pretty steep penalty when $\ell_{\max}$ is large . the computation time is proportional to the number of edges of the polygons of the mask , and on a 750mhz pentium iii it takes 3 cpu minutes per edge to compute harmonics up to a given $\ell_{\max}$ . thus the method is slow compared to fast algorithms specially designed for regular pixelizations , such as healpix . the most time - consuming part of the computation is rotating the harmonics of an edge from its natural frame of reference into the final frame of reference : it is this rotation that takes the $\ell_{\max}^3$ time . the rotation is unnecessary if the edge is a line of constant latitude in the final reference frame , and the computation goes faster , as $\ell_{\max}^2$ , in this case . another acceleration is possible if two edges are related by a rotation about the polar axis of the final frame . although computing the harmonics of a single edge still takes the full $\ell_{\max}^3$ time , the harmonics of a second edge , rotated right - handedly by some azimuthal angle relative to the first , differ from those of the first edge only by phase factors , which are fast to compute . in practice , the mangle software currently implements the latter acceleration only in the special case where ( some of ) the polygons of a mask are rectangles , polygons bounded by lines of constant azimuth and elevation . the acceleration applies only if at least two rectangles of the mask have the same minimum and maximum elevation . two such rectangles need not be adjacent in the polygon file : mangle reorders the computation of polygons so as to take advantage of acceleration where possible .
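the azimuthal acceleration rests on the standard behaviour of spherical harmonics under rotation about the polar axis . with the sign convention assumed here for illustration ( the extraction above does not preserve the paper 's own equation ) , the harmonic coefficients of an edge rotated right - handedly by an azimuthal angle $\Delta\phi$ are those of the original edge times a phase ,

\[ W^{\rm edge\,2}_{\ell m} = e^{-i m \Delta\phi} \, W^{\rm edge\,1}_{\ell m} , \]

so once the harmonics of one edge are known , those of any edge related to it by an azimuthal rotation cost only one multiplication per coefficient , of order $\ell_{\max}^2$ operations instead of $\ell_{\max}^3$ .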
for completeness , we give here an overview of the method detailed by . the spherical harmonic coefficients $W_{\ell m}$ of a mask $W$ , a function of angular direction $\hat n$ , are defined by $W_{\ell m} = \int Y^*_{\ell m}(\hat n) \, W(\hat n) \, d\Omega$ , where $Y_{\ell m}$ are the usual orthonormal spherical harmonics , and $d\Omega$ denotes an interval of solid angle about the direction $\hat n$ . the key mathematical trick is to convert the integral ( [ olm ] ) for $W_{\ell m}$ from an integral over the solid angle of the mask to an integral over its edges . this is done by introducing the square of the angular momentum operator $L^2$ into the integrand , using $L^2 Y_{\ell m} = \ell ( \ell + 1 ) Y_{\ell m}$ , which is valid except for the monopole harmonic , dealt with below . the hermitian character of the angular momentum operator allows equation ( [ olml ] ) to be rewritten with one factor of the angular momentum operator acting on the mask $W$ rather than on the harmonic . by assumption , the mask is a sum over polygons , and $W$ is a constant within each polygon . it follows that the derivative of $W$ appearing in equation ( [ olmp ] ) is a sum over polygons , with the contribution from each polygon being a vector whose magnitude is proportional to the polygon weight times a dirac delta - function on the boundary , and whose direction is along the boundary of the polygon , winding right - handedly about the polygon . thus the integral ( [ olmp ] ) reduces to a sum of integrals over the boundaries of the polygons . the boundary of a polygon is a set of edges , so the integral in equation ( [ olmo ] ) becomes a sum of integrals over each edge of each polygon . thus a harmonic is a sum of contributions from each edge of each polygon , and hence the analytic problem reduces to that of determining the harmonics of the edge of a polygon . the problem is well suited to computation , and could easily be parallelized if required ( currently , mangle is not parallelized ) . stable recursive formulae for computing the harmonics of a polygon edge are given by . first , the harmonics of an edge are computed in a special frame of reference where the axis of the edge cap is along the polar direction ( the $z$ direction ) of the spherical harmonics . the harmonics of the edge are then rotated into the actual frame of reference . the most time consuming part of the computation is the second part , the rotation , with the computational time going as $\ell_{\max}^3$ . the above derivation of the harmonics of the mask fails for the monopole harmonic , for which equation ( [ olml ] ) is invalid . the monopole harmonic is $W_{00} = A / \sqrt{4\pi}$ , where $A$ is the weighted area of the mask , a weighted sum of the areas of the polygons . the general formula for the area of a polygon follows from the gauss - bonnet theorem , $A = 2\pi\chi - \sum_{\rm edges} s_e \cot\theta_e - \sum_{\rm vertices} \psi_v$ , where $\chi$ , an integer , is the euler characteristic ( = faces minus edges plus vertices of any triangulation ) of the polygon , $s_e$ and $\theta_e$ are the lengths and polar angles of the edges of the polygon , and $\psi_v$ are the exterior angles ( $\pi$ minus the interior angle ) at the vertices of the polygon . the euler characteristic of a polygon , a topological quantity , is calculable topologically ( it equals two plus the number of connected boundaries minus twice the number of groups to which the boundaries belong ) , but in practice it is quicker to compute the euler characteristic as follows . first , in the trivial case that the polygon is the whole sky , its area is $4\pi$ . second , in the special case that the polygon is a single cap with an edge of polar angle $\theta$ , its area is $2\pi ( 1 - \cos\theta )$ . otherwise , the polygon is an intersection of two or more caps . if the area of any one of the caps is less than or equal to $2\pi$ , then the area of the polygon must be less than or equal to $2\pi$ , so the euler characteristic must take that integral value which makes the area lie in the interval $[ 0 , 2\pi ]$ . this leaves the case where every one of the caps of the polygon has area greater than $2\pi$ . the policy in this case is to introduce an extra cap which splits the polygon into two parts , each of
whose areas is less than or equal to $2\pi$ ; the area of the polygon is then the sum of the areas of the two parts . in practice , polygons which are intersections of caps all of whose areas exceed $2\pi$ are fairly uncommon , so the slow down involved in splitting these particular polygons is not great . moreover the splitting is necessary only for the monopole harmonic : the higher order harmonics can be computed from the original polygon without splitting it . the mangle software contains a ` map ' utility to reconstruct the mask at arbitrary points from spherical harmonics up to a given maximum harmonic number ; for example , the bottom right panel of figure [ twoqzfig ] was generated using this utility . another important feature provided in the suite of mangle software is a utility to compute precisely the data - random angular cross - correlation , dr , at given angular separation between given ` data ' points and ` random ' points in the mask . the angular integral is done analytically and evaluated to machine precision , rather than by monte carlo integration with random points . if the ` data ' points are chosen randomly within the mask , as in [ random ] , then the cross - correlation becomes equivalent to the random - random angular autocorrelation , rr , at given angular separation between pairs of points in the mask . the advantage of computing the angular integral analytically over the traditional monte carlo method is that it eliminates unnecessary shot noise . as discussed for example by , this unnecessary shot noise can adversely affect the performance of estimators of the correlation function at small scales . in the case of the pscz high latitude mask , which balkanizes into 744 polygons , it takes 5 cpu minutes per 1000 ` data ' points to compute the cross - correlation at 1000 angular separations with a 1.2ghz pentium iii . the contribution of a data point at a given position to the angular correlation at angular separation $\theta$ is a weighted sum over the polygons of the mask of the azimuthal angle subtended within the polygon by a circle of angular radius $\theta$ centred at the data point . the correlation at angular separation $\theta$ is an average over the contributions from each data point . the computation is done at a specified angular separation or set of separations , not , as is common with monte carlo integration , over a bin of angular separations . for a finite number of data points , and a mask consisting , as in the present paper , of a union of weighted polygons , the correlation is a continuous function of angular separation , except that , if a data point happens to coincide with the axis of an edge of a polygon , then the function will be discontinuous at a separation equal to the polar angle of the said edge . furthermore , the correlation is a differentiable function of separation except at a finite , possibly large , number of points where $\theta$ equals the separation between a data point and either a vertex of a polygon or a point on an edge of the polygon where the separation from the data point is extremal . although , as a function of separation , the correlation is thus typically discontinuous in the derivative , and occasionally discontinuous in itself , in practical galaxy surveys it tends to be relatively smooth , especially when the number of data points is large . thus in the practical case it is usually fine to sample the correlation at a suitably large number of angular separations , and to interpolate ( linearly ) on such a table . the dr utility loops in turn through each data point , so that two points take twice as much cpu time as one point .
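schematically , and up to the overall normalisation convention adopted by mangle ( which the extraction above does not preserve ) , the per - point construction just described can be written as

\[ \langle {\rm dr} \rangle ( \theta ) \;\propto\; \frac{1}{N} \sum_{i=1}^{N} \sum_{k} w_k \, \phi_{ik} ( \theta ) , \]

where the sum over $k$ runs over the polygons of the mask with weights $w_k$ , $\phi_{ik} ( \theta )$ is the azimuthal angle subtended within polygon $k$ by the circle of angular radius $\theta$ centred on data point $i$ , and $N$ is the number of data points .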
for each data point , the dr utility attempts to accelerate the computation with respect to the angular separations by first computing the minimum and maximum angles between the point and each polygon in the mask . the information about the minimum and maximum angles is used to decide whether the circle about the data point lies entirely outside or entirely inside a polygon , in which case the ( unweighted ) angle subtended within the polygon is zero or $2\pi$ . in practical cases the angle subtended is often zero for the great majority of polygons of a mask , especially when the mask is composed of many polygons . since calculation of the subtended angle can be skipped if the angle is zero , computation can be greatly speeded up . further acceleration comes from ordering the polygons in increasing order of the minimum angle from the given point to each polygon . this allows the computation to move on to the next value of $\theta$ as soon as it reaches a polygon whose minimum angle from the point exceeds $\theta$ , so that it and all subsequent polygons subtend zero angle , rather than checking through large numbers of polygons that all have zero subtended angle . the angular auto - correlation between pairs of points in a mask can be computed in various ways . the traditional method , as suggested by the random - random designation , is to count pairs of random points . however , the traditional method is certainly not the most precise method , and it may not be the most efficient , especially at small scales , if large numbers of random points are needed to reduce the shot noise to a subdominant level . an alternative method , mentioned above , is to compute the random - random correlation using the dr algorithm with the ` data ' points chosen randomly within the mask . although this method is subject to some shot noise , the shot noise is liable to be substantially less than that of the traditional method at small scales . at the largest angular separations , the method of choice is to compute the random - random correlation from its spherical harmonic expansion , truncated at some suitably large harmonic $\ell_{\max}$ , as a sum over multipoles of the harmonic power of the mask times legendre polynomials $P_\ell ( \cos\theta )$ , which is accurate at large angular scales . the exact expression for the autocorrelation at angular separation $\theta$ is a double integral over solid angles ; for a mask of polygons as considered in this paper , this double integral can be transformed into a double integral over the edges of the polygons , involving the weighted area of the mask , equation ( [ a ] ) , and the green 's function of the scalar product of angular momentum operators , which is a piecewise function , equal to zero for $x_{12} \leq x$ . unfortunately the integral ( [ rrexact ] ) can not be solved analytically , and we have not attempted to implement its numerical solution in the mangle software . gives a series expansion of the expression ( [ rrexact ] ) valid at small scales , but the series expansion is liable to break down already at tiny scales in the typically complex masks of modern surveys ( the coefficients of the series expansion change discontinuously wherever the angular separation equals the distance between distinct vertices , or more generally any extremal distance between pairs of edges ) , making the series expansion of limited applicability . the mangle software contains a utility for generating random points inside a mask . as indicated above , this can be useful for example in computing the angular auto - correlation of pairs of points in the mask . the algorithm , which is quite fast , is as follows : 1 .
select randomly a polygon in the mask , with probability proportional to the product of the polygon s weight and area . 2 . lasso that polygon with a circle that is intended to be a tight fit , but is not necessarily minimal . 3 . generate a point randomly within the circle , test whether the point lies inside the polygon , and keep the point if it does . 4 . iterate . a lasso is computed for a polygon as needed , but is then recorded , so that a lasso is computed only once for any polygon . if the desired number of random points exceeds the number of polygons in the mask , then the computation starts by lassoing every polygon in the mask . the mangle software contains several other utilities , described below . one simple but oft - used utility is one that copies a polygon file or files into another polygon file in a different format ; see [ format ] for an abbreviated description of the possible formats . for example , most of the figures in this paper were produced from points generated by copying polygon files into graphics format . the utility for copying polygon files has some switches to copy polygons with weights only in some interval , or with areas only in some interval . this makes it easy for example to discard polygons with small weights or small areas . the mangle software contains routines that return the vertices of the polygons of a mask , and that return the positions of points along the edges of a mask . for example , when a polygon file is copied into graphics format , the copy utility invokes these routines . a single polygon can have more than one connected boundary , as illustrated in figures [ discpolfig][fractalfig ] and [ chainfig][wfig ] . here the routines return the vertices and edge points on distinct boundaries as distinct sets . the routines to determine the distinct connected boundaries of a polygon are used in stage 2 of the balkanization process , [ balkanize ] . the mangle software contains a routine to find a point inside each polygon of a mask . the aim is to find a point that is squarely inside the polygon , well away from its edges . for example , the weights attached to the balkanized set of polygons shown in the lower left panel of figure [ twoqzfig ] were obtained by picking a point inside each polygon , and evaluating the weight at that point from the pixelized map provided by the 2qz team . the pixelization means that the map is reliable only away from the edges of a polygon , so it is important to pick the point squarely inside the polygon . the algorithm finds one point for each distinct connected boundary of a polygon . if the polygon contains non - simply - connected parts , as in figures [ eyefig][fractalfig ] , that means that the algorithm will return more points than there are distinct connected parts of the polygon ; however , having more points than necessary is not a problem . the algorithm to find a point inside a polygon is mildly paranoid . for each connected boundary of a polygon , the algorithm first determines the barycentre of the midpoints of the edges of the connected boundary . this barycentre can not be guaranteed to lie inside the polygon , so it is not enough to stop here . instead , great circles are drawn from the barycentre to each of the midpoints of the edges of the connected boundary .
on each of these great circles , the midpoint of the segment of the great circle lying within the polygon , with one end of the segment being the midpoint of an edge of the connected boundary , is determined . each of these segment midpoints inside the polygon is tested to see how far away it is from the nearest vertex or edge of the polygon , including edges and vertices other than on the connected boundary . the desired point inside the polygon is chosen to be that segment midpoint which is furthest away from any vertex or edge . the mangle software contains a utility to find inside which polygon or polygons of a polygon file a given point lies . a point may lie inside zero polygons , or one polygon , or more than one polygon . the utility takes no short cuts : it tests all points against all polygons . this can take time if there are large numbers of points and large numbers of polygons . if the polygon file has been produced by balkanization , then a point would normally lie inside at most one polygon . however , a point at the edge of two abutting polygons is considered to lie inside both polygons . the observing strategies of modern galaxy surveys typically produce angular masks with complex boundaries and variable completeness . the purpose of this paper has been to set forward a scheme that is able to deal accurately and efficiently with such angular masks , and thereby to reduce both the labour and the chance for inadvertent error . the fundamental idea is to resolve a mask into a union of non - overlapping polygons each of whose edges is part of a circle ( not necessarily a great circle ) on the sphere . the scheme has been implemented in a suite of software , mangle , downloadable from http://casa.colorado.edu/~ajsh/mangle/ . the mangle software includes several utilities for accomplishing common tasks associated with angular masks of galaxy surveys . this includes generating random catalogues reflecting the angular selection function ( a tool employed in almost all galaxy survey analysis ) , measuring the data - random and random - random angular integrals ( needed for estimating the correlation function ) , and expanding the mask in spherical harmonics ( a key step in various techniques for measuring the power spectrum and redshift space distortions ) . the scheme was originally motivated by the nature of real angular masks of real galaxy surveys , and the underlying angular routines have been battle - tested over many years . the full apparatus of the mangle software has been used on the 2df survey by , and on the sdss survey by . ajsh was supported by nasa atp award nag5 - 10763 and by nsf grant ast-0205981 . mt was supported by nsf grants ast-0071213 & ast-0134999 , nasa grants nag5 - 9194 & nag5 - 11099 , and fellowships from the david and lucile packard foundation and the cottrell foundation . we thank yongzhong xu for his contributions to an early version of the balkanization software , and michael blanton for suggestions .
colless m. et al . ( the 2dfgrs team ; 29 authors ) , 2001 , mnras , 328 , 1039 ( astro - ph/0106498 )
croom s. m. , smith r. j. , boyle b. j. , shanks t. , loaring n. s. , miller l. , lewis i. j. , 2000 , mnras , 322 , l29 ( 2qz , available at http://www.2dfquasar.org/spec_cat/ )
da costa l. n. , pellegrini p. s. , davis m. , meiksin a. , sargent w. l. w. , tonry j. l. , 1991 , apjs , 75 , 935
gorski k. m. , wandelt b. d. , hansen f. k. , hivon e. , banday a. j. , 1999 , astro - ph/9905275
hamilton a. j. s. , 1993a , apj , 406 , l47
hamilton a. j. s. , 1993b , apj , 417 , 19
huchra j. , davis m. , latham d. , tonry j. , 1983 , apjs , 52 , 89
kerscher m. , szapudi i. , szalay a. , 2000 , apj , 535 , l13 ( astro - ph/9912088 )
lewis i. j. et al . ( 2df team ; 23 authors ) , 2002 , mnras , 333 , 279 ( astro - ph/0202175 )
loveday j. , peterson b. a. , maddox s. j. , efstathiou g. , 1996 , apjs , 107 , 201
maddox s. j. , efstathiou g. , sutherland w. j. , 1990b , mnras , 246 , 433
maddox s. j. , efstathiou g. , sutherland w. j. , 1996 , mnras , 283 , 1227
maddox s. j. , sutherland w. , efstathiou g. , loveday j. , 1990a , mnras , 243 , 692
saunders w. , sutherland w. j. , maddox s. j. , keeble o. , oliver s. j. , rowan - robinson m. , mcmahon r. g. , efstathiou g. p. , tadros h. , white s. d. m. , frenk c. s. , carraminana a. , hawkins m. r. s. , 2000 , mnras , 317 , 55 ( pscz , available at http://www-astro.physics.ox.ac.uk/~wjs/pscz.html )
strauss m. a. , huchra j. p. , davis m. , yahil a. , fisher k. b. , tonry j. , 1992 , apjs , 83 , 29
tegmark m. , hamilton a. j. s. , xu y. , 2002 , mnras , 335 , 887 ( astro - ph/0111575 )
tegmark m. et al . ( sdss collaboration ) , 2003 , apj , to be submitted
york d. g. et al . ( sdss collaboration , 144 authors ) , 2000 , aj , 120 , 1579
this appendix proves the following theorem , invoked in [ balkanize2 ] ( see table [ deftable ] for a definition of the term * group * ) : a polygon contains a region that is connected but not simply - connected if and only if its boundaries belong to two or more distinct groups . first , suppose that a polygon contains a region which is connected but not simply - connected . it is required to prove that the boundaries of this region belong to at least two distinct groups . by definition of simply - connectedness , a continuous line can be drawn entirely inside the non - simply - connected region such that one connected boundary of the region lies entirely on one side of the line , and another connected boundary of the region lies entirely on the other side of the line . in the polygon of figure [ eyefig ] , for example , such a line would circulate around the central boundary while remaining inside the outer boundary . the continuous line can not intersect any of the circles forming the caps of the polygon , because if the line did intersect a circle , then the line would be inside the polygon on one side of the intersection , and outside the polygon on the other side of the intersection , contradicting the assumption that the line lies entirely inside the polygon . hence the continuous line must partition the circles of the polygon into two non - intersecting groups . the two connected boundaries on either side of the continuous line must therefore belong to two distinct groups , as was to be demonstrated . conversely , suppose that a polygon has boundaries that belong to at least two distinct groups . it is required to prove that the two groups delineate a connected but non - simply - connected region of the polygon . consider the polygon , call it a , formed by the intersection of all the caps of one group . the polygon a so formed must consist of one or more simply - connected parts ; for if any connected part of a were not simply - connected , then according to the previous paragraph the boundaries of that part would belong to different groups , contradicting the assumption that the caps of a all belong to a single group . now consider similarly the polygon , call it b , formed by the intersection of all the caps of a second group . the boundaries of polygon b must lie entirely inside one and only one of the simply - connected parts of a.
for certainly b must lie inside at least one part of polygon a , since a must enclose the parent polygon , and the boundary of b lies inside ( at the border of ) the parent polygon . but the boundary of polygon b can not lie in more than one part of a , because if it did then , since the circles of b are all in the same group and therefore connected to each other , there would be a path lying along the circles of b traversing continuously from one part of a to another , and therefore necessarily intersecting one of the boundaries of polygon a , contradicting the assumption that the circles of a and b belong to distinct groups that nowhere intersect . similarly , the boundaries of polygon a must lie entirely inside one and only one of the simply - connected parts of b. this argument has identified two special boundaries : a boundary of a that entirely encloses b ; and a boundary of b that entirely encloses a. the region enclosed between these two special boundaries is a region of the polygon that is connected but not simply - connected . for consider a continuous line which is displaced slightly off the special boundary of a , towards the special boundary of b. by construction , the continuous line lies entirely inside both polygons a and b. the continuous line can not intersect any of the circles of a or b , because if it did then the line would lie inside a ( or b ) on one side of the intersection , and outside a ( or b ) on the other side , contradicting the fact that the line lies entirely inside a and b. the continuous line could possibly intersect circles belonging to a third group of circles of the parent polygon . however , if the continuous line is displaced off the special boundary of a by a sufficiently small amount that it does not encounter any third group , then the continuous line will lie entirely inside the parent polygon . this continuous line forms a loop inside the polygon that can not be shrunk continuously to a point , so the polygon must contain a region that is connected but not simply - connected , as was to be demonstrated .
this paper presents a scheme to deal accurately and efficiently with complex angular masks , such as occur typically in galaxy surveys . an angular mask is taken to be an arbitrary union of arbitrarily weighted angular regions bounded by arbitrary numbers of edges . the restrictions on the mask are ( i ) that each edge must be part of some circle on the sphere ( but not necessarily a great circle ) , and ( ii ) that the weight within each subregion of the mask must be constant . the scheme works by resolving a mask into disjoint polygons , convex angular regions bounded by arbitrary numbers of edges . the polygons may be regarded as the ` pixels ' of a mask , with the feature that the pixels are allowed to take a rather general shape , rather than following some predefined regular pattern . among other things , the scheme includes facilities to compute the spherical harmonics of the angular mask , and data - random and random - random angular integrals . a software package mangle which implements this scheme , along with complete software documentation , is available at http://casa.colorado.edu/~ajsh / mangle/. large - scale structure of universe methods : data analysis